Paper reading notes: [WSDM 2021] Bipartite Graph Embedding via Mutual Information Maximization

Summary

An unsupervised bipartite-graph embedding method trained with a GAN-style objective. During aggregation, two-hop neighbors are first aggregated into the one-hop neighbors and only then into the target node itself, which sidesteps the problem of mixing the two different node types.

Bipartite graph embedding approaches

  • Random-walk-based methods
  • Reconstruction-based methods, including collaborative filtering and feature aggregation

Focus of this paper

Previous algorithms process information only within a fairly local neighborhood. The model proposed here captures global properties of the graph, including community structure among homogeneous vertices and long-range dependencies between heterogeneous vertices, expressed in the form of vertex embeddings.

Mutual information maximization

Definition:

$$I(X;Y)=D_{KL}\big(p(x,y)\,\|\,p(x)p(y)\big)=\mathbb{E}_{p(x,y)}\left[\log\frac{p(x,y)}{p(x)p(y)}\right]$$

Bipartite graph encoder

In a bipartite graph, aggregating directly from one-hop neighbors is problematic, since those neighbors belong to the other node type. The encoder therefore first aggregates two-hop neighbors into the one-hop neighbors, then aggregates those into the target node:

$$\hat{v}^k_j = \delta\big(\hat{W}^k_v \cdot \mathrm{MEAN}(\{u^{k-1}_i : u_i \in \mathcal{N}(v_j)\})\big)$$

$$\bar{u}^k_i = \delta\big(\bar{W}^k_u \cdot \mathrm{MEAN}(\{\hat{v}^k_j : v_j \in \mathcal{N}(u_i)\})\big)$$

$$u^k_i = W^k_u \cdot [\bar{u}^k_i \,|\, u^{k-1}_i]$$
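The two-phase aggregation for the U side can be sketched in numpy as follows. This is a minimal illustration, not the paper's released code; all function and variable names are my own, `tanh` stands in for the activation δ, and a single shared weight matrix is reused for brevity.

```python
import numpy as np

# One encoder layer for the U side: two-hop (U) features are first
# mean-pooled into each one-hop V neighbor, then into the target u.
def two_phase_aggregate(U_feat, adj, W_hat_v, W_bar_u, W_u, act=np.tanh):
    deg_v = adj.sum(axis=0, keepdims=True).T.clip(min=1)   # |N(v_j)|, shape (|V|, 1)
    v_hat = act((adj.T @ U_feat / deg_v) @ W_hat_v.T)      # mean over N(v_j)
    deg_u = adj.sum(axis=1, keepdims=True).clip(min=1)     # |N(u_i)|, shape (|U|, 1)
    u_bar = act((adj @ v_hat / deg_u) @ W_bar_u.T)         # mean over N(u_i)
    return np.concatenate([u_bar, U_feat], axis=1) @ W_u.T # [u_bar | u^{k-1}]

# toy example: 3 U-nodes, 2 V-nodes, feature dim 4
rng = np.random.default_rng(0)
adj = np.array([[1, 0], [1, 1], [0, 1]], dtype=float)      # |U| x |V| biadjacency
U_feat = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 4))
out = two_phase_aggregate(U_feat, adj, W, W, rng.normal(size=(4, 8)))
print(out.shape)  # (3, 4)
```

The symmetric layer for the V side would swap the roles of `adj` and `adj.T`.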

Global representation

$$p_u=\mathrm{MEAN}(\{u_i : u_i \in U\}),\quad p_v=\mathrm{MEAN}(\{v_i : v_i \in V\})$$

$$g = \mathrm{COM}(p_u,p_v)=[\sigma(p_u)\,|\,\sigma(p_v)]$$
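As a small sketch of the summary above (names are illustrative): mean-pool each side's embeddings, squash with a sigmoid, and concatenate the two halves.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def global_summary(U_emb, V_emb):
    p_u = U_emb.mean(axis=0)                              # p_u = MEAN over U
    p_v = V_emb.mean(axis=0)                              # p_v = MEAN over V
    return np.concatenate([sigmoid(p_u), sigmoid(p_v)])   # [σ(p_u) | σ(p_v)]

g = global_summary(np.zeros((5, 4)), np.ones((3, 4)))
print(g.shape)  # (8,)
```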

Local representation

h-hop enclosing subgraph

For the two endpoints u and v of an edge:

$$G^h(u)=\{v_i \mid dis(v_i,u)\leq h\},\quad G^h(v)=\{u_i \mid dis(u_i,v)\leq h\}$$

Attention weights over the subgraph can then be computed:

$$\alpha_{u,i}=\frac{\exp\{(W_a v_i)^T (W'_a u)\}}{\sum_{v_j \in G^h(u)} \exp\{(W_a v_j)^T (W'_a u)\}},\quad v_i \in G^h(u)$$

$$g^h_{(u,v)}=\Big[\sigma\Big(\sum_{v_i \in G^h(u)} \alpha_{u,i}\,v_i + u\Big) \,\Big|\, \sigma\Big(\sum_{u_i \in G^h(v)} \alpha_{v,i}\,u_i + v\Big)\Big]$$
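One half of the local representation can be sketched as below (a minimal illustration with made-up names; `W_a2` plays the role of $W'_a$, and the other half of $g^h_{(u,v)}$ is computed symmetrically for v):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Attention-weighted sum of u's h-hop V-side neighbors, added to u
# itself and squashed: one half of g^h_{(u,v)}.
def local_half(u, neigh_V, W_a, W_a2):
    scores = (neigh_V @ W_a.T) @ (W_a2 @ u)   # (W_a v_i)^T (W_a' u)
    alpha = softmax(scores)                   # α_{u,i}
    return sigmoid(alpha @ neigh_V + u)

rng = np.random.default_rng(1)
u = rng.normal(size=4)
neigh = rng.normal(size=(6, 4))               # v_i ∈ G^h(u)
half = local_half(u, neigh, rng.normal(size=(4, 4)), rng.normal(size=(4, 4)))
print(half.shape)  # (4,)
```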

Infomax objective

The basic idea follows a GAN-like setup: a real graph and a corrupted (fake) graph, with the model learning to tell them apart. The corrupted graph is built by flipping part of the edges:

$$S_{i,j}\sim \mathrm{Bernoulli}(\beta)$$

$$\widetilde{G}=(U,V,\widetilde{E}) = C(G,\beta),\quad \widetilde{A}=A \oplus S$$

where $\beta$ is the corruption ratio and $\oplus$ is element-wise XOR, so each entry of the biadjacency matrix is flipped with probability $\beta$.
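The corruption function is straightforward to sketch (names are my own):

```python
import numpy as np

# Corruption function C(G, β): flip each entry of the biadjacency matrix
# independently with probability β (A XOR S, with S_{i,j} ~ Bernoulli(β)).
def corrupt(adj, beta, rng):
    S = rng.random(adj.shape) < beta
    return np.logical_xor(adj.astype(bool), S).astype(adj.dtype)

rng = np.random.default_rng(0)
A = np.eye(4)
print(np.array_equal(corrupt(A, 0.0, rng), A))      # True: β = 0 flips nothing
print(np.array_equal(corrupt(A, 1.0, rng), 1 - A))  # True: β = 1 flips everything
```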
$$\mathcal{L}_m = -\frac{1}{|E| + |\widetilde{E}|}\left(\sum_{i=1}^{|E|}\mathbb{E}_G\big[\log\mathcal{D}(g^h_{(u,v)_i},g)\big]+\sum_{i=1}^{|\widetilde{E}|}\mathbb{E}_{\widetilde{G}}\big[\log\big(1-\mathcal{D}(\widetilde{g}^h_{(u,v)_i},g)\big)\big]\right)$$

$$\mathcal{D}(g^h_{(u,v)_i},g)=\sigma\big((g^h_{(u,v)_i})^T W_b\, g\big)$$
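Putting the discriminator and the loss together (a sketch with illustrative names; batching and sampling details are assumed):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Bilinear discriminator D(g_local, g) = σ(g_local^T W_b g).
def discriminator(g_local, W_b, g):
    return sigmoid(g_local @ W_b @ g)

# Binary cross-entropy form of L_m: real edge subgraphs vs corrupted ones.
def infomax_loss(pos_local, neg_local, W_b, g, eps=1e-9):
    pos = discriminator(pos_local, W_b, g)   # from the real graph G
    neg = discriminator(neg_local, W_b, g)   # from the corrupted graph G~
    n = len(pos) + len(neg)
    return -(np.log(pos + eps).sum() + np.log(1.0 - neg + eps).sum()) / n

rng = np.random.default_rng(2)
g = rng.normal(size=8)
W_b = rng.normal(size=(8, 8))
loss = infomax_loss(rng.normal(size=(5, 8)), rng.normal(size=(5, 8)), W_b, g)
print(loss > 0)  # True: a negated mean of log-probabilities
```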

Training

$$\mathcal{L} = \lambda\mathcal{L}_m+(1-\lambda)\mathcal{L}_r$$

$$\mathcal{L}_r=\sum_{(u,v)\in E}\ \sum_{(u',v')\in E'_{(u,v)}}\big[\gamma + \phi([u'|v'])-\phi([u|v])\big]_+$$

where $\phi$ is a two-layer fully connected network that produces a ranking score, $[x]_+$ denotes the positive part, i.e. $\max(x,0)$, and $\gamma$ is the margin.

$$E'_{(u,v)}=\{(u',v)\mid u'\in U\} \cup \{(u,v')\mid v'\in V\}$$
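The hinge ranking term for one true edge against its corrupted candidates can be sketched as (names are illustrative; `phi_pos`/`phi_neg` stand for already-computed scores $\phi([u|v])$ and $\phi([u'|v'])$):

```python
import numpy as np

# [γ + φ(neg) - φ(pos)]_+ summed over the corrupted candidates of one edge.
def margin_loss(phi_pos, phi_neg, gamma=1.0):
    return np.maximum(gamma + phi_neg - phi_pos, 0.0).sum()

# a true edge scored 2.0 against three corrupted edges
print(margin_loss(2.0, np.array([0.5, 1.5, 2.5])))  # 2.0 = 0 + 0.5 + 1.5
```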

The whole model is trained end-to-end.


Datasets

All are recommendation-style datasets: MovieLens, Wikipedia, and DBLP. DBLP and MovieLens are evaluated via top-k recommendation; Wikipedia via link prediction.

Experiments

*(result tables and figures from the original paper)*


Worth borrowing

  • Using adversarial (GAN-style) training for unsupervised learning, which also works on datasets with very few labels

Drawbacks

  • When the graph is large, mean-pooling all vertex features into the graph-level feature may be too crude, to the point that the model cannot distinguish the real graph from the corrupted one


Original paper

https://dl.acm.org/doi/abs/10.1145/3437963.3441783
