
How Powerful are Graph Neural Networks?


Preliminaries

GNN

Note: in the table below, the formula in the GCN row folds the AGGREGATE and COMBINE steps into a single expression; a code sketch of the two-step scheme follows the table.

|  | AGGREGATE | COMBINE |
| --- | --- | --- |
| GraphSAGE | \(a_v^{(k)}=\mathrm{MAX}(\{\mathrm{ReLU}(W\cdot h_u^{(k-1)}),\forall u\in N(v)\})\) | \(W\cdot[h_v^{(k-1)},a_v^{(k)}]\) |
| GCN (2017, 4th generation) | \(h_v^{(k)}=\mathrm{ReLU}(W\cdot\mathrm{MEAN}\{h_u^{(k-1)},\forall u\in N(v)\cup\{v\}\})\) | — (merged into the AGGREGATE formula) |
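As a concrete reading of the table, here is a minimal NumPy sketch of the two-step AGGREGATE/COMBINE scheme for GraphSAGE. The names (`h`, `neighbors`, `W_agg`, `W_comb`) are illustrative, not from either paper's code.

```python
import numpy as np

def graphsage_layer(h, neighbors, W_agg, W_comb):
    """One GraphSAGE-style layer: max-pooling AGGREGATE, then concat COMBINE.

    h:         (N, M) matrix of node features h^{(k-1)}
    neighbors: neighbors[v] is the list of indices in N(v) (assumed non-empty)
    W_agg:     (M', M) aggregator weights; W_comb: (M_out, M + M') combine weights
    """
    out = []
    for v in range(h.shape[0]):
        # AGGREGATE: element-wise max over ReLU(W_agg · h_u) for u in N(v)
        msgs = np.maximum(h[neighbors[v]] @ W_agg.T, 0.0)
        a_v = msgs.max(axis=0)
        # COMBINE: linear map of the concatenation [h_v, a_v]
        out.append(W_comb @ np.concatenate([h[v], a_v]))
    return np.stack(out)
```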
Why can the GCN formula be interpreted as taking an average?

References (Chinese blog posts): 1. 《GraphSAGE:我寻思GCN也没我牛逼》; 2. 《一文读懂图卷积GCN》

\[H^{(l+1)}=\sigma(\widetilde{D}^{-\frac{1}{2}}\widetilde{A}\widetilde{D}^{-\frac{1}{2}}H^{(l)}W^{(l)})=\sigma(\hat{A}H^{(l)}W^{(l)})\]
  • \(\hat{A}\): the normalized adjacency matrix, where \(\widetilde{A}=A+I_N\) adds self-loops and \(\widetilde{D}\) is the degree matrix of \(\widetilde{A}\).
  • The meaning of \(\hat{A}\cdot{H}\): entry \(\hat{A}_{vu}=1/\sqrt{\tilde{d}_v\tilde{d}_u}\) whenever \(u\in N(v)\cup\{v\}\), so row \(v\) of \(\hat{A}H\) is a degree-normalized weighted sum of \(h_v\) and its neighbors' features. This is why GCN can be read as averaging over \(N(v)\cup\{v\}\); see the sketch after this list.

    (Figure: an illustration of GCN aggregation.)
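A quick NumPy check of this reading: build \(\hat{A}\) from a toy adjacency matrix and observe that each row of \(\hat{A}H\) mixes a node's own feature with its neighbors', weighted by \(1/\sqrt{\tilde{d}_v\tilde{d}_u}\). Variable names here are illustrative.

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)   # toy undirected graph
H = np.array([[1., 0.],
              [0., 1.],
              [2., 2.]])                 # node feature matrix H^{(l)}

A_tilde = A + np.eye(3)                       # add self-loops: A~ = A + I
d = A_tilde.sum(axis=1)                       # degrees of A~
D_inv_sqrt = np.diag(d ** -0.5)
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt     # symmetric normalization

# Row v of A_hat @ H is a weighted average of h_v and its neighbors' features.
print(A_hat @ H)
```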


SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels


SplineCNN

Notation

Input graphs. A directed graph \(G=(V,E,U)\), where \(V=\{1,\dots,N\}\) is the set of nodes, \(E\subseteq V\times V\) is the set of edges, and \(U\in[0,1]^{N\times N\times d}\) holds, for each directed edge, a \(d\)-dimensional pseudo-coordinate \(u(i,j)\in[0,1]^d\); for a node \(i\in V\), its neighborhood set is denoted \(N(i)\).

Note: the definition of the pseudo-coordinates U is the key point here, and is similar to the MoNet paper.

U can be interpreted as an adjacency matrix with d-dimensional, normalized entries u(i, j) if (i, j) ∈ E and 0 otherwise
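To make the definition concrete, here is one plausible way to build such pseudo-coordinates for a graph embedded in \(d\)-dimensional space (e.g., relative positions on a mesh, as in the MoNet setting). The helper below is hypothetical, not code from the paper, and the rescaling scheme is one choice among several.

```python
import numpy as np

def pseudo_coordinates(pos, edges):
    """Pseudo-coordinates u(i, j) in [0, 1]^d for each directed edge (i, j).

    pos:   (N, d) node positions in d-dimensional space
    edges: (E, 2) integer array of directed edges (i, j)
    Here u(i, j) is the offset pos[j] - pos[i], rescaled per dimension
    into [0, 1] (assumes each dimension has non-zero spread).
    """
    offsets = pos[edges[:, 1]] - pos[edges[:, 0]]   # (E, d) raw offsets
    lo, hi = offsets.min(axis=0), offsets.max(axis=0)
    return (offsets - lo) / (hi - lo)
```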

Input node features. A mapping \(f:V\rightarrow R^{M_{in}}\) is defined, where \(f(i)\in R^{M_{in}}\) is the \(M_{in}\)-dimensional input feature vector of node \(i\in V\). For \(1\leq l\leq M_{in}\), the set \(\{f_l(i)\mid i\in V\}\) of the \(l\)-th components is called an input feature map.

B-spline basis function. \(((N^m_{1,i})_{1\leq i\leq k_1},\dots,(N^m_{d,i})_{1\leq i\leq k_d})\) denotes \(d\) sets of open B-spline basis functions of degree \(m\), with equidistant knot vectors (i.e., the splines are uniform), where \(k=(k_1,\dots,k_d)\) defines the kernel size in each of the \(d\) dimensions.


B-Spline Basis Functions

B-spline basis functions are defined by the de Boor–Cox recursion, starting from the simplest, piecewise-constant case:

\[N_{i,0}(t) = \left\{\begin{matrix} 1 & t_i \leq t < t_{i+1}\\ 0 & \text{otherwise} \end{matrix}\right.\\ N_{i,k}(t) = \frac{t - t_i}{t_{i+k} - t_{i}}N_{i,k-1}(t) + \frac{t_{i+k+1} - t}{t_{i+k+1} - t_{i+1}} N_{i+1,k-1}(t)\]

where the \(t_j\) form the knot vector, a non-decreasing sequence of real numbers:

\[t_0,t_1,\dots,t_{k-1},t_k,\dots,t_n,t_{n+1},\dots,t_{n+k},t_{n+k+1}.\]

To deepen understanding: on which interval is each basis function non-zero, and how many knots are there? From the recursion, \(N_{i,k}\) is non-zero only on \([t_i,t_{i+k+1})\), so the \(n+1\) degree-\(k\) basis functions \(N_{0,k},\dots,N_{n,k}\) require \(n+k+2\) knots \(t_0,\dots,t_{n+k+1}\); the sketch below checks the support numerically.
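The recursion maps directly to code. Below is a minimal pure-Python sketch (function and variable names are mine); it also makes the support question concrete: the value is zero whenever the argument falls outside \([t_i,t_{i+k+1})\).

```python
def bspline_basis(i, k, knots, x):
    """de Boor-Cox recursion: value of the basis N_{i,k} at x."""
    if k == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    # Convention: a term whose knot span has zero width contributes 0.
    left = 0.0 if knots[i + k] == knots[i] else \
        (x - knots[i]) / (knots[i + k] - knots[i]) * bspline_basis(i, k - 1, knots, x)
    right = 0.0 if knots[i + k + 1] == knots[i + 1] else \
        (knots[i + k + 1] - x) / (knots[i + k + 1] - knots[i + 1]) * bspline_basis(i + 1, k - 1, knots, x)
    return left + right

# Uniform (equidistant) knots, as SplineCNN assumes:
t = [0, 1, 2, 3, 4, 5]
print(bspline_basis(1, 2, t, 2.5))   # 0.75: 2.5 lies in the support [t_1, t_4) = [1, 4)
print(bspline_basis(1, 2, t, 4.5))   # 0.0:  4.5 lies outside the support of N_{1,2}
```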
