Graph Neural Networks: Summary + Competition

Reflections on the 7-Day Graph Neural Network Camp

Course Summary

I am a student working on computer vision. During an internship I worked on a 3D vision project that involved 3D object detection and segmentation, which is why I signed up for Baidu's graph neural network course: graph neural networks are quite effective for 3D point cloud segmentation. Below is a summary of what I learned.


Reflections on the Competition

In the competition I reproduced several graph neural network architectures, including GCN, GAT, and ResGAT. While reading the source code, ResGAT in particular struck me: it is a GAT model fused with the ResNet idea from the CV field, i.e., shortcut (skip) connections are added for direct paths. The full model code is as follows:

# ResGAT in PGL's static-graph style (imports added for completeness;
# L is paddle.fluid.layers and conv is pgl.layers.conv, as in PGL's examples)
import pgl
import paddle.fluid.layers as L
from pgl.layers import conv

class ResGAT(object):
    """Implement of ResGAT"""

    def __init__(self, config, num_class):
        self.num_class = num_class
        self.num_layers = config.get("num_layers", 1)
        self.num_heads = config.get("num_heads", 8)
        self.hidden_size = config.get("hidden_size", 8)
        self.feat_dropout = config.get("feat_drop", 0.6)
        self.attn_dropout = config.get("attn_drop", 0.6)
        self.edge_dropout = config.get("edge_dropout", 0.0)

    def forward(self, graph_wrapper, feature, phase):
        # feature [num_nodes, 100]
        # drop edges only during training; keep all edges at eval time
        if phase == "train":
            edge_dropout = self.edge_dropout
        else:
            edge_dropout = 0
        feature = L.fc(feature, size=self.hidden_size * self.num_heads, name="init_feature")
        for i in range(self.num_layers):
            ngw = pgl.sample.edge_drop(graph_wrapper, edge_dropout)

            res_feature = feature
            # res_feature [num_nodes, hidden_size * n_heads]
            feature = conv.gat(ngw,
                               feature,
                               self.hidden_size,
                               activation=None,
                               name="gat_layer_%s" % i,
                               num_heads=self.num_heads,
                               feat_drop=self.feat_dropout,
                               attn_drop=self.attn_dropout)
            # feature [num_nodes, num_heads * hidden_size]
            feature = res_feature + feature
            # [num_nodes, num_heads * hidden_size] + [ num_nodes, hidden_size * n_heads]
            feature = L.relu(feature)
            feature = L.layer_norm(feature, name="ln_%s" % i)

        ngw = pgl.sample.edge_drop(graph_wrapper, edge_dropout)
        feature = conv.gat(ngw,
                           feature,
                           self.num_class,
                           num_heads=1,
                           activation=None,
                           feat_drop=self.feat_dropout,
                           attn_drop=self.attn_dropout,
                           name="output")
        return feature
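
A detail worth noting in the forward pass: pgl.sample.edge_drop removes edges at random only during training (edge_dropout is forced to 0 otherwise), which acts as a graph-level regularizer in the spirit of DropEdge. Below is a minimal NumPy sketch of just the idea, with a made-up edge list; the real implementation lives inside PGL.

import numpy as np

# Toy edge list of a 4-node graph: each row is (src, dst)
edges = np.array([[0, 1], [1, 2], [2, 0], [2, 3]])

edge_dropout = 0.5
rng = np.random.default_rng(0)

# Training: keep each edge independently with probability 1 - edge_dropout
keep = rng.random(len(edges)) >= edge_dropout
print(edges[keep])          # a random subgraph, resampled every step

# Eval: edge_dropout is 0, so the full edge list is used
print(edges)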
As the code above shows, the following line implements the shortcut structure:
feature = res_feature + feature
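
This residual add only type-checks because the initial L.fc projects the raw 100-dimensional input up to hidden_size * num_heads, exactly the width that conv.gat outputs. A minimal NumPy sketch of the shape constraint, using the default sizes from the config above:

import numpy as np

num_nodes, in_dim = 5, 100
hidden_size, num_heads = 8, 8        # defaults from the config above
out_dim = hidden_size * num_heads    # 64, the width conv.gat produces

x = np.random.randn(num_nodes, in_dim).astype("float32")

# Without the initial fc, x is [5, 100] while the GAT output is [5, 64],
# so res_feature + feature would be a shape mismatch. Projecting first
# makes the shortcut add legal:
w = np.random.randn(in_dim, out_dim).astype("float32")
x_proj = x @ w                                   # [5, 64]
gat_out = np.random.randn(num_nodes, out_dim).astype("float32")
shortcut = x_proj + gat_out                      # residual add works
print(shortcut.shape)                            # (5, 64)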
Inspired by this, I widened the feature path: on top of the original single-scale, single-branch features I added multi-scale features, borrowing the network-widening idea of Inception-Net from CV. The code is below; concat_feature denotes the fusion of the three parallel branches.
# Multi-scale variant: three parallel GAT branches fused per layer
        for i in range(self.num_layers):
            ngw = pgl.sample.edge_drop(graph_wrapper, edge_dropout)

            # project the residual branch to the common width
            res_feature = feature
            res_feature = L.fc(res_feature, size=self.hidden_size * self.num_heads, name="third_feature")

            # res_feature [num_nodes, hidden_size * n_heads]
            # three parallel GAT branches; each gets its own name so its
            # parameters are learned independently
            feature1 = conv.gat(ngw,
                                feature,
                                self.hidden_size,
                                activation=None,
                                name="gat_layer_%s_branch1" % i,
                                num_heads=self.num_heads,
                                feat_drop=self.feat_dropout,
                                attn_drop=self.attn_dropout)
            feature2 = conv.gat(ngw,
                                feature,
                                self.hidden_size,
                                activation=None,
                                name="gat_layer_%s_branch2" % i,
                                num_heads=self.num_heads,
                                feat_drop=self.feat_dropout,
                                attn_drop=self.attn_dropout)
            feature3 = conv.gat(ngw,
                                feature,
                                self.hidden_size,
                                activation=None,
                                name="gat_layer_%s_branch3" % i,
                                num_heads=self.num_heads,
                                feat_drop=self.feat_dropout,
                                attn_drop=self.attn_dropout)

            # element-wise sum of the three branch outputs (a sum fusion,
            # not a true concat despite the variable name)
            concat_feature = feature1 + feature2 + feature3
            # residual add plus a 0.8-weighted element-wise interaction term
            feature = (res_feature + concat_feature) + 0.8 * res_feature * concat_feature

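The fusion rule combines a plain residual add with a 0.8-weighted element-wise product, so the two branches interact multiplicatively as well as additively. A tiny worked example of the arithmetic (the values are made up):

import numpy as np

res = np.array([[1.0, 2.0]], dtype="float32")    # residual branch
cat = np.array([[0.5, -1.0]], dtype="float32")   # fused GAT branches

# The additive part keeps both signals; the 0.8-weighted product term
# adds a second-order interaction between the two branches.
fused = (res + cat) + 0.8 * res * cat
print(fused)   # [[ 1.9 -0.6]]
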
Time is limited, so I will stop here for now; I will continue to update this post later.
