3D Point Cloud Semantic Segmentation: PointNet

PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation

Charles R. Qi*, Hao Su*, Kaichun Mo, Leonidas J. Guibas (Stanford University)

As deep learning on 2D images has gradually matured, there has been a natural desire to bring its power to 3D point clouds for tasks such as classification, recognition, segmentation, completion, and registration. This paper, published in 2017, can be regarded as the pioneering work that applies deep learning directly to scattered, unordered 3D point clouds. The network is simple yet effective (effectiveness being relative, of course). The authors validated classification on ModelNet40, part segmentation on ShapeNet, and semantic segmentation on S3DIS; this post focuses on the semantic segmentation task (theory + code).

Let's go straight to the network architecture.

[Figure: PointNet network architecture]

The network's core ideas boil down to two points:

1. Use multi-layer perceptrons (MLPs) directly. For example, the first mlp(64,64) denotes a two-layer network with 64 and 64 neurons respectively. Why not convolutions? The authors explain that conventional convolutional architectures require highly regular input formats in order to share weights and optimize, whereas point clouds are unordered; they would typically have to be converted to voxel grids or put through a series of regularization steps, which introduces too much human intervention and may corrupt the data.

2. Max pooling. To fully respect the point cloud's unordered nature, its symmetry, and invariance under spatial transformations, the paper adopts max pooling. Specifically, it considers three strategies for handling unordered input like point clouds: 1) sort the points directly; 2) treat the cloud as a sequence and use an RNN, with data augmentation that permutes the order of points in the sequence; 3) aggregate the information of every point with a symmetric function such as max pooling (symmetric meaning the result is the same no matter what order the input arrives in). Comparative experiments confirm that max pooling works best.
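
A quick way to see why max pooling is a symmetric function (a toy check of my own, not from the paper):

import torch

pts = torch.randn(5, 1024)                 # 5 points, 1024 features each
perm = pts[torch.randperm(5)]              # the same points in a shuffled order
print(torch.equal(pts.max(dim=0)[0], perm.max(dim=0)[0]))  # True: the pooled result ignores point order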

Network pipeline

Input: the network takes a fixed number of points, n, each carrying xyz coordinates.

input transform: the small module at the bottom left of the architecture diagram, called the Joint Alignment Network in the paper. The labels we assign to a point cloud do not change under spatial transformations, so we want what the network learns to be invariant to them as well. The module therefore estimates an affine transformation matrix (3x3) and multiplies the input cloud by it before feature extraction (i.e., it applies an affine transform to the input). The T-Net inside the module is itself built from MLPs; its exact structure appears in the code analysis section.
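
For illustration (my own sketch, not the repo's code), applying such a predicted 3x3 transform is just a batched matrix multiply with the points laid out channels-last:

import torch

B, N = 8, 1024
points = torch.randn(B, N, 3)              # a batch of clouds, channels last
trans = torch.eye(3).repeat(B, 1, 1)       # stand-in for the T-Net output
aligned = torch.bmm(points, trans)         # (B, N, 3) x (B, 3, 3) -> (B, N, 3)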

mlp(64,64): a two-layer perceptron with 64 and 64 neurons per layer. "Shared" means the network structure and weights are fixed, and every point is fed through the same structure to produce its corresponding output (the weights stay put while the points stream through).

feature transform: the same idea as the input transform, but in a higher-dimensional feature space, so the estimated affine transform is also higher-dimensional (64x64). Because the feature space has many more dimensions, optimization is harder, so a regularization term is added to the loss to push the estimated affine matrix toward an orthogonal matrix (the paper notes that an orthogonal transformation does not lose information from the input). A is the affine transformation matrix the T-Net estimates:

$$L_{reg} = \lVert I - AA^{T} \rVert_F^2$$

mlp(64,128,1024): a three-layer perceptron with 64, 128, and 1024 neurons per layer.

maxpooling: at this point, each of the n input points has a 1024-dimensional feature; taking the maximum across the n points (along the point dimension) yields a single global feature. For classification, this feature goes straight into the next mlp(512,256,k), where k is the number of classes, producing per-class scores. For segmentation, the global feature is replicated n times and concatenated with the features produced by the second mlp earlier, forming the input to the Segmentation Network.

Segmentation Network: the input is effectively each point's local feature concatenated with the global feature; after two MLPs it produces an n x m output, i.e., for each point an m-way classification with a predicted score per class.
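
A toy shape check of this concatenation (my own illustration; the encoder code later does exactly this):

import torch

B, N = 2, 4096
point_feat = torch.randn(B, 64, N)         # per-point local features (the n x 64 block in the diagram)
global_feat = torch.randn(B, 1024)         # max-pooled global feature
global_rep = global_feat.view(B, 1024, 1).repeat(1, 1, N)  # one copy per point
seg_input = torch.cat([global_rep, point_feat], dim=1)
print(seg_input.shape)                     # torch.Size([2, 1088, 4096])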

Semantic segmentation experiment

This post walks through the semantic segmentation experiment. The code comes from GitHub (a fork of someone else's repo) and can be accessed here. It is a PyTorch implementation that also covers the other tasks as well as PointNet++; just clone the whole project. The dataset is S3DIS, downloadable here (requires *; after filling in some contact information you can download the data from Google Drive).


The downloaded archive is named Stanford3dDataset_v1.2_Aligned_Version. After downloading, extract it to data/s3dis/Stanford3dDataset_v1.2_Aligned_Version/.

A brief introduction to the dataset. It contains six indoor areas scanned with Matterport scanners, 271 rooms in total. Every scanned point carries a semantic annotation from 13 classes: ceiling, floor, wall, beam, column, window, door, table, chair, sofa, bookcase, board, clutter. After extraction, each room's data is a txt file where every line is one point with six values (XYZRGB). The Annotations folder contains one txt file per object instance in the room, again holding XYZRGB data. You can open a txt file in a text editor such as Notepad++, use column editing to prepend "v " to every line, change the extension to .obj (or write a small script to do it; the code is at the end of this post), and then open the obj file directly in MeshLab:

[Figure: the conference room point cloud opened in MeshLab]

You can "walk" inside and look around. This is a conference room with tables, chairs, a board, and of course a ceiling and floor:

[Figure: view inside the conference room]

Having looked at the raw data, and since the cloned repo already ships a trained model (saved at log\sem_seg\pointnet_sem_seg\checkpoints\best_model.pth), we can run the test right away. Before testing, the raw dataset needs some processing to put the data and labels into one file, with each line being XYZRGBL (L is the label: 0-12). Concretely: enter the data_utils folder and run collect_indoor3d_data.py. This creates a new folder stanford_indoor3d\ under data\ and generates a batch of .npy files (numpy format) under the raw data folder; move those .npy files into stanford_indoor3d\ (the point is simply that the generated .npy files end up in that folder), and then the test script can be run.

Run the test script:

python test_semseg.py --log_dir pointnet_sem_seg --test_area 5 --visual

The test area is Area 5. The generated obj files (both the ground truth and the prediction) are saved under log/sem_seg/pointnet_sem_seg/visual/ and can be opened directly in MeshLab. Test results:

[Figure: test console output]

avg class IoU: 0.436354

avg class acc: 0.526488

whole scene point accuracy: 0.787821

Opening one ground-truth file and the corresponding prediction in MeshLab:

[Figures: ground-truth vs. predicted segmentation]

Code walkthrough

We will step through the code in this order: model files, data processing files, training script, test script.

Model files: models\pointnet_utils.py, models\pointnet_sem_seg.py

pointnet_utils.py first defines the STN3d and STNkd modules, the two small networks in the architecture diagram:

[Figure: the T-Net modules in the architecture diagram]

Take STN3d as an example. Its structure is similar to the backbone: a stack of mlps (implemented as 1x1 1-D convolutions, where the convolution kernel plays the role of the weights between neurons and the data itself provides the neuron values) followed by fully connected layers. The module's input is the xyz coordinates; its output is a 3x3 (or k x k) affine transformation matrix.
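
As a quick aside (my own example), you can verify that a 1x1 Conv1d is exactly a fully connected layer shared across points:

import torch
import torch.nn as nn

conv = nn.Conv1d(3, 64, 1)                     # 1x1 conv over (B, C, N)
fc = nn.Linear(3, 64)
fc.weight.data = conv.weight.data.squeeze(-1)  # give both the same weights
fc.bias.data = conv.bias.data.clone()

x = torch.randn(2, 3, 100)                     # B, C, N
out_conv = conv(x)                                        # (2, 64, 100)
out_fc = fc(x.transpose(2, 1)).transpose(2, 1)            # the same map applied point-wise
print(torch.allclose(out_conv, out_fc, atol=1e-6))        # True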

class STN3d(nn.Module):
    def __init__(self, channel):
        super(STN3d, self).__init__()
        self.conv1 = torch.nn.Conv1d(channel, 64, 1)#in_channels,out_channels,kernel_size
        self.conv2 = torch.nn.Conv1d(64, 128, 1)
        self.conv3 = torch.nn.Conv1d(128, 1024, 1)
        self.fc1 = nn.Linear(1024, 512)
        self.fc2 = nn.Linear(512, 256)
        self.fc3 = nn.Linear(256, 9)#9=3x3
        self.relu = nn.ReLU()

        self.bn1 = nn.BatchNorm1d(64)
        self.bn2 = nn.BatchNorm1d(128)
        self.bn3 = nn.BatchNorm1d(1024)
        self.bn4 = nn.BatchNorm1d(512)
        self.bn5 = nn.BatchNorm1d(256)

    def forward(self, x):
        batchsize = x.size()[0]  # input shape: (batch_size, channels, n_points)
        x = F.relu(self.bn1(self.conv1(x)))
        x = F.relu(self.bn2(self.conv2(x)))
        x = F.relu(self.bn3(self.conv3(x)))
        x = torch.max(x, 2, keepdim=True)[0]  # max over the point dimension (N)
        x = x.view(-1, 1024)

        x = F.relu(self.bn4(self.fc1(x)))
        x = F.relu(self.bn5(self.fc2(x)))
        x = self.fc3(x)

        iden = Variable(torch.from_numpy(np.array([1, 0, 0, 0, 1, 0, 0, 0, 1]).astype(np.float32))).view(1, 9).repeat(
            batchsize, 1)  # the flattened 3x3 identity; Variable is a legacy autograd wrapper (plain tensors support backprop in modern PyTorch)
        if x.is_cuda:
            iden = iden.cuda()
        x = x + iden  # the update scheme: predict an adjustment on top of the identity matrix
        x = x.view(-1, 3, 3)  # one 3x3 affine transform estimated per batch element
        return x

STNkd has essentially the same structure; the only difference is that the estimated affine transformation matrix is k x k (default: k=64).
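
For completeness, a sketch of STNkd (STN3d with the output generalized to k x k; up to minor details this mirrors the repo):

class STNkd(nn.Module):
    def __init__(self, k=64):
        super(STNkd, self).__init__()
        self.conv1 = torch.nn.Conv1d(k, 64, 1)
        self.conv2 = torch.nn.Conv1d(64, 128, 1)
        self.conv3 = torch.nn.Conv1d(128, 1024, 1)
        self.fc1 = nn.Linear(1024, 512)
        self.fc2 = nn.Linear(512, 256)
        self.fc3 = nn.Linear(256, k * k)  # the k*k entries of the transform matrix
        self.bn1 = nn.BatchNorm1d(64)
        self.bn2 = nn.BatchNorm1d(128)
        self.bn3 = nn.BatchNorm1d(1024)
        self.bn4 = nn.BatchNorm1d(512)
        self.bn5 = nn.BatchNorm1d(256)
        self.k = k

    def forward(self, x):
        batchsize = x.size()[0]
        x = F.relu(self.bn1(self.conv1(x)))
        x = F.relu(self.bn2(self.conv2(x)))
        x = F.relu(self.bn3(self.conv3(x)))
        x = torch.max(x, 2, keepdim=True)[0]  # max over the point dimension
        x = x.view(-1, 1024)
        x = F.relu(self.bn4(self.fc1(x)))
        x = F.relu(self.bn5(self.fc2(x)))
        x = self.fc3(x)
        # offset from the k x k identity, exactly as in STN3d
        iden = torch.eye(self.k).flatten().view(1, self.k * self.k).repeat(batchsize, 1)
        if x.is_cuda:
            iden = iden.cuda()
        x = x + iden
        x = x.view(-1, self.k, self.k)
        return x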

Next comes PointNetEncoder, the part of the architecture diagram shown below. The input is n points of dimension D; besides xyz, each point may carry additional feature dimensions. Internally it calls the STN3d and STNkd modules, with a switch deciding whether STNkd is used. There are three outputs. output1 has two cases: for classification it is the 1x1024 global feature; for segmentation the global feature is replicated n times and concatenated with the features from the second mlp before being returned, forming the input to the segmentation network. output2 is the 3x3 spatial transform estimated by STN3d; output3 is the k x k feature transform estimated by STNkd (None when the switch is off).

[Figure: the PointNetEncoder portion of the architecture diagram]

class PointNetEncoder(nn.Module):
    def __init__(self, global_feat=True, feature_transform=False, channel=3):
        '''global_feat=True means classification; False means segmentation.
           feature_transform=True enables the feature-space transform, i.e. the STNkd switch.
        '''
        super(PointNetEncoder, self).__init__()
        self.stn = STN3d(channel)
        self.conv1 = torch.nn.Conv1d(channel, 64, 1)
        self.conv2 = torch.nn.Conv1d(64, 128, 1)
        self.conv3 = torch.nn.Conv1d(128, 1024, 1)
        self.bn1 = nn.BatchNorm1d(64)
        self.bn2 = nn.BatchNorm1d(128)
        self.bn3 = nn.BatchNorm1d(1024)
        self.global_feat = global_feat
        self.feature_transform = feature_transform
        if self.feature_transform:
            self.fstn = STNkd(k=64)

    def forward(self, x):
        B, D, N = x.size()  # the second axis: call it dimension or channel, either works
        trans = self.stn(x)
        x = x.transpose(2, 1)  # B C N -> B N C, move channels last
        if D > 3:  # if the input has more than 3 dims, only the first three (xyz) go through the STN transform
            feature = x[:, :, 3:]  # the extra features start at the 4th dim
            x = x[:, :, :3]
        x = torch.bmm(x, trans)  # apply the spatial transform estimated by the STN to x
        if D > 3:
            x = torch.cat([x, feature], dim=2)  # concatenate the extra features back on
        x = x.transpose(2, 1)  # back to B C N
        x = F.relu(self.bn1(self.conv1(x)))

        if self.feature_transform:  # if the STNkd switch is on:
            trans_feat = self.fstn(x)
            x = x.transpose(2, 1)  # channels last again for bmm
            x = torch.bmm(x, trans_feat)  # apply the transform estimated for the feature space
            x = x.transpose(2, 1)  # and back
        else:  # otherwise no feature transform
            trans_feat = None

        pointfeat = x  # per-point features (the n x 64 block in the diagram), kept for concatenation
        x = F.relu(self.bn2(self.conv2(x)))
        x = self.bn3(self.conv3(x))
        x = torch.max(x, 2, keepdim=True)[0]
        x = x.view(-1, 1024)  # the global_feature after maxpooling in the diagram
        if self.global_feat:  # classification: return x, the global feature, directly
            return x, trans, trans_feat  # trans: estimated 3D spatial transform; trans_feat: estimated feature-space transform
        else:  # segmentation: replicate x N times along the point dimension,
            # then concatenate with the earlier per-point features as input to the segmentation network
            x = x.view(-1, 1024, 1).repeat(1, 1, N)
            return torch.cat([x, pointfeat], 1), trans, trans_feat

At the end of pointnet_utils.py one more function is defined: feature_transform_reguliarzer. It implements the regularization constraint mentioned earlier in the network pipeline section: when aligning the feature space, push the estimated matrix toward an orthogonal matrix, i.e., the formula below, where A is the estimated k x k matrix (the function's input). The function returns this regularization term, used later when computing the loss.

$$L_{reg} = \lVert I - AA^{T} \rVert_F^2$$

def feature_transform_reguliarzer(trans):  # when aligning features, the higher-dimensional space is harder to optimize, so a regularizer pushes the estimated affine transform toward orthogonality
    d = trans.size()[1]  # the transform dimension k
    I = torch.eye(d)[None, :, :]  # add a batch axis
    if trans.is_cuda:
        I = I.cuda()
    loss = torch.mean(torch.norm(torch.bmm(trans, trans.transpose(2, 1)) - I, dim=(1, 2)))
    return loss
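
A quick sanity check of this regularizer (my own example): an orthogonal transform should incur zero penalty.

import torch

eye = torch.eye(64).unsqueeze(0).repeat(4, 1, 1)              # a batch of 64x64 identity matrices
print(feature_transform_reguliarzer(eye))                     # tensor(0.)
print(feature_transform_reguliarzer(torch.randn(4, 64, 64)))  # non-orthogonal: clearly positive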

Now for pointnet_sem_seg.py, which describes the whole network used for the segmentation task. It imports the PointNetEncoder and feature_transform_reguliarzer modules defined in the previous file and defines two classes, get_model and get_loss.

class get_model(nn.Module):
    def __init__(self, num_class):
        super(get_model, self).__init__()
        self.k = num_class
        self.feat = PointNetEncoder(global_feat=False, feature_transform=True, channel=9)  # 9-D input: xyz, rgb, and the point's relative position in the room (normalized to 0..1); global_feat=False means segmentation
        self.conv1 = torch.nn.Conv1d(1088, 512, 1)
        self.conv2 = torch.nn.Conv1d(512, 256, 1)
        self.conv3 = torch.nn.Conv1d(256, 128, 1)
        self.conv4 = torch.nn.Conv1d(128, self.k, 1)
        self.bn1 = nn.BatchNorm1d(512)
        self.bn2 = nn.BatchNorm1d(256)
        self.bn3 = nn.BatchNorm1d(128)

    def forward(self, x):
        batchsize = x.size()[0]
        n_pts = x.size()[2] #B C N
        x, trans, trans_feat = self.feat(x)#trans:3*3; trans_feat:64*64
        x = F.relu(self.bn1(self.conv1(x)))
        x = F.relu(self.bn2(self.conv2(x)))
        x = F.relu(self.bn3(self.conv3(x)))
        x = self.conv4(x)  # per-point class scores, k channels
        x = x.transpose(2,1).contiguous()  # contiguous() copies into contiguous memory after transpose (needed before view); see https://blog.csdn.net/kdongyi/article/details/108180250
        x = F.log_softmax(x.view(-1,self.k), dim=-1)  # mathematically equivalent to log(softmax(x)), but the two separate ops are slower and numerically unstable; this function uses an alternative formulation that computes the output and gradient correctly
        x = x.view(batchsize, n_pts, self.k)  # B x N x k: each point gets scores over k classes
        return x, trans_feat  # trans_feat is returned so the loss can apply the regularization constraint mentioned earlier, i.e. the A in the formula

class get_loss(torch.nn.Module):
    def __init__(self, mat_diff_loss_scale=0.001):  # factor controlling the regularization strength
        super(get_loss, self).__init__()
        self.mat_diff_loss_scale = mat_diff_loss_scale

    def forward(self, pred, target, trans_feat, weight):
        loss = F.nll_loss(pred, target, weight = weight)  # negative log likelihood loss: NLLLoss takes a vector of log-probabilities and a target label; it does not compute the log-probabilities itself, so a log_softmax last layer fits it. nn.CrossEntropyLoss() is the same as NLLLoss() except it applies the softmax for you.
                                                          # in essence: CrossEntropyLoss() = log_softmax() + NLLLoss()
        mat_diff_loss = feature_transform_reguliarzer(trans_feat)
        total_loss = loss + mat_diff_loss * self.mat_diff_loss_scale  # regularization constraint for the feature alignment
        return total_loss
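
The comment above is easy to verify (my own check): CrossEntropyLoss really is log_softmax followed by NLLLoss.

import torch
import torch.nn.functional as F

logits = torch.randn(8, 13)                # 8 points, 13 classes
target = torch.randint(0, 13, (8,))
a = F.cross_entropy(logits, target)
b = F.nll_loss(F.log_softmax(logits, dim=-1), target)
print(torch.allclose(a, b))                # True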

This file has a __main__ block, so you can run it directly to print a model summary:

if __name__ == '__main__':
    model = get_model(13)
    xyz = torch.rand(12, 9, 2048)  # B C N; note the model defines 9 input channels, so the original 3 must be changed to 9
    (model(xyz))
    print(model)

Output:

get_model(
  (feat): PointNetEncoder(
    (stn): STN3d(
      (conv1): Conv1d(9, 64, kernel_size=(1,), stride=(1,))
      (conv2): Conv1d(64, 128, kernel_size=(1,), stride=(1,))
      (conv3): Conv1d(128, 1024, kernel_size=(1,), stride=(1,))
      (fc1): Linear(in_features=1024, out_features=512, bias=True)
      (fc2): Linear(in_features=512, out_features=256, bias=True)
      (fc3): Linear(in_features=256, out_features=9, bias=True)
      (relu): ReLU()
      (bn1): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (bn2): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (bn3): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (bn4): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (bn5): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (conv1): Conv1d(9, 64, kernel_size=(1,), stride=(1,))
    (conv2): Conv1d(64, 128, kernel_size=(1,), stride=(1,))
    (conv3): Conv1d(128, 1024, kernel_size=(1,), stride=(1,))
    (bn1): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (bn2): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (bn3): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (fstn): STNkd(
      (conv1): Conv1d(64, 64, kernel_size=(1,), stride=(1,))
      (conv2): Conv1d(64, 128, kernel_size=(1,), stride=(1,))
      (conv3): Conv1d(128, 1024, kernel_size=(1,), stride=(1,))
      (fc1): Linear(in_features=1024, out_features=512, bias=True)
      (fc2): Linear(in_features=512, out_features=256, bias=True)
      (fc3): Linear(in_features=256, out_features=4096, bias=True)
      (relu): ReLU()
      (bn1): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (bn2): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (bn3): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (bn4): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (bn5): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (conv1): Conv1d(1088, 512, kernel_size=(1,), stride=(1,))
  (conv2): Conv1d(512, 256, kernel_size=(1,), stride=(1,))
  (conv3): Conv1d(256, 128, kernel_size=(1,), stride=(1,))
  (conv4): Conv1d(128, 13, kernel_size=(1,), stride=(1,))
  (bn1): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (bn2): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (bn3): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)

Data processing files: data_utils\S3DISDataLoader.py, data_utils\indoor3d_util.py, data_utils\collect_indoor3d_data.py

indoor3d_util.py and collect_indoor3d_data.py process the raw downloaded data, merging labels and data into .npy files in which each row is XYZRGB plus a label index. indoor3d_util.py defines many utilities, acting somewhat like a toolbox, and also sets some file paths; collect_indoor3d_data.py uses its collect_point_label utility. Running collect_indoor3d_data.py generates the .npy (numpy) files the experiments need in data/stanford_indoor3d.

The key file is S3DISDataLoader.py. It first defines the dataset S3DISDataset, which inherits from torch.utils.data.Dataset; a custom dataset for a PyTorch DataLoader must implement at least the __getitem__ and __len__ methods. The parameters passed to __init__ also include the test area (default 5) and the block size (at training time, 4096 points are randomly sampled from a 1m x 1m block as the network input).

class S3DISDataset(Dataset):
    def __init__(self, split='train', data_root='trainval_fullarea', num_point=4096, test_area=5, block_size=1.0, sample_rate=1.0, transform=None):
        super().__init__()
        self.num_point = num_point
        self.block_size = block_size
        self.transform = transform
        rooms = sorted(os.listdir(data_root))
        rooms = [room for room in rooms if 'Area_' in room]  # 271 .npy files, one per room
        if split == 'train':
            rooms_split = [room for room in rooms if not 'Area_{}'.format(test_area) in room]  # every area except the test area
        else:
            rooms_split = [room for room in rooms if 'Area_{}'.format(test_area) in room]  # the test area (Area 5) only

        self.room_points, self.room_labels = [], []
        self.room_coord_min, self.room_coord_max = [], []
        num_point_all = []
        labelweights = np.zeros(13)  # initialize the per-class point counts: [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]

        for room_name in tqdm(rooms_split, total=len(rooms_split)):  # tqdm wraps an iterable to show a progress bar; e.g. room_name: 'Area_1_WC_1.npy'
            room_path = os.path.join(data_root, room_name)  # e.g. 'data/s3dis/stanford_indoor3d/Area_1_WC_1.npy'
            room_data = np.load(room_path)  # xyzrgbl, N*7
            points, labels = room_data[:, 0:6], room_data[:, 6]  # xyzrgb, N*6; l, N
            tmp, _ = np.histogram(labels, range(14))  # histogram over the current room's labels: per-class point counts, e.g. [192039 185764 488740 0 0 0 28008 0 0 0 0 0 218382]
            labelweights += tmp
            coord_min, coord_max = np.amin(points, axis=0)[:3], np.amax(points, axis=0)[:3]  # min and max over each of x, y, z
            self.room_points.append(points), self.room_labels.append(labels)
            self.room_coord_min.append(coord_min), self.room_coord_max.append(coord_max)
            num_point_all.append(labels.size)  # total points per room, e.g. a list of 204 when training on 204 rooms
        labelweights = labelweights.astype(np.float32)
        labelweights = labelweights / np.sum(labelweights)
        self.labelweights = np.power(np.amax(labelweights) / labelweights, 1 / 3.0)  # (max frequency / class frequency)^(1/3)
        print(self.labelweights)  # per-class weights over all training data, e.g. [1.124833 1.1816078 1. 2.2412012 2.340336 2.343587 1.7070498 2.0335796 1.8852289 3.8252103 1.7948895 2.7857335 1.3452303]
        sample_prob = num_point_all / np.sum(num_point_all)  # per-room sampling probability (list of 204): room points / total points over all rooms
        num_iter = int(np.sum(num_point_all) * sample_rate / num_point)  # iterations: total points * sample_rate (1 = sample everything) / 4096; this many samples are needed to cover all rooms
        room_idxs = []
        for index in range(len(rooms_split)):
            room_idxs.extend([index] * int(round(sample_prob[index] * num_iter)))  # keep appending room indices 0..203; each index appears round(sample_prob * num_iter) times, i.e. as often as that room will be sampled
        self.room_idxs = np.array(room_idxs)  # each sample picks the room given by an entry of room_idxs: proportional allocation, neither uniform nor fully random
        print("Totally {} samples in {} set.".format(len(self.room_idxs), split))  # number of samples

    def __getitem__(self, idx):
        room_idx = self.room_idxs[idx]
        points = self.room_points[room_idx]  # all points in the indexed room   # N * 6
        labels = self.room_labels[room_idx]  # their labels  # N
        N_points = points.shape[0]

        while (True):
            center = points[np.random.choice(N_points)][:3]  # pick a random point's coordinates as the block center
            block_min = center - [self.block_size / 2.0, self.block_size / 2.0, 0]  # block extent: minimum corner
            block_max = center + [self.block_size / 2.0, self.block_size / 2.0, 0]  # block extent: maximum corner
            point_idxs = np.where((points[:, 0] >= block_min[0]) & (points[:, 0] <= block_max[0]) & (points[:, 1] >= block_min[1]) & (points[:, 1] <= block_max[1]))[0]  # indices of the points falling inside the block
            if point_idxs.size > 1024:  # keep resampling until the block holds more than 1024 points
                break

        if point_idxs.size >= self.num_point:  # if the block has more than 4096 points subsample, otherwise duplicate points to reach 4096
            selected_point_idxs = np.random.choice(point_idxs, self.num_point, replace=False)  # randomly pick 4096 of them
        else:
            selected_point_idxs = np.random.choice(point_idxs, self.num_point, replace=True)  # replace=True allows repeated indices, False does not

        # normalize
        selected_points = points[selected_point_idxs, :]  # num_point * 6
        current_points = np.zeros((self.num_point, 9))  # num_point * 9
        current_points[:, 6] = selected_points[:, 0] / self.room_coord_max[room_idx][0]  # the 9-D input from the paper: the last three dims are the point's relative position within the room
        current_points[:, 7] = selected_points[:, 1] / self.room_coord_max[room_idx][1]
        current_points[:, 8] = selected_points[:, 2] / self.room_coord_max[room_idx][2]
        selected_points[:, 0] = selected_points[:, 0] - center[0]  # center x and y on the block center, but not z
        selected_points[:, 1] = selected_points[:, 1] - center[1]
        selected_points[:, 3:6] /= 255.0  # normalize RGB
        current_points[:, 0:6] = selected_points
        current_labels = labels[selected_point_idxs]
        if self.transform is not None:
            current_points, current_labels = self.transform(current_points, current_labels)
        return current_points, current_labels  # 4096 points and their labels

    def __len__(self):
        return len(self.room_idxs)
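
The weighting rule labelweights = (max_freq / freq)^(1/3) upweights rare classes; a toy run with made-up frequencies (my own numbers):

import numpy as np

freq = np.array([0.5, 0.3, 0.2])                 # hypothetical normalized class frequencies
print(np.power(np.amax(freq) / freq, 1 / 3.0))   # [1.     1.1856 1.3572]: the rarest class gets the largest weight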

This file also defines ScannetDatasetWholeScene(), which we will look at when discussing the test script.

Training script: train_semseg.py. It uses rotate_point_cloud_z() from provider.py for data augmentation (a random rotation about the z-axis; see the sketch below). The rest is routine: parse user arguments, create folders, set hyperparameters and their schedules, build the dataset and dataloader, load the model, then train, eval, save checkpoints, and log.
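
The z-axis rotation boils down to multiplying each cloud's xyz by a random rotation about the vertical axis; a minimal sketch consistent with what provider.py's rotate_point_cloud_z does (details may differ):

import numpy as np

def rotate_point_cloud_z(batch_data):
    """Rotate each (N, 3) cloud in a (B, N, 3) batch by a random angle about z."""
    rotated = np.zeros(batch_data.shape, dtype=np.float32)
    for k in range(batch_data.shape[0]):
        angle = np.random.uniform() * 2 * np.pi
        c, s = np.cos(angle), np.sin(angle)
        R = np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])
        rotated[k, ...] = np.dot(batch_data[k, ...], R)
    return rotated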

Test script: test_semseg.py

Testing does not sample random blocks the way training does; the entire scene must go through the network, which is what ScannetDatasetWholeScene() in S3DISDataLoader.py is for. Concretely, a room is gridded at a given stride and a block slides over it with overlap, sampling points; as in training, a block with fewer than 4096 points gets some points re-sampled. Each block therefore generally yields several small batches; each batch is run through the network, the resulting prediction scores are accumulated, the IoU is computed at the end, and each point's predicted class is mapped to the color of its semantic label and written to file together with the coordinates.

S3DISDataLoader.py:

class ScannetDatasetWholeScene():
    # prepare to give prediction on each points
    def __init__(self, root, block_points=4096, split='test', test_area=5, stride=0.5, block_size=1.0, padding=0.001):
        self.block_points = block_points
        self.block_size = block_size
        self.padding = padding
        self.root = root
        self.split = split
        self.stride = stride
        self.scene_points_num = []
        assert split in ['train', 'test']
        if self.split == 'train':
            self.file_list = [d for d in os.listdir(root) if d.find('Area_%d' % test_area) == -1]
        else:
            self.file_list = [d for d in os.listdir(root) if d.find('Area_%d' % test_area) != -1]
        self.scene_points_list = []
        self.semantic_labels_list = []
        self.room_coord_min, self.room_coord_max = [], []
        for file in self.file_list:
            data = np.load(root + file)  # loads all points of one room
            points = data[:, :3]
            self.scene_points_list.append(data[:, :6])
            self.semantic_labels_list.append(data[:, 6])
            coord_min, coord_max = np.amin(points, axis=0)[:3], np.amax(points, axis=0)[:3]
            self.room_coord_min.append(coord_min), self.room_coord_max.append(coord_max)
        assert len(self.scene_points_list) == len(self.semantic_labels_list)

        labelweights = np.zeros(13)
        for seg in self.semantic_labels_list:
            tmp, _ = np.histogram(seg, range(14))
            self.scene_points_num.append(seg.shape[0])
            labelweights += tmp
        labelweights = labelweights.astype(np.float32)
        labelweights = labelweights / np.sum(labelweights)
        self.labelweights = np.power(np.amax(labelweights) / labelweights, 1 / 3.0)

    def __getitem__(self, index):
        point_set_ini = self.scene_points_list[index]  # take all points of one room
        points = point_set_ini[:,:6]
        labels = self.semantic_labels_list[index]
        coord_min, coord_max = np.amin(points, axis=0)[:3], np.amax(points, axis=0)[:3]
        grid_x = int(np.ceil(float(coord_max[0] - coord_min[0] - self.block_size) / self.stride) + 1)  # split the room into a grid
        grid_y = int(np.ceil(float(coord_max[1] - coord_min[1] - self.block_size) / self.stride) + 1)
        data_room, label_room, sample_weight, index_room = np.array([]), np.array([]), np.array([]),  np.array([])
        for index_y in range(0, grid_y):
            for index_x in range(0, grid_x):
                s_x = coord_min[0] + index_x * self.stride  # block start
                e_x = min(s_x + self.block_size, coord_max[0])  # block end; clamp to the room boundary
                s_x = e_x - self.block_size  # pull the start back so the block keeps its full size, ready for the next stride
                s_y = coord_min[1] + index_y * self.stride
                e_y = min(s_y + self.block_size, coord_max[1])
                s_y = e_y - self.block_size
                point_idxs = np.where(
                    (points[:, 0] >= s_x - self.padding) & (points[:, 0] <= e_x + self.padding) & (points[:, 1] >= s_y - self.padding) & (
                                points[:, 1] <= e_y + self.padding))[0]  # a block can hold quite a few points, e.g. 69678
                if point_idxs.size == 0:  # if this block happens to contain no points at all, keep sliding
                    continue
                num_batch = int(np.ceil(point_idxs.size / self.block_points))  # e.g. 18
                point_size = int(num_batch * self.block_points)  # round the point count up to a multiple of 4096, e.g. 73728
                replace = False if (point_size - point_idxs.size <= point_idxs.size) else True  # point_size - point_idxs.size points must be added to reach the next multiple of 4096; only when fewer points are available than needed must sampling repeat, which in practice happens only below 2048 points
                point_idxs_repeat = np.random.choice(point_idxs, point_size - point_idxs.size, replace=replace)  # randomly sample the points that fill the rounded-up part
                point_idxs = np.concatenate((point_idxs, point_idxs_repeat))  # e.g. 73728
                np.random.shuffle(point_idxs)
                data_batch = points[point_idxs, :]  # 73728*6
                normlized_xyz = np.zeros((point_size, 3))
                # each point's relative position (0..1) within the room, for x, y, z
                normlized_xyz[:, 0] = data_batch[:, 0] / coord_max[0]
                normlized_xyz[:, 1] = data_batch[:, 1] / coord_max[1]
                normlized_xyz[:, 2] = data_batch[:, 2] / coord_max[2]
                data_batch[:, 0] = data_batch[:, 0] - (s_x + self.block_size / 2.0)  # shift to block-local coordinates
                data_batch[:, 1] = data_batch[:, 1] - (s_y + self.block_size / 2.0)
                data_batch[:, 3:6] /= 255.0
                data_batch = np.concatenate((data_batch, normlized_xyz), axis=1)  # XYZRGBxyz
                label_batch = labels[point_idxs].astype(int)
                batch_weight = self.labelweights[label_batch]

                data_room = np.vstack([data_room, data_batch]) if data_room.size else data_batch  # stack along the batch dimension
                label_room = np.hstack([label_room, label_batch]) if label_room.size else label_batch
                sample_weight = np.hstack([sample_weight, batch_weight]) if sample_weight.size else batch_weight
                index_room = np.hstack([index_room, point_idxs]) if index_room.size else point_idxs  # indices of the block's points within the whole room
        data_room = data_room.reshape((-1, self.block_points, data_room.shape[1]))  # after the grid loop this holds the points of every block position; each block's count is a multiple of 4096, and each multiple becomes one batch
        label_room = label_room.reshape((-1, self.block_points))
        sample_weight = sample_weight.reshape((-1, self.block_points))
        index_room = index_room.reshape((-1, self.block_points))
        return data_room, label_room, sample_weight, index_room

    def __len__(self):
        return len(self.scene_points_list)  # the number of rooms, 67

test_semseg.py:

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
ROOT_DIR = BASE_DIR
sys.path.append(os.path.join(ROOT_DIR, 'models'))

classes = ['ceiling', 'floor', 'wall', 'beam', 'column', 'window', 'door', 'table', 'chair', 'sofa', 'bookcase',
           'board', 'clutter']
class2label = {cls: i for i, cls in enumerate(classes)}
seg_classes = class2label  # {class name: index}
seg_label_to_cat = {}  # {index: class name}
for i, cat in enumerate(seg_classes.keys()):
    seg_label_to_cat[i] = cat


def parse_args():
    '''PARAMETERS'''
    parser = argparse.ArgumentParser('Model')
    parser.add_argument('--batch_size', type=int, default=1, help='batch size in testing [default: 1]')
    parser.add_argument('--gpu', type=str, default='0', help='specify gpu device')
    parser.add_argument('--num_point', type=int, default=4096, help='point number [default: 4096]')
    parser.add_argument('--log_dir', type=str, required=True, help='experiment root')
    parser.add_argument('--visual', action='store_true', default=False, help='visualize result [default: False]')
    parser.add_argument('--test_area', type=int, default=5, help='area for testing, option: 1-6 [default: 5]')
    parser.add_argument('--num_votes', type=int, default=3, help='aggregate segmentation scores with voting [default: 3]')
    return parser.parse_args()


def add_vote(vote_label_pool, point_idx, pred_label, weight):
    B = pred_label.shape[0]
    N = pred_label.shape[1]  # 4096, one small batch within a block
    for b in range(B):
        for n in range(N):  # iterate over every point
            if weight[b, n] != 0 and not np.isinf(weight[b, n]):
                vote_label_pool[int(point_idx[b, n]), int(pred_label[b, n])] += 1  # accumulate each point's votes over the 13 classes
    return vote_label_pool
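
The double Python loop above is easy to vectorize with np.add.at; a drop-in alternative (my own sketch, not from the repo):

import numpy as np

def add_vote_vectorized(vote_label_pool, point_idx, pred_label, weight):
    mask = (weight != 0) & ~np.isinf(weight)  # the same validity test as the loop
    # unbuffered in-place add handles repeated (point, class) index pairs correctly
    np.add.at(vote_label_pool, (point_idx[mask].astype(int), pred_label[mask].astype(int)), 1)
    return vote_label_pool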


def main(args):
    def log_string(str):
        logger.info(str)
        print(str)

    '''HYPER PARAMETER'''
    os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu
    experiment_dir = 'log/sem_seg/' + args.log_dir
    visual_dir = experiment_dir + '/visual/'
    visual_dir = Path(visual_dir)
    visual_dir.mkdir(exist_ok=True)

    '''LOG'''
    args = parse_args()
    logger = logging.getLogger("Model")
    logger.setLevel(logging.INFO)
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    file_handler = logging.FileHandler('%s/eval.txt' % experiment_dir)
    file_handler.setLevel(logging.INFO)
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    log_string('PARAMETER ...')
    log_string(args)

    NUM_CLASSES = 13
    BATCH_SIZE = args.batch_size
    NUM_POINT = args.num_point

    root = 'data/s3dis/stanford_indoor3d/'

    TEST_DATASET_WHOLE_SCENE = ScannetDatasetWholeScene(root, split='test', test_area=args.test_area, block_points=NUM_POINT)
    log_string("The number of test data is: %d" % len(TEST_DATASET_WHOLE_SCENE))

    '''MODEL LOADING'''  # load the model and the pretrained weights
    model_name = os.listdir(experiment_dir + '/logs')[0].split('.')[0]
    MODEL = importlib.import_module(model_name)
    classifier = MODEL.get_model(NUM_CLASSES).cuda()
    checkpoint = torch.load(str(experiment_dir) + '/checkpoints/best_model.pth')
    classifier.load_state_dict(checkpoint['model_state_dict'])
    classifier = classifier.eval()

    with torch.no_grad():
        scene_id = TEST_DATASET_WHOLE_SCENE.file_list  # 67 room .npy files
        scene_id = [x[:-4] for x in scene_id]
        num_batches = len(TEST_DATASET_WHOLE_SCENE)  # 67 rooms

        total_seen_class = [0 for _ in range(NUM_CLASSES)]#[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        total_correct_class = [0 for _ in range(NUM_CLASSES)]
        total_iou_deno_class = [0 for _ in range(NUM_CLASSES)]

        log_string('---- EVALUATION WHOLE SCENE----')

        for batch_idx in range(num_batches):
            print("Inference [%d/%d] %s ..." % (batch_idx + 1, num_batches, scene_id[batch_idx]))
            total_seen_class_tmp = [0 for _ in range(NUM_CLASSES)]
            total_correct_class_tmp = [0 for _ in range(NUM_CLASSES)]
            total_iou_deno_class_tmp = [0 for _ in range(NUM_CLASSES)]
            if args.visual:
                fout = open(os.path.join(visual_dir, scene_id[batch_idx] + '_pred.obj'), 'w')
                fout_gt = open(os.path.join(visual_dir, scene_id[batch_idx] + '_gt.obj'), 'w')

            whole_scene_data = TEST_DATASET_WHOLE_SCENE.scene_points_list[batch_idx]
            whole_scene_label = TEST_DATASET_WHOLE_SCENE.semantic_labels_list[batch_idx]
            vote_label_pool = np.zeros((whole_scene_label.shape[0], NUM_CLASSES))  # N*13: every point gets 13 vote counts
            for _ in tqdm(range(args.num_votes), total=args.num_votes):
                scene_data, scene_label, scene_smpw, scene_point_index = TEST_DATASET_WHOLE_SCENE[batch_idx]
                num_blocks = scene_data.shape[0]
                s_batch_num = (num_blocks + BATCH_SIZE - 1) // BATCH_SIZE  # ceiling division: the number of mini-batches needed to cover all blocks
                batch_data = np.zeros((BATCH_SIZE, NUM_POINT, 9))

                batch_label = np.zeros((BATCH_SIZE, NUM_POINT))
                batch_point_index = np.zeros((BATCH_SIZE, NUM_POINT))
                batch_smpw = np.zeros((BATCH_SIZE, NUM_POINT))

                for sbatch in range(s_batch_num):
                    start_idx = sbatch * BATCH_SIZE
                    end_idx = min((sbatch + 1) * BATCH_SIZE, num_blocks)
                    real_batch_size = end_idx - start_idx
                    batch_data[0:real_batch_size, ...] = scene_data[start_idx:end_idx, ...]
                    batch_label[0:real_batch_size, ...] = scene_label[start_idx:end_idx, ...]
                    batch_point_index[0:real_batch_size, ...] = scene_point_index[start_idx:end_idx, ...]
                    batch_smpw[0:real_batch_size, ...] = scene_smpw[start_idx:end_idx, ...]
                    batch_data[:, :, 3:6] /= 1.0

                    torch_data = torch.Tensor(batch_data)
                    torch_data = torch_data.float().cuda()
                    torch_data = torch_data.transpose(2, 1)
                    seg_pred, _ = classifier(torch_data)
                    batch_pred_label = seg_pred.contiguous().cpu().data.max(2)[1].numpy()

                    vote_label_pool = add_vote(vote_label_pool, batch_point_index[0:real_batch_size, ...],
                                               batch_pred_label[0:real_batch_size, ...],
                                               batch_smpw[0:real_batch_size, ...])

            pred_label = np.argmax(vote_label_pool, 1)
            # accumulate the test statistics
            for l in range(NUM_CLASSES):
                total_seen_class_tmp[l] += np.sum((whole_scene_label == l))
                total_correct_class_tmp[l] += np.sum((pred_label == l) & (whole_scene_label == l))
                total_iou_deno_class_tmp[l] += np.sum(((pred_label == l) | (whole_scene_label == l)))
                total_seen_class[l] += total_seen_class_tmp[l]
                total_correct_class[l] += total_correct_class_tmp[l]
                total_iou_deno_class[l] += total_iou_deno_class_tmp[l]

            iou_map = np.array(total_correct_class_tmp) / (np.array(total_iou_deno_class_tmp, dtype=float) + 1e-6)
            print(iou_map)
            arr = np.array(total_seen_class_tmp)
            tmp_iou = np.mean(iou_map[arr != 0])
            log_string('Mean IoU of %s: %.4f' % (scene_id[batch_idx], tmp_iou))
            print('----------------------------')
            # write the predicted labels to file
            filename = os.path.join(visual_dir, scene_id[batch_idx] + '.txt')
            with open(filename, 'w') as pl_save:
                for i in pred_label:
                    pl_save.write(str(int(i)) + '\n')
                pl_save.close()
            for i in range(whole_scene_label.shape[0]):
                color = g_label2color[pred_label[i]]
                color_gt = g_label2color[whole_scene_label[i]]
                if args.visual:
                    fout.write('v %f %f %f %d %d %d\n' % (  # a leading 'v' marks a vertex in the obj format
                        whole_scene_data[i, 0], whole_scene_data[i, 1], whole_scene_data[i, 2], color[0], color[1],
                        color[2]))
                    fout_gt.write(
                        'v %f %f %f %d %d %d\n' % (
                            whole_scene_data[i, 0], whole_scene_data[i, 1], whole_scene_data[i, 2], color_gt[0],
                            color_gt[1], color_gt[2]))
            if args.visual:
                fout.close()
                fout_gt.close()

        IoU = np.array(total_correct_class) / (np.array(total_iou_deno_class, dtype=float) + 1e-6)
        iou_per_class_str = '------- IoU --------\n'
        for l in range(NUM_CLASSES):
            iou_per_class_str += 'class %s, IoU: %.3f \n' % (
                seg_label_to_cat[l] + ' ' * (14 - len(seg_label_to_cat[l])),
                total_correct_class[l] / float(total_iou_deno_class[l]))
        log_string(iou_per_class_str)
        log_string('eval point avg class IoU: %f' % np.mean(IoU))
        log_string('eval whole scene point avg class acc: %f' % (
            np.mean(np.array(total_correct_class) / (np.array(total_seen_class, dtype=float) + 1e-6))))
        log_string('eval whole scene point accuracy: %f' % (
                np.sum(total_correct_class) / float(np.sum(total_seen_class) + 1e-6)))

        print("Done!")

Continuing training from a checkpoint

To continue training from the provided model, note that the given best_model was already trained to epoch 110, so the epoch value in train_semseg.py must be raised above 110; here it is set to 140 (30 more epochs).

def parse_args():
    parser = argparse.ArgumentParser('Model')
    parser.add_argument('--model', type=str, default='pointnet_sem_seg', help='model name [default: pointnet_sem_seg]')
    parser.add_argument('--batch_size', type=int, default=16, help='Batch Size during training [default: 16]')
    parser.add_argument('--epoch', default=140, type=int, help='Epoch to run [default: 140]')
    parser.add_argument('--learning_rate', default=0.001, type=float, help='Initial learning rate [default: 0.001]')
    parser.add_argument('--gpu', type=str, default='0', help='GPU to use [default: GPU 0]')
    parser.add_argument('--optimizer', type=str, default='Adam', help='Adam or SGD [default: Adam]')
    parser.add_argument('--log_dir', type=str, default='pointnet_sem_seg', help='Log path [default: pointnet_sem_seg]')
    parser.add_argument('--decay_rate', type=float, default=1e-4, help='weight decay [default: 1e-4]')
    parser.add_argument('--npoint', type=int, default=4096, help='Point Number [default: 4096]')
    parser.add_argument('--step_size', type=int, default=10, help='Decay step for lr decay [default: every 10 epochs]')
    parser.add_argument('--lr_decay', type=float, default=0.7, help='Decay rate for lr decay [default: 0.7]')
    parser.add_argument('--test_area', type=int, default=5, help='Which area to use for test, option: 1-6 [default: 5]')

    return parser.parse_args()

Since the experiments run on Windows 10, the num_workers parameter of the DataLoader must be changed to 0, otherwise it throws an error:

trainDataLoader = torch.utils.data.DataLoader(TRAIN_DATASET, batch_size=BATCH_SIZE, shuffle=True, num_workers=0,
                                                  pin_memory=True, drop_last=True,
                                                  worker_init_fn=lambda x: np.random.seed(x + int(time.time())))
testDataLoader = torch.utils.data.DataLoader(TEST_DATASET, batch_size=BATCH_SIZE, shuffle=False, num_workers=0,
                                                 pin_memory=True, drop_last=True)

Set the run arguments in the IDE:

--model pointnet_sem_seg --test_area 5 --log_dir pointnet_sem_seg

On my potato GPU, training is quite slow, roughly one hour per epoch, so I stopped it at epoch 122. Testing again after the extra training:

eval point avg class IoU: 0.437896

eval whole scene point avg class acc: 0.531335

eval whole scene point accuracy: 0.784970

Appendix: Python code to build an obj file from a txt file

ff = open('./1.obj', 'w')  # create an empty obj file first
with open('./conferenceRoom_1.txt', 'r') as f:  # open any room's txt file from the dataset
    lines = f.readlines()  # read every line
    for line in lines:
        ff.write('v ' + line)  # prepend 'v ' so each line becomes an obj vertex
ff.close()

All it does is prepend "v " to every line; after changing the extension to .obj, the file opens directly in MeshLab.
