PASCAL VOC2012 segmentation label processing

The segmentation labels are stored as color images rather than the ideal form where each pixel's value is directly its class index, so we need to process them a bit.

The palette

If you open a label with Pillow, you'll see its mode is P, meaning it is a palette image: a single-channel index image mapped to colors through a palette. So the first step is to get that palette.

Pillow

In short, open any label file and read its palette:

#!/usr/bin/env python
# _*_ coding:utf-8 _*_
import numpy as np
from PIL import Image

if __name__ == '__main__':
    # any label file will do
    label_path = '/data/datasets/VOCtrainval_11-May-2012/VOCdevkit/VOC2012/SegmentationClass/2007_000032.png'
    # the palette is a flat [r, g, b, r, g, b, ...] list; reshape it into (N, 3) RGB rows
    palette = np.array(Image.open(label_path).getpalette()).reshape((-1, 3))
    print(palette[:21])
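
A side note (my own addition): since the label is already a palette ("P" mode) image, the pixel values Pillow gives you are the palette indices themselves, i.e. the class ids, with 255 marking the border. A minimal sketch reusing the same label_path:

import numpy as np
from PIL import Image

label = np.array(Image.open(label_path))  # palette indices, not colors
print(label.shape, np.unique(label))      # class ids in 0-20, plus 255 for the border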

The official MATLAB code

You can find it inside this archive: http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCdevkit_18-May-2011.tar
at VOCdevkit_18-May-2011/VOCdevkit/VOCcode/VOClabelcolormap.m:

% VOCLABELCOLORMAP Creates a label color map such that adjacent indices have different
% colors.  Useful for reading and writing index images which contain large indices,
% by encoding them as RGB images.
%
% CMAP = VOCLABELCOLORMAP(N) creates a label color map with N entries.
function cmap = labelcolormap(N)

if nargin==0
    N=256
end
cmap = zeros(N,3);
for i=1:N
    id = i-1; r=0;g=0;b=0;
    for j=0:7
        r = bitor(r, bitshift(bitget(id,1),7 - j));
        g = bitor(g, bitshift(bitget(id,2),7 - j));
        b = bitor(b, bitshift(bitget(id,3),7 - j));
        id = bitshift(id,-3);
    end
    cmap(i,1)=r; cmap(i,2)=g; cmap(i,3)=b;
end
cmap = cmap / 255;
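
For reference (not part of the original post), a rough Python port of the same bit trick; handy for checking against the palette we read with Pillow above. Note the MATLAB version divides by 255 at the end, while this port keeps plain 0-255 values:

import numpy as np

def voc_label_colormap(n=256):
    # port of VOClabelcolormap.m: spread each label id over R/G/B,
    # three bits per round, from the most significant bit down
    cmap = np.zeros((n, 3), dtype=np.uint8)
    for i in range(n):
        idx, r, g, b = i, 0, 0, 0
        for j in range(8):
            r |= ((idx >> 0) & 1) << (7 - j)
            g |= ((idx >> 1) & 1) << (7 - j)
            b |= ((idx >> 2) & 1) << (7 - j)
            idx >>= 3
        cmap[i] = (r, g, b)
    return cmap

print(voc_label_colormap()[:21])  # the 21 class colors listed below
print(voc_label_colormap()[255])  # [224 224 192], the border color (label id 255)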

Conclusion

Besides the original 20 classes plus background, there is also a border (void) color:

VOC_COLORMAP = [[0, 0, 0], [128, 0, 0], [0, 128, 0], [128, 128, 0],
                [0, 0, 128], [128, 0, 128], [0, 128, 128], [128, 128, 128],
                [64, 0, 0], [192, 0, 0], [64, 128, 0], [192, 128, 0],
                [64, 0, 128], [192, 0, 128], [64, 128, 128], [192, 128, 128],
                [0, 64, 0], [128, 64, 0], [0, 192, 0], [128, 192, 0],
                [0, 64, 128], [224, 224, 192]]

VOC_CLASSES = ['background', 'aeroplane', 'bicycle', 'bird', 'boat',
               'bottle', 'bus', 'car', 'cat', 'chair', 'cow',
               'diningtable', 'dog', 'horse', 'motorbike', 'person',
               'potted plant', 'sheep', 'sofa', 'train', 'tv/monitor', 'void']

References:
http://host.robots.ox.ac.uk/pascal/VOC/voc2012/segexamples/index.html
https://zhuanlan.zhihu.com/p/102303256


Mapping

Since there is no direct way in Python/NumPy to map an (r, g, b) array to a single value per pixel, we first encode each pixel into one integer, e.g.
r << 16 | g << 8 | b = r*256*256 + g*256 + b
and then map that integer to the class id.
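
A quick check of the encoding, using the border color as an example (this snippet is just my own illustration):

r, g, b = 224, 224, 192                        # the border color
key = r << 16 | g << 8 | b
print(key, 224 * 256 * 256 + 224 * 256 + 192)  # both print 14737600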

Method 1

Build a lookup array that covers every possible color, then index it with the image.
Note that the image must be cast to int first (otherwise the shifts would overflow uint8).
Here I map the border to 0 (the commented-out lines show the variant that maps it to 21 instead).

#!/usr/bin/env python
# _*_ coding:utf-8 _*_
import cv2
import numpy as np

VOC_COLORMAP = [[0, 0, 0], [128, 0, 0], [0, 128, 0], [128, 128, 0],
                [0, 0, 128], [128, 0, 128], [0, 128, 128], [128, 128, 128],
                [64, 0, 0], [192, 0, 0], [64, 128, 0], [192, 128, 0],
                [64, 0, 128], [192, 0, 128], [64, 128, 128], [192, 128, 128],
                [0, 64, 0], [128, 64, 0], [0, 192, 0], [128, 192, 0],
                [0, 64, 128], [224, 224, 192]]

VOC_CLASSES = ['background', 'aeroplane', 'bicycle', 'bird', 'boat',
               'bottle', 'bus', 'car', 'cat', 'chair', 'cow',
               'diningtable', 'dog', 'horse', 'motorbike', 'person',
               'potted plant', 'sheep', 'sofa', 'train', 'tv/monitor', 'void']

if __name__ == '__main__':
    label_path = '/data/datasets/VOCtrainval_11-May-2012/VOCdevkit/VOC2012/SegmentationClass/2007_000032.png'
    # flat lookup table: encoded color -> class id
    label_color_map = np.zeros(1 << 24, dtype=np.uint8)
    for i, (r, g, b) in enumerate(VOC_COLORMAP[:21]):
        # label_color_map[r << 16 | g << 8 | b] = i  # if the image were RGB
        label_color_map[b << 16 | g << 8 | r] = i    # cv2 reads BGR, so encode as b, g, r
    r, g, b = VOC_COLORMAP[-1]
    # label_color_map[r << 16 | g << 8 | b] = 21
    # label_color_map[r << 16 | g << 8 | b] = 0
    # label_color_map[b << 16 | g << 8 | r] = 21     # keep the border as its own class 21
    label_color_map[b << 16 | g << 8 | r] = 0        # fold the border into background
    # cast to int so the shifts don't overflow uint8
    img = cv2.imread(label_path).astype(np.int64)
    result = label_color_map[img[..., 0] << 16 | img[..., 1] << 8 | img[..., 2]].astype(np.uint8)
    print(np.unique(result))
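
As a sanity check (my own addition, assuming it runs right after the block above with label_path and result still in scope), the output should match the palette indices Pillow reads directly:

from PIL import Image

pil_label = np.array(Image.open(label_path))  # P-mode pixels are already class ids, 255 = border
pil_label[pil_label == 255] = 0               # fold the border into background, same choice as above
print((pil_label == result).all())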

Method 2

Basically the same as the first method, only using np.vectorize (honestly I'm not sure whether it's faster; np.vectorize is essentially a Python-level loop under the hood, so probably not; there's a rough timing sketch after the code).

#!/usr/bin/env python
# _*_ coding:utf-8 _*_
import cv2
import numpy as np

VOC_COLORMAP = [[0, 0, 0], [128, 0, 0], [0, 128, 0], [128, 128, 0],
                [0, 0, 128], [128, 0, 128], [0, 128, 128], [128, 128, 128],
                [64, 0, 0], [192, 0, 0], [64, 128, 0], [192, 128, 0],
                [64, 0, 128], [192, 0, 128], [64, 128, 128], [192, 128, 128],
                [0, 64, 0], [128, 64, 0], [0, 192, 0], [128, 192, 0],
                [0, 64, 128], [224, 224, 192]]

VOC_CLASSES = ['background', 'aeroplane', 'bicycle', 'bird', 'boat',
               'bottle', 'bus', 'car', 'cat', 'chair', 'cow',
               'diningtable', 'dog', 'horse', 'motorbike', 'person',
               'potted plant', 'sheep', 'sofa', 'train', 'tv/monitor', 'void']

if __name__ == '__main__':
    label_path = '/data/datasets/VOCtrainval_11-May-2012/VOCdevkit/VOC2012/SegmentationClass/2007_000032.png'
    # dict: encoded color -> class id
    label_color_map = {}
    for i, (r, g, b) in enumerate(VOC_COLORMAP[:21]):
        # label_color_map[r << 16 | g << 8 | b] = i  # if the image were RGB
        label_color_map[b << 16 | g << 8 | r] = i    # cv2 reads BGR, so encode as b, g, r
    r, g, b = VOC_COLORMAP[-1]
    # label_color_map[r << 16 | g << 8 | b] = 21
    # label_color_map[r << 16 | g << 8 | b] = 0
    # label_color_map[b << 16 | g << 8 | r] = 21     # keep the border as its own class 21
    label_color_map[b << 16 | g << 8 | r] = 0        # fold the border into background
    bgr2label = np.vectorize(label_color_map.get)
    # cast to int so the shifts don't overflow uint8
    img = cv2.imread(label_path).astype(np.int64)
    result = bgr2label(img[..., 0] << 16 | img[..., 1] << 8 | img[..., 2]).astype(np.uint8)
    print(np.unique(result))
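
Out of curiosity, a minimal self-contained timing sketch (my own, with a few made-up keys standing in for a VOC-sized label) comparing the flat lookup array against np.vectorize over a dict:

#!/usr/bin/env python
# _*_ coding:utf-8 _*_
import timeit

import numpy as np

if __name__ == '__main__':
    # a tiny fake mapping: a few encoded colors -> class ids
    keys = [0, 128 << 16, 128 << 8, 224 << 16 | 224 << 8 | 192]
    lookup_arr = np.zeros(1 << 24, dtype=np.uint8)
    lookup_dict = {}
    for i, k in enumerate(keys):
        lookup_arr[k] = i
        lookup_dict[k] = i
    vec_get = np.vectorize(lookup_dict.get)

    # random encoded "pixels", roughly the size of one VOC label
    encoded = np.random.choice(keys, size=(375, 500))
    print('array lookup:', timeit.timeit(lambda: lookup_arr[encoded], number=50))
    print('np.vectorize:', timeit.timeit(lambda: vec_get(encoded), number=50))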

