DW_Object Detection Basics
Basic Concepts of Object Detection
- Object detection: on top of recognizing the categories of the targets in an image (image classification), we must also precisely locate each target and mark it with a bounding rectangle.
- Object location: a sliding window enumerates a large number of candidate boxes, covering the various plausible regions of the image; each candidate is then classified and refined. In this way every region yields five attributes, (class, x1, y1, x2, y2), and aggregating them gives the categories and coordinates of the objects in the image.
In addition, every box fed into the classification network receives a score (the confidence of that box), so the highest-scoring box is the most accurately recognized one, and its position is the final location of the detected target.
- Bounding box definition: an object detection label carries five pieces of information: besides the class label, it must also include the target's location, i.e. the target's bounding rectangle, the bounding box.
Two formats are commonly used to express a bbox: (x1, y1, x2, y2) and (x_c, y_c, w, h).
Each of the two formats is more convenient for computation in different scenarios later on.
The conversion between the two formats is implemented in utils.py:
import torch


def xy_to_cxcy(xy):
    """
    Convert bounding boxes from boundary coordinates (x_min, y_min, x_max, y_max) to center-size coordinates (c_x, c_y, w, h).

    :param xy: bounding boxes in boundary coordinates, a tensor of size (n_boxes, 4)
    :return: bounding boxes in center-size coordinates, a tensor of size (n_boxes, 4)
    """
    return torch.cat([(xy[:, 2:] + xy[:, :2]) / 2,  # c_x, c_y
                      xy[:, 2:] - xy[:, :2]], 1)  # w, h


def cxcy_to_xy(cxcy):
    """
    Convert bounding boxes from center-size coordinates (c_x, c_y, w, h) to boundary coordinates (x_min, y_min, x_max, y_max).

    :param cxcy: bounding boxes in center-size coordinates, a tensor of size (n_boxes, 4)
    :return: bounding boxes in boundary coordinates, a tensor of size (n_boxes, 4)
    """
    return torch.cat([cxcy[:, :2] - (cxcy[:, 2:] / 2),  # x_min, y_min
                      cxcy[:, :2] + (cxcy[:, 2:] / 2)], 1)  # x_max, y_max
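A quick sanity check of the round trip, with illustrative values:
# A 100x50 box with top-left (10, 20) and bottom-right (110, 70)
xy = torch.FloatTensor([[10, 20, 110, 70]])
cxcy = xy_to_cxcy(xy)
print(cxcy)              # tensor([[ 60.,  45., 100.,  50.]]) -> center (60, 45), size 100x50
print(cxcy_to_xy(cxcy))  # tensor([[ 10.,  20., 110.,  70.]]) -> converts back exactly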
- Intersection over Union (IoU)
Procedure:
1. Get the coordinates of the two boxes. Red box: top-left (red_x1, red_y1), bottom-right (red_x2, red_y2); green box: top-left (green_x1, green_y1), bottom-right (green_x2, green_y2).
2. Take the element-wise maximum of the two top-left corners, (max(red_x1, green_x1), max(red_y1, green_y1)), and the element-wise minimum of the two bottom-right corners, (min(red_x2, green_x2), min(red_y2, green_y2)).
3. Use the corners from step 2 to compute the area of the intersection (the yellow box): yellow_area.
4. Compute the areas of the red and green boxes: red_area and green_area.
5. iou = yellow_area / (red_area + green_area - yellow_area)
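As a sketch, these five steps translate almost line for line into plain Python for a single pair of boxes (the project itself uses the vectorized, set-to-set implementation below):
def iou_single(red, green):
    # Step 1: each box is given as (x1, y1, x2, y2)
    # Step 2: max of the top-left corners, min of the bottom-right corners
    ix1, iy1 = max(red[0], green[0]), max(red[1], green[1])
    ix2, iy2 = min(red[2], green[2]), min(red[3], green[3])
    # Step 3: intersection ("yellow") area; clamp to 0 when the boxes do not overlap
    yellow_area = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    # Step 4: areas of the red and green boxes
    red_area = (red[2] - red[0]) * (red[3] - red[1])
    green_area = (green[2] - green[0]) * (green[3] - green[1])
    # Step 5: intersection over union
    return yellow_area / (red_area + green_area - yellow_area)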
def find_intersection(set_1, set_2):
    """
    Find the intersection of every box combination between two sets of boxes that are in boundary coordinates.

    :param set_1: set 1, a tensor of dimensions (n1, 4)
    :param set_2: set 2, a tensor of dimensions (n2, 4)
    :return: intersection of each of the boxes in set 1 with respect to each of the boxes in set 2, a tensor of dimensions (n1, n2)
    """
    # PyTorch auto-broadcasts singleton dimensions
    lower_bounds = torch.max(set_1[:, :2].unsqueeze(1), set_2[:, :2].unsqueeze(0))  # (n1, n2, 2)
    upper_bounds = torch.min(set_1[:, 2:].unsqueeze(1), set_2[:, 2:].unsqueeze(0))  # (n1, n2, 2)
    intersection_dims = torch.clamp(upper_bounds - lower_bounds, min=0)  # (n1, n2, 2)
    return intersection_dims[:, :, 0] * intersection_dims[:, :, 1]  # (n1, n2)
def find_jaccard_overlap(set_1, set_2):
    """
    Find the Jaccard Overlap (IoU) of every box combination between two sets of boxes that are in boundary coordinates.

    :param set_1: set 1, a tensor of dimensions (n1, 4)
    :param set_2: set 2, a tensor of dimensions (n2, 4)
    :return: Jaccard Overlap of each of the boxes in set 1 with respect to each of the boxes in set 2, a tensor of dimensions (n1, n2)
    """
    # Find intersections
    intersection = find_intersection(set_1, set_2)  # (n1, n2)

    # Find areas of each box in both sets
    areas_set_1 = (set_1[:, 2] - set_1[:, 0]) * (set_1[:, 3] - set_1[:, 1])  # (n1)
    areas_set_2 = (set_2[:, 2] - set_2[:, 0]) * (set_2[:, 3] - set_2[:, 1])  # (n2)

    # Find the union
    # PyTorch auto-broadcasts singleton dimensions
    union = areas_set_1.unsqueeze(1) + areas_set_2.unsqueeze(0) - intersection  # (n1, n2)

    return intersection / union  # (n1, n2)
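For example, two partially overlapping boxes give the expected ratio:
set_1 = torch.FloatTensor([[0, 0, 10, 10]])
set_2 = torch.FloatTensor([[5, 5, 15, 15],
                           [20, 20, 30, 30]])
print(find_jaccard_overlap(set_1, set_2))
# tensor([[0.1429, 0.0000]]): intersection 5*5=25, union 100+100-25=175, 25/175 ≈ 0.1429;
# the second pair of boxes does not overlap at all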
The VOC Object Detection Dataset
Dataset overview
- The 20 classes of the VOC dataset can be grouped into 4 major categories; the class information is shown in the figure.
[Figure: basic statistics on the numbers of images and objects in the VOC dataset]
- Dataset structure
1. JPEGImages
This folder holds all of the images, i.e. every image used for training, validation, and testing.
2. ImageSets
This folder contains three subfolders: Layout, Main, and Segmentation.
- The Layout folder holds the filename lists for the train, valid, test, and train+valid splits.
- The Segmentation folder holds the filename lists of the train, valid, test, and train+valid splits used for segmentation.
- The Main folder holds, per class, the filenames of images containing that class; for example, cow_val lists the images in the valid split that contain cow objects.
3. Annotations
The Annotations folder stores the annotation of each image as an xml file, which can be opened with a text editor or a browser.
filename: the image's file name
size: the image's width and height; depth is the number of channels
object: one target in the image, containing the two parts below.
- First, the class name, here dog. pose gives the target's pose, here left. truncated indicates whether the target is truncated (1 = yes, 0 = no); in this example only the dog's head is visible, so truncated is 1. difficult = 0 means the target is not considered hard to recognize.
- Then the target's bbox, annotated here in [xmin, ymin, xmax, ymax] format, i.e. the top-left and bottom-right corner coordinates of the dog.
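To make these field descriptions concrete, here is a minimal sketch of how such an xml file can be parsed with the standard library; the project's actual parsing lives inside create_data_lists in utils.py and is not shown here, so treat this as illustrative:
import xml.etree.ElementTree as ET

def parse_annotation(xml_path):
    """Read one VOC xml file and return its boxes, labels, and difficult flags."""
    root = ET.parse(xml_path).getroot()
    boxes, labels, difficulties = [], [], []
    for obj in root.iter('object'):
        labels.append(obj.find('name').text.lower().strip())  # e.g. 'dog'
        difficulties.append(int(obj.find('difficult').text))  # 0 or 1
        bbox = obj.find('bndbox')
        # [xmin, ymin, xmax, ymax]: top-left and bottom-right corners
        boxes.append([int(bbox.find('xmin').text), int(bbox.find('ymin').text),
                      int(bbox.find('xmax').text), int(bbox.find('ymax').text)])
    return {'boxes': boxes, 'labels': labels, 'difficulties': difficulties}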
Building the dataloader
The create_data_lists.py script performs a series of data-preparation steps: it parses the xml annotation files (Annotations) ahead of time and consolidates their information into json files, so that the training script only needs to read the label information, already stored in the desired format, from those json files.
Note: this preprocessing is not mandatory and has nothing to do with the algorithm or the dataset itself; it is purely a matter of the developer's coding habits, and different detection frameworks handle it differently.
"""python
create_data_lists
"""
from utils import create_data_lists

if __name__ == '__main__':
    # voc07_path and voc12_path point to the datasets we need for training and testing;
    # output_folder is where the files needed to build the dataloader are generated
    # Adjust these paths to your actual setup; putting the datasets under the dataset directory keeps things consistent with the tutorial
    create_data_lists(voc07_path='../../../dataset/VOCdevkit/VOC2007',
                      voc12_path='../../../dataset/VOCdevkit/VOC2012',
                      output_folder='../../../dataset/VOCdevkit')
With the paths set, run the data-preparation script:
tiny_detector_demo$ python create_data_lists.py
Several json files are then generated under the dataset/VOCdevkit directory; they are the files that will actually be used later during training.
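Judging from how PascalVOCDataset reads these files later on, the generated files include TRAIN_images.json (a list of image paths) and TRAIN_objects.json (a list of per-image dicts with 'boxes', 'labels', and 'difficulties'); a quick check might look like this:
import json
import os

data_folder = '../../../dataset/VOCdevkit'  # the output_folder used above
with open(os.path.join(data_folder, 'TRAIN_images.json'), 'r') as j:
    train_images = json.load(j)
with open(os.path.join(data_folder, 'TRAIN_objects.json'), 'r') as j:
    train_objects = json.load(j)

print(len(train_images), len(train_objects))  # one objects entry per image
print(train_objects[0].keys())                # dict_keys(['boxes', 'labels', 'difficulties'])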
Next we walk through the code that builds the dataloader:
- First, let's see where and how the dataloader is defined during training.
# The following is a code fragment from train.py:
# Instantiating train_dataset and train_loader
train_dataset = PascalVOCDataset(data_folder,
                                 split='train',
                                 keep_difficult=keep_difficult)

# First instantiate the PascalVOCDataset class to get train_dataset,
# then pass train_dataset to torch.utils.data.DataLoader to get train_loader
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True,
                                           collate_fn=train_dataset.collate_fn, num_workers=workers,
                                           pin_memory=True)  # note that we're passing the collate function here
- Next, let's look at how PascalVOCDataset is defined.
The code lives in the datasets.py script. PascalVOCDataset inherits from torch.utils.data.Dataset and overrides the four methods __init__, __getitem__, __len__, and collate_fn; this is also the routine work we need to do whenever we build a dataset of our own.
"""python
PascalVOCDataset implementation
"""
import torch
from torch.utils.data import Dataset
import json
import os
from PIL import Image
from utils import transform


class PascalVOCDataset(Dataset):
    """
    A PyTorch Dataset class to be used in a PyTorch DataLoader to create batches.
    """

    # Initialize the relevant variables
    # and read the images and objects annotation info
    def __init__(self, data_folder, split, keep_difficult=False):
        """
        :param data_folder: folder where data files are stored
        :param split: split, one of 'TRAIN' or 'TEST'
        :param keep_difficult: keep or discard objects that are considered difficult to detect?
        """
        self.split = split.upper()  # force upper case so the input matches {'TRAIN', 'TEST'}

        assert self.split in {'TRAIN', 'TEST'}

        self.data_folder = data_folder
        self.keep_difficult = keep_difficult

        # Read data files
        with open(os.path.join(data_folder, self.split + '_images.json'), 'r') as j:
            self.images = json.load(j)
        with open(os.path.join(data_folder, self.split + '_objects.json'), 'r') as j:
            self.objects = json.load(j)

        assert len(self.images) == len(self.objects)

    # Read each image and its corresponding objects in turn,
    # apply the transform (data augmentation) to the image and objects,
    # and return the transformed image, its boxes, the class index of each box, and the difficult flags (True or False)
    def __getitem__(self, i):
        # Read image
        # * Note: in PyTorch, images should be read with Image.open() into PIL format, not with opencv
        # * Since Image.open() may load four-channel (RGBA) images, .convert('RGB') is needed to get RGB channels
        image = Image.open(self.images[i], mode='r')
        image = image.convert('RGB')

        # Read objects in this image (bounding boxes, labels, difficulties)
        objects = self.objects[i]
        boxes = torch.FloatTensor(objects['boxes'])  # (n_objects, 4)
        labels = torch.LongTensor(objects['labels'])  # (n_objects)
        difficulties = torch.ByteTensor(objects['difficulties'])  # (n_objects)

        # Discard difficult objects, if desired
        # If self.keep_difficult is False, i.e. objects whose difficult flag is True should not be kept,
        # remove those objects here with a boolean mask (keep only entries where difficult == 0)
        if not self.keep_difficult:
            boxes = boxes[difficulties == 0]
            labels = labels[difficulties == 0]
            difficulties = difficulties[difficulties == 0]

        # Apply transformations
        # Apply the transform to the image we just read
        image, boxes, labels, difficulties = transform(image, boxes, labels, difficulties, split=self.split)

        return image, boxes, labels, difficulties
    # Return the total number of images, used to compute the number of batches
    def __len__(self):
        return len(self.images)

    # As we know, data is usually fed to the network one batch at a time during training,
    # while __getitem__ only reads a single image and its objects info.
    # How do we assemble the images and object info, read one by one, into batch form?
    # That is exactly what collate_fn does:
    # for a batch of images, collate_fn stacks them into a 4-D tensor with torch.stack(),
    # and the corresponding objects info is stored in one list each
    def collate_fn(self, batch):
        """
        Since each image may have a different number of objects, we need a collate function (to be passed to the DataLoader).

        This describes how to combine these tensors of different sizes. We use lists.

        Note: this need not be defined in this Class, can be standalone.

        :param batch: an iterable of N sets from __getitem__()
        :return: a tensor of images, lists of varying-size tensors of bounding boxes, labels, and difficulties
        """
        images = list()
        boxes = list()
        labels = list()
        difficulties = list()

        for b in batch:
            images.append(b[0])
            boxes.append(b[1])
            labels.append(b[2])
            difficulties.append(b[3])

        # (3, 224, 224) -> (N, 3, 224, 224)
        images = torch.stack(images, dim=0)

        return images, boxes, labels, difficulties  # tensor (N, 3, 224, 224), 3 lists of N tensors each
- About data augmentation
A very important step in building the dataset is the transform operation (data augmentation):
image, boxes, labels, difficulties = transform(image, boxes, labels, difficulties, split=self.split)
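Note that, unlike in image classification, a detection transform must also update the boxes whenever it changes the image geometrically, which is why the boxes are passed in and returned. The actual transform in utils.py is not shown here; as a minimal sketch under that assumption, the essential resize-and-normalize step might look like this (the real version additionally applies random augmentations for the 'TRAIN' split):
import torch
import torchvision.transforms.functional as FT

def simple_transform(image, boxes, labels, difficulties, split, dims=(224, 224)):
    """Hypothetical stand-in for utils.transform: resize + normalize only."""
    old_w, old_h = image.width, image.height
    image = FT.resize(image, list(dims))  # PIL image -> (224, 224)
    # Rescale the boxes from the old image size to the new one
    scale = torch.FloatTensor([dims[1] / old_w, dims[0] / old_h,
                               dims[1] / old_w, dims[0] / old_h])
    boxes = boxes * scale
    # To tensor, then normalize with ImageNet statistics
    image = FT.to_tensor(image)
    image = FT.normalize(image, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    return image, boxes, labels, difficulties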
- Finally, build the DataLoader
At this point we have turned the VOC data into a dataset, which we can now use to create the dataloader. PyTorch has already implemented this part for us; we only need to pass in the dataset we just created. Be sure to understand the relevant parameters.
"""python
DataLoader
"""
# Parameter notes:
# shuffle=True is usually set during training to shuffle the data order and improve the model's robustness
# num_workers is the number of worker processes used for data loading; set it according to your machine (on Windows, the default value 0 is recommended to avoid errors)
# pin_memory=True speeds up copying tensors from host memory to the GPU when the machine has enough RAM, because the tensors are placed in page-locked memory
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True,
                                           collate_fn=train_dataset.collate_fn, num_workers=workers,
                                           pin_memory=True)  # note that we're passing the collate function here
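As a quick usage check, pulling one batch from train_loader shows the structure produced by collate_fn:
for images, boxes, labels, difficulties in train_loader:
    print(images.shape)  # torch.Size([batch_size, 3, 224, 224])
    print(len(boxes))    # batch_size tensors, the i-th of shape (n_objects_i, 4)
    break                # just inspect the first batch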