YOLOv3 Parameter Explanations and Q&A

Contents

Parameter Explanations

Training Q&A


Parameter Explanations



[net]
#Testing
#batch=1         // test: process one image at a time
#subdivisions=1
#Training
 batch=32        // number of images fed to the network per training iteration
 subdivisions=8  // each batch is split into this many mini-batches for the forward pass; here 32/8 = 4 images at a time
width=416        // input width/height; width must equal height and be a multiple of 32. A larger resolution helps detect smaller objects
height=416
channels=3
momentum=0.9     // momentum term; affects how quickly gradient descent converges, usually left at the default 0.9 (see Andrew Ng's deep-learning course for background)
decay=0.0005     // weight-decay (L2 regularization) coefficient to prevent overfitting
angle=0          // data augmentation: random rotation angle
saturation = 1.5 // data augmentation: random saturation adjustment
exposure = 1.5   // data augmentation: random exposure adjustment
hue=.1           // data augmentation: random hue adjustment

learning_rate=0.0001 // learning rate; the usual default is 0.001
burn_in=1000         // the learning rate ramps up over the first 1000 iterations; after that it decays according to the policy below
max_batches = 500200 // maximum number of training iterations (batches); training stops when this is reached
policy=steps         // learning-rate decay policy: exp, steps, constant, etc.
steps=400000,450000  // at iterations 400000 and 450000 the learning rate is multiplied by the corresponding scales (0.1, 0.1); see the sketch after this block
scales=.1,.1
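
As a quick illustration of how learning_rate, burn_in, steps, and scales interact, here is a small Python sketch of the schedule darknet applies under policy=steps (based on get_current_rate in darknet; the burn-in exponent power=4 is darknet's default and is an assumption here, since this cfg does not set it):

```python
def current_lr(iteration, base_lr=0.0001, burn_in=1000,
               steps=(400000, 450000), scales=(0.1, 0.1), power=4):
    """Learning rate at a given training iteration under policy=steps."""
    if iteration < burn_in:
        # warm-up: ramp from ~0 up to base_lr over the first burn_in batches
        return base_lr * (iteration / burn_in) ** power
    lr = base_lr
    for step, scale in zip(steps, scales):
        if iteration >= step:
            lr *= scale  # decay at every milestone already passed
    return lr

print(current_lr(500))     # ~6.25e-06, still warming up
print(current_lr(100000))  # 0.0001
print(current_lr(420000))  # about 1e-05 (decayed once)
print(current_lr(460000))  # about 1e-06 (decayed twice)
```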

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky  // leaky ReLU; the convolution output is batch-normalized first and then passed through the leaky ReLU nonlinearity

# Downsample

[convolutional]
batch_normalize=1
filters=64
size=3
stride=2
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=32
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear     // shortcut (residual) connection borrowed from ResNet; it allows the network to go deeper (see the sketch below)
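
To make the [shortcut] semantics concrete, here is a minimal NumPy sketch (an illustration, not darknet's code) of what from=-3 does: it element-wise adds the output of the layer three positions back to the previous layer's output, exactly like a ResNet residual connection; activation=linear means no extra nonlinearity is applied to the sum:

```python
import numpy as np

def shortcut(outputs, offset=-3):
    """outputs: feature maps of layers 0..n-1; the [shortcut] layer n
    with from=offset returns outputs[-1] + outputs[offset]."""
    return outputs[-1] + outputs[offset]

x = [np.ones((1, 64, 208, 208))] * 3   # three layers with matching shapes
print(shortcut(x).shape)               # (1, 64, 208, 208)
```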

# Downsample

[convolutional]
batch_normalize=1
filters=128
size=3
stride=2
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

# Downsample

[convolutional]
batch_normalize=1
filters=256
size=3
stride=2
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

# Downsample

[convolutional]
batch_normalize=1
filters=512
size=3
stride=2
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

# Downsample

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=2
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

######################

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=18   # 255 for COCO; filters = 3*(4+1+classes), here 3*(4+1+1) = 18 (see the check below)
activation=linear
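
The filters comment above deserves spelling out: each convolution feeding a [yolo] layer must have filters = (anchors used at that scale) * (4 box coordinates + 1 objectness score + classes). A minimal check in Python:

```python
def yolo_filters(classes, masks_per_scale=3):
    """Filters required in the conv layer directly before a [yolo] layer."""
    return masks_per_scale * (4 + 1 + classes)

assert yolo_filters(1) == 18    # this cfg: classes=1
assert yolo_filters(80) == 255  # the stock COCO cfg
```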

[yolo]
mask = 6,7,8
anchors = 10,13,  16,30,  33,23,  30,61,  62,45,  59,119,  116,90,  156,198,  373,326
classes=1
num=9
jitter=.3 // random crop augmentation to reduce overfitting; jitter is the crop parameter
ignore_thresh = .7  // IoU threshold that decides which predictions are ignored: when a predicted box's IoU with a ground-truth box exceeds ignore_thresh, its objectness (background) loss is not counted; otherwise the box is penalized as background (see the sketch below).
                     Purpose: this controls how many predicted boxes contribute to the loss. If ignore_thresh is very large (close to 1), almost no predictions are exempted from the background penalty; if it is very small, most predictions escape the penalty and the training signal weakens.
                     Setting: a value between 0.5 and 0.7 is typical; previously the small scale (13*13) used 0.7 and the 26*26 scale used 0.5. Here 0.5 was changed to 0.7.
                     Experimental result: AP = 0.5121 (a clear drop).
truth_thresh = 1    // default
random=1            // multi-scale training: if 1, every few iterations the input size is randomly resized between 320 and 608 in steps of 32; if 0, training always uses the width and height given above
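
To pin down what ignore_thresh gates, here is a simplified sketch of the objectness-loss rule from darknet's yolo_layer.c (simplified: the real code also applies truth_thresh and handles the box actually assigned to each ground truth separately):

```python
def objectness_delta(pred_obj, best_iou, ignore_thresh=0.7):
    """Objectness gradient for a prediction not assigned to any ground truth.
    best_iou is the prediction's highest IoU over all ground-truth boxes."""
    if best_iou > ignore_thresh:
        return 0.0            # good overlap: ignored, no background penalty
    return 0.0 - pred_obj     # otherwise penalized as background
```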

[route]
layers = -4

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[upsample]
stride=2

[route]
layers = -1, 61
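
These [route] and [upsample] layers build the feature-pyramid connections: layers = -4 re-exposes an earlier feature map, [upsample] stride=2 enlarges it by nearest-neighbor interpolation, and layers = -1, 61 concatenates it with layer 61's output along the channel axis. A NumPy sketch of both operations (NCHW layout assumed):

```python
import numpy as np

def upsample(x, stride=2):
    # nearest-neighbor upsampling of an NCHW feature map
    return x.repeat(stride, axis=2).repeat(stride, axis=3)

def route(*feature_maps):
    # [route] with several layers: concatenate along the channel axis
    return np.concatenate(feature_maps, axis=1)

a = np.ones((1, 256, 13, 13))        # output of the 1x1 conv above
b = np.ones((1, 512, 26, 26))        # output of layer 61
print(route(upsample(a), b).shape)   # (1, 768, 26, 26)
```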

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=18   # 255 for COCO
activation=linear

[yolo]
mask = 3,4,5
anchors = 10,13,  16,30,  33,23,  30,61,  62,45,  59,119,  116,90,  156,198,  373,326
classes=1
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1

[route]
layers = -4

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[upsample]
stride=2

[route]
layers = -1, 36

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=18   # 255 for COCO; 3*(4+1+1) = 18 here
activation=linear

[yolo]
mask = 0,1,2
anchors = 10,13,  16,30,  33,23,  30,61,  62,45,  59,119,  116,90,  156,198,  373,326  // the anchors are prior boxes that can be computed in advance from the command line; they depend on the training images, width, height,
                                                                                       	// and the number of clusters (which should match num below, i.e. how many anchors you want). They can be hand-picked or learned from the training samples with k-means.
classes=1
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1

Training Q&A

(Figure 1: log output of one training iteration)

Questions and solutions
If you see avg loss = nan, training has gone wrong. A Class=-nan on an individual line only means the object is too large or too small to be detected at that scale; this is normal.
When should training stop? When the loss has stopped decreasing or decreases extremely slowly; a loss of around 0.7 is usually good enough.
High accuracy on the training set but poor results on other test sets means overfitting: stop training earlier, or train with more samples.
How to improve detection accuracy (both IoU and classification accuracy)?

Set random=1 in the [yolo] layers to train over varied resolutions, or increase the input resolution itself, or recompute the anchor sizes for your own dataset (darknet.exe detector calc_anchors data/obj.data -num_of_clusters 9 -width 416 -height 416, then set the same 9 anchors in each of the 3 [yolo] layers in your cfg file); see the k-means sketch after this paragraph.
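
If you want to compute anchors outside of darknet, the usual method is k-means over the labeled box sizes with 1 - IoU as the distance. A minimal sketch, assuming boxes is an (N, 2) array of (w, h) pairs already scaled to the network input size (e.g. 416):

```python
import numpy as np

def wh_iou(box, centers):
    # IoU of one (w, h) box against each center, all anchored at the origin
    inter = np.minimum(box[0], centers[:, 0]) * np.minimum(box[1], centers[:, 1])
    union = box[0] * box[1] + centers[:, 0] * centers[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = boxes[rng.choice(len(boxes), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each box to the center it overlaps best (lowest 1 - IoU)
        assign = np.array([wh_iou(b, centers).argmax() for b in boxes])
        for j in range(k):
            if (assign == j).any():
                centers[j] = boxes[assign == j].mean(axis=0)
    # darknet lists anchors from smallest to largest
    return centers[np.argsort(centers.prod(axis=1))]
```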

How to add training samples?

Make the samples as varied as possible: brightness, rotation, background, object position, and size.

Add images without any objects, together with empty .txt label files, as negative samples.

The training images are small, but the images to be detected in practice are large — how can small objects be detected?

1. After training at 416*416, you can set a larger width and height in the cfg file to raise the resolution the network sees, making small objects more likely to be detected, without retraining.

2.

set `[route] layers = -1, 11`

set `[upsample] stride=4`

How much GPU memory do the models require?

(I have only used these two:)

[yolov3.cfg]  [236 MB, COCO, 80 classes]  [4 GB GPU-RAM]

[yolov3.cfg]  [194 MB, VOC, 20 classes]  [4 GB GPU-RAM]

[yolov3-tiny.cfg]  [34 MB, COCO, 80 classes]  [1 GB GPU-RAM]

[yolov3-tiny.cfg]  [26 MB, VOC, 20 classes]  [1 GB GPU-RAM]

How to train with multiple GPUs?
  1. First train the network on a single GPU for about the first 1000 iterations,
  2. then continue training from that checkpoint on multiple GPUs:

darknet.exe detector train data/voc.data cfg/yolov3-voc.cfg /backup/yolov3-voc_1000.weights -gpus 0,1,2,3

Which commands are used to train and test the network?

1. Detect an image: build\darknet\x64\darknet.exe detector test data/coco.data cfg/yolov3.cfg yolov3.weights  -thresh 0.25 xxx.jpg

2. Detect a video: change test to demo and xxx.jpg to xxx.mp4

3. Use an IP camera: change xxx.mp4 to http://192.168.0.80:8080/video?dummy=x.mjpg -i 0

4. Batch detection: append -dont_show -ext_output < data/train.txt >  result.txt

5. Smartphone camera: install an mjpeg-stream app (IP Webcam / Smart WebCam) and replace xxx.jpg with the stream it serves

How to evaluate the trained models?

build\darknet\x64\darknet.exe detector map data\defect.data cfg\yolov3.cfg backup\yolov3.weights

Run the command above on each saved weights file and pick the one with the highest IoU (intersection over union) and mAP (mean average precision); a minimal IoU sketch follows.
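
For reference, IoU is just intersection area over union area; a minimal sketch for axis-aligned boxes given as (x1, y1, x2, y2):

```python
def iou(a, b):
    # intersection rectangle (zero if the boxes do not overlap)
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```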
