Real Time End-to-End 3D Detection, Tracking and Motion Forecasting with a Single Convolutional Net

Motivation

This paper handles 3D object detection, tracking and motion forecasting with a single, fairly shallow network, representing the scene in BEV (bird's-eye view).
My guess is that this work is the prototype of the Perception part of the MP3 paper.
Input: a 4D tensor (X, Y, Z, T)
Output: N BEV maps with motion forecasts

Notes

This can result in catastrophic failures as downstream processes cannot recover from errors that appear at the beginning of the pipeline
– In the cascaded approach, downstream modules cannot correct the errors made by the upstream (front-end) modules.

We argue that this is important as tracking and prediction can help object detection. For example, leveraging tracking and prediction information can reduce detection false negatives when dealing with occluded or far away objects. False positives can also be reduced by accumulating evidence over time.
– A benefit of end-to-end: detection, tracking and forecasting provide evidence for one another.

– The BBOX prediction scheme in this paper borrows from:
SSD: Single shot multibox detector

– The use of 2D convolutions instead of 3D convolutions follows:
Multi-view 3d object detection network for autonomous driving

Voxel Representation

We then assign a binary indicator for each voxel encoding whether the voxel is occupied. Instead, we perform 2D convolutions and treat the height dimension as the channel dimension.

– [cc] A fairly classic treatment; a sketch follows below.
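A minimal sketch of this voxelization, assuming a point cloud already in the ego frame; the grid ranges and the 0.2 m resolution are illustrative choices, not the paper's exact settings:

```python
# Binary occupancy voxelization with the height axis as the channel axis.
import numpy as np

def voxelize_bev(points, x_range=(0.0, 70.4), y_range=(-40.0, 40.0),
                 z_range=(-2.0, 1.0), resolution=0.2):
    """points: (N, 3) array of (x, y, z). Returns a (C, H, W) binary grid
    where the height dimension Z plays the role of the channel dimension C,
    so that ordinary 2D convolutions can be used downstream."""
    H = int((x_range[1] - x_range[0]) / resolution)
    W = int((y_range[1] - y_range[0]) / resolution)
    C = int((z_range[1] - z_range[0]) / resolution)
    grid = np.zeros((C, H, W), dtype=np.float32)

    # Discretize each point into voxel indices.
    ix = ((points[:, 0] - x_range[0]) / resolution).astype(int)
    iy = ((points[:, 1] - y_range[0]) / resolution).astype(int)
    iz = ((points[:, 2] - z_range[0]) / resolution).astype(int)

    # Keep only points inside the grid, then mark their voxels as occupied.
    valid = (0 <= ix) & (ix < H) & (0 <= iy) & (iy < W) & (0 <= iz) & (iz < C)
    grid[iz[valid], ix[valid], iy[valid]] = 1.0
    return grid
```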

Adding Temporal Information

we take all the 3D points from the past n frames and perform a change of coordinates to represent them in the current vehicle coordinate system.

[cc] Past frames are transformed into the current frame's coordinate system. The paper does not say how the transform is done! A simple approach I can think of: derive R/T from the current vehicle pose (6 DoF) and back-transform the earlier frames; a sketch follows the next quote.

we can append multiple frames along a new temporal dimension to create a 4D tensor.
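One plausible reading of the unspecified transform, assuming each frame carries a hypothetical 4×4 ego pose (ego frame → world); `T_past`, `T_cur` and the reuse of `voxelize_bev` from the sketch above are my assumptions:

```python
# Align past sweeps to the current ego frame, then stack into a 4D tensor.
import numpy as np

def to_current_frame(points_past, T_past, T_cur):
    """Map (N, 3) points from a past ego frame into the current ego frame.
    T_past, T_cur: 4x4 ego poses (ego frame -> world)."""
    T = np.linalg.inv(T_cur) @ T_past              # past ego -> current ego
    homo = np.hstack([points_past, np.ones((len(points_past), 1))])
    return (homo @ T.T)[:, :3]

def build_4d_tensor(point_clouds, poses):
    """Align the past n frames to the newest one, voxelize each one
    (reusing voxelize_bev from the sketch above) and stack along a new
    temporal axis, yielding the (n, C, H, W) 4D input tensor."""
    T_cur = poses[-1]
    frames = [voxelize_bev(to_current_frame(pts, T, T_cur))
              for pts, T in zip(point_clouds, poses)]
    return np.stack(frames, axis=0)
```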

Model Formulation

The network takes the 4D input tensor and regresses directly to object bounding boxes at different timestamps without using region proposals.

  • Early Fusion

[CC] The backbone is still a VGG16, trimmed down.

we first use a 1D convolution with kernel size n on the temporal dimension to reduce the temporal dimension from n to 1.

– A single 1D convolution fuses the frames across time; see the sketch below.
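A minimal PyTorch sketch of early fusion as described, assuming the input is shaped (batch, n, C, H, W) with n past frames and C height channels; realizing the temporal 1D convolution as a `Conv3d` with kernel (n, 1, 1) is my implementation choice, not the paper's stated one:

```python
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    def __init__(self, n_frames: int, channels: int):
        super().__init__()
        # Kernel (n, 1, 1) acts as a 1D convolution over the time axis only,
        # collapsing n frames to 1 while leaving the spatial grid untouched.
        self.temporal = nn.Conv3d(channels, channels,
                                  kernel_size=(n_frames, 1, 1))

    def forward(self, x):                  # x: (B, n, C, H, W)
        x = x.permute(0, 2, 1, 3, 4)       # -> (B, C, n, H, W)
        x = self.temporal(x)               # -> (B, C, 1, H, W)
        return x.squeeze(2)                # -> (B, C, H, W), fed to the VGG16
```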

  • Late Fusion
    but instead perform 3D convolution with kernel size 3 × 3 × 3 for 2 layers without padding on the temporal dimension, which reduces the temporal dimension from n to 1
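A corresponding late-fusion sketch, assuming n = 5 so that two 3×3×3 convolutions without temporal padding shrink the time axis 5 → 3 → 1; the spatial padding and ReLUs are assumptions:

```python
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # padding=(0, 1, 1): no padding on time, keep spatial resolution.
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3,
                               padding=(0, 1, 1))
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3,
                               padding=(0, 1, 1))

    def forward(self, x):                  # x: (B, C, 5, H, W)
        x = torch.relu(self.conv1(x))      # -> (B, C, 3, H, W)
        x = torch.relu(self.conv2(x))      # -> (B, C, 1, H, W)
        return x.squeeze(2)                # -> (B, C, H, W)
```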

Motion Forecasting

We then add two branches of convolution layers as shown in Fig. 5. The first one performs binary classification to predict the probability of being a vehicle. The second one predicts the bounding box over the current frame as well as n − 1 frames into the future.

[cc] The structure of these two branch networks is not described.
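Since the branches are not described, here is one plausible minimal head, assuming K = 6 predefined boxes per location and 6 regression parameters (x, y, w, h, sin, cos) per box per output timestamp (current frame plus n−1 future frames):

```python
import torch.nn as nn

class DetectionHead(nn.Module):
    def __init__(self, in_ch: int, K: int = 6, n_frames: int = 5):
        super().__init__()
        # Branch 1: vehicle / background probability per predefined box.
        self.cls = nn.Conv2d(in_ch, K, kernel_size=3, padding=1)
        # Branch 2: 6 box parameters per predefined box per timestamp.
        self.reg = nn.Conv2d(in_ch, K * n_frames * 6, kernel_size=3, padding=1)

    def forward(self, feat):               # feat: (B, in_ch, H, W)
        return self.cls(feat), self.reg(feat)
```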

Following SSD [17], we use multiple predefined boxes for each feature map location. In total there are 6 predefined boxes per feature map location, denoted as $a_{k,i,j}$, where $i = 1, \dots, I$, $j = 1, \dots, J$ is the location in the feature map and $k = 1, \dots, K$ ranges over the predefined boxes.

[CC] This has a strong flavor of early YOLO.

Notice that we do not use predefined heading angles

[CC] The BBOX heading is regressed as a predicted value.

for each predefined box $a_{k,i,j}$, our network predicts the corresponding normalized location offsets $\hat{l}_x, \hat{l}_y$, log-normalized sizes $\hat{s}_w, \hat{s}_h$ and heading parameters $\hat{a}_{\sin}, \hat{a}_{\cos}$.
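A sketch of how such a prediction could be decoded back into a box under this parameterization; the anchor fields and the function shape are my assumptions, not the paper's:

```python
import numpy as np

def decode_box(anchor, pred):
    """anchor: dict with x, y, w, h. pred: (lx, ly, sw, sh, a_sin, a_cos)."""
    lx, ly, sw, sh, a_sin, a_cos = pred
    x = anchor["x"] + lx * anchor["w"]     # de-normalize location offsets
    y = anchor["y"] + ly * anchor["h"]
    w = anchor["w"] * np.exp(sw)           # invert log-normalized sizes
    h = anchor["h"] * np.exp(sh)
    theta = np.arctan2(a_sin, a_cos)       # heading from the sin/cos pair
    return x, y, w, h, theta
```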

When there is overlap between detections from current and past’s future predictions, they are considered to be the same object and their bounding boxes will simply be averaged.

[CC] The predictions from the current frame and from past frames are combined by simply averaging to get the final BBOX position. That feels a bit crude; even a Kalman filter would probably do better. The paper does not show how this step is implemented, whether hand-crafted or inside the network; from the description it sounds like it is done by the NN, perhaps via average pooling???
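A rough hand-crafted reading of this averaging step, assuming axis-aligned corner boxes and IoU (the paper's boxes are rotated) and a hypothetical 0.5 threshold:

```python
import numpy as np

def iou(a, b):
    """a, b: (x1, y1, x2, y2) corner boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def fuse(current_boxes, past_future_boxes, thresh=0.5):
    """Average each current detection with the overlapping predictions
    that past frames made for this timestamp (treated as the same object)."""
    fused = []
    for cur in current_boxes:
        matches = [p for p in past_future_boxes if iou(cur, p) > thresh]
        fused.append(np.mean([cur] + matches, axis=0))
    return fused
```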

Loss Function and Training

[equation: the overall loss, summing the classification loss and the regression losses over frames]
where t is the current frame and w represents the model parameters.

We employ as classification loss binary cross-entropy computed over all locations and predefined boxes:
$$\ell_{cla}(w, t) = -\sum_{i,j,k}\Big(q_{i,j,k}\log p_{i,j,k}(w) + (1 - q_{i,j,k})\log\big(1 - p_{i,j,k}(w)\big)\Big)$$
where $i, j, k$ are the indices over feature map locations and predefined boxes, $q_{i,j,k}$ is the class label (i.e. $q_{i,j,k} = 1$ for vehicle and 0 for background) and $p_{i,j,k}$ is the predicted probability for vehicle.

[CC] The classification loss is a binary cross-entropy, measuring whether each BBOX is classified correctly.
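The same loss in a minimal PyTorch form; flattening over all $(i, j, k)$ is implicit in the tensor shapes:

```python
import torch
import torch.nn.functional as F

def classification_loss(p, q):
    """p: predicted vehicle probabilities; q: binary labels (as floats),
    both covering every feature map location and predefined box."""
    return F.binary_cross_entropy(p, q, reduction="sum")
```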

Thus we define the regression targets as
$$l_x = \frac{x^{GT} - x^{a}}{w^{a}},\quad l_y = \frac{y^{GT} - y^{a}}{h^{a}},\quad s_w = \log\frac{w^{GT}}{w^{a}},\quad s_h = \log\frac{h^{GT}}{h^{a}},\quad a_{\sin} = \sin\theta^{GT},\quad a_{\cos} = \cos\theta^{GT}$$
We use a weighted smooth L1 loss over all regression targets, where smooth L1 is defined as:
$$\mathrm{smooth}_{L1}(x) = \begin{cases}0.5\,x^{2} & \text{if } |x| < 1\\ |x| - 0.5 & \text{otherwise}\end{cases}$$
For each predicted box, we first find the ground truth box with the biggest overlap in terms of IoU. If the IoU is bigger than a fixed threshold (0.4 in practice), we assign this ground truth box as $\bar{a}_{k,i,j}$ and assign 1 to its corresponding label $q_{i,j,k}$.

[cc] This loss function is the "hinge" function mentioned in MP3. Overall the paper is vague about the details, probably because they touch on the company's internal implementation.
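A minimal PyTorch version of the weighted smooth L1; the per-target weight vector is an assumption, since the paper does not list its weights:

```python
import torch

def smooth_l1(x):
    """Elementwise smooth L1 as defined above."""
    absx = x.abs()
    return torch.where(absx < 1.0, 0.5 * x ** 2, absx - 0.5)

def regression_loss(pred, target, weights):
    """pred, target: (..., 6) tensors for (lx, ly, sw, sh, a_sin, a_cos);
    weights: a (6,) tensor of per-target weights, broadcast over boxes."""
    return (weights * smooth_l1(pred - target)).sum()
```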
