Angle and Distance Detection Based on AprilTag


Abstract: From the detected AprilTag images, the distance and angle between the camera and the AprilTag can be extracted. These parameters can be used to localize a moving model car.

Keywords: AprilTag, homography matrix, Euler angles

Contents: Background · AprilTag Detection Results (detection result parameters · detection example) · Homography Decomposition (homography matrix · homography decomposition) · AprilTag Pose (direction detection · distance detection) · Summary

 

§00 Background

  In the Smart Vision Group of the 17th National College Student Smart Car Race, AprilTag is used for auxiliary localization on the competition field. This involves AprilTag detection and determining the AprilTag pose parameters under different viewing angles. By detecting two or more AprilTags, the position of the car model on the field plane can be determined.


▲ Figure 1 AprilTag codes of different families

  • Official website: https://april.eecs.umich.edu/software/apriltag.html
  • Git repository: https://github.com/AprilRobotics/apriltag

  So how can AprilTag detection be used to obtain the pose (distance and angle) between the camera and the AprilTag visual fiducial? This requires a closer look at the detection method and its results.

 

§01 AprilTag Detection Results


  To install the AprilTag detection package for Python:

pip install pupil-apriltags

  Under Linux, the apriltag package can instead be installed directly.

1.1 AprilTag Detection Result Parameters

  The AprilTag detector is constructed as follows:

from pupil_apriltags import Detector

at_detector = Detector(families='tag36h11',
                       nthreads=1,
                       quad_decimate=1.0,
                       quad_sigma=0.0,
                       refine_edges=1,
                       decode_sharpening=0.25,
                       debug=0)
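  Detection itself is then a single call on a grayscale image. Below is a minimal sketch (the image file name is a placeholder; the result attributes are the ones listed next):

import cv2
from pupil_apriltags import Detector

at_detector = Detector(families='tag36h11')

# 'tag_photo.jpg' is a placeholder name for any photo containing a tag36h11 tag
gray = cv2.cvtColor(cv2.imread('tag_photo.jpg'), cv2.COLOR_BGR2GRAY)

tags = at_detector.detect(gray)          # returns a list of Detection objects
for tag in tags:
    print(tag.tag_id, tag.center, tag.corners)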

  In Rotating AprilTag Code (旋转的Apriltag码), the apriltag library was used to detect the parameters of the AprilTags in the captured images.

  An introduction to these parameters is given in AprilTag Explained: Python Implementation (APRILTAG详解-PYTHON实现):

  • tag_family: the tag family, e.g. tag25h9, tag36h11;
  • tag_id: the decoded ID of the tag;
  • hamming: the number of bit errors corrected in the detection. Accepting a larger number of corrected error bits increases the probability of false positives;
  • goodness:
  • decision_margin: the average difference between the intensity of the data bits and the decision threshold. Larger values indicate a better-quality image and decode;
  • homography: the homography matrix mapping the canonical tag, with corners at (-1,1), (1,1), (1,-1), (-1,-1), to the image;
  • center: the position of the tag center in the image;
  • corners: the positions of the tag's four corners in the image.

1.1.1 Detector Parameters

【Table 0-0-1 Parameters of the AprilTag detector】

| Option | Default | Explanation |
| --- | --- | --- |
| families | 'tag36h11' | Tag families, separated with a space |
| nthreads | 1 | Number of threads |
| quad_decimate | 2.0 | Detection of quads can be done on a lower-resolution image, improving speed at a cost of pose accuracy and a slight decrease in detection rate. Decoding the binary payload is still done at full resolution. Set this to 1.0 to use the full resolution. |
| quad_sigma | 0.0 | What Gaussian blur should be applied to the segmented image. The parameter is the standard deviation in pixels. Very noisy images benefit from non-zero values (e.g. 0.8). |
| refine_edges | 1 | When non-zero, the edges of each quad are adjusted to "snap to" strong gradients nearby. This is useful when decimation is employed, as it can substantially increase the quality of the initial quad estimate. Generally recommended to be on (1). Very computationally inexpensive. The option is ignored if quad_decimate = 1. |
| decode_sharpening | 0.25 | How much sharpening should be done to decoded images. This can help decode small tags but may or may not help in odd or low lighting conditions. |
| debug | 0 | If 1, will save debug images. Runs very slow. |

1.1.2 Meaning of the Detection Results

【Table 0-0-2 AprilTag detection result attributes】

| Attribute | Explanation |
| --- | --- |
| tag_family | The family of the tag. |
| tag_id | The decoded ID of the tag. |
| hamming | How many error bits were corrected. Note: accepting large numbers of corrected errors leads to greatly increased false positive rates. As of this implementation, the detector cannot detect tags with a hamming distance greater than 2. |
| decision_margin | A measure of the quality of the binary decoding process: the average difference between the intensity of a data bit and the decision threshold. Higher numbers roughly indicate better decodes. This is a reasonable measure of detection accuracy only for very small tags; it is not effective for larger tags, where sampling anywhere within a bit cell would still give a good detection. |
| homography | The 3x3 homography matrix describing the projection from an "ideal" tag (with corners at (-1,1), (1,1), (1,-1), and (-1,-1)) to pixels in the image. |
| center | The center of the detection in image pixel coordinates. |
| corners | The corners of the tag in image pixel coordinates. These always wrap counter-clockwise around the tag. |
| pose_R* | Rotation matrix of the pose estimate. |
| pose_t* | Translation of the pose estimate. |
| pose_err* | Object-space error of the estimation. |
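  The pose_R, pose_t and pose_err attributes are only filled in when pose estimation is requested. With pupil-apriltags this means passing the calibrated camera parameters and the physical tag size to detect(). A sketch is given below; the intrinsic values are taken from the calibration result in section 2.2.1 further down, and TAG_SIZE_M and the file name are hypothetical:

import cv2
from pupil_apriltags import Detector

# fx, fy, cx, cy taken from the intrinsic matrix obtained in section 2.2.1 below
FX, FY, CX, CY = 15078.63, 15072.38, 654.54, 314.86
TAG_SIZE_M = 0.05                       # hypothetical physical edge length of the tag, in meters

at_detector = Detector(families='tag36h11')
gray = cv2.cvtColor(cv2.imread('tag_photo.jpg'), cv2.COLOR_BGR2GRAY)   # placeholder file name

tags = at_detector.detect(gray, estimate_tag_pose=True,
                          camera_params=(FX, FY, CX, CY), tag_size=TAG_SIZE_M)
for tag in tags:
    print(tag.pose_R)                   # 3x3 rotation matrix of the tag in the camera frame
    print(tag.pose_t)                   # translation vector, in the same units as tag_size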

1.2 Detection Example

  The following uses an actual photograph containing an AprilTag to demonstrate the detection.

1.2.1 Detection Process

(1) Test image

  This is a photo of the small AprilTag positioning cube taken in Rotating AprilTag Code (旋转的Apriltag码). It simulates the AprilTag cubes used on the Smart Vision Group competition field to localize the car model.


▲ Figure 1.2.1 The image used for detection

import cv2
import matplotlib.pyplot as plt

img = cv2.imread(procfile)                 # procfile: path of the photo shown above
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

plt.clf()
plt.figure(figsize=(12,12))
plt.axis("off")
plt.imshow(gray, cmap=plt.cm.gray)

(2) Detection results

import apriltag                            # the Linux apriltag package (not pupil_apriltags)

atd = apriltag.Detector(apriltag.DetectorOptions(families='tag36h11 tag25h9'))
tags = atd.detect(gray)

print(tags)

  Below is the detection output; as can be seen, only one AprilTag was detected.

[Detection(tag_family=b'tag25h9', tag_id=0, hamming=0, goodness=0.0, decision_margin=84.82856750488281, homography=array([[-7.34829223e-01, -2.23934792e-02, -3.21568120e+00],
       [-8.83848839e-03, -6.92308636e-01, -2.31210225e+00],
       [-1.24067453e-05, -2.62518965e-05, -7.28615154e-03]]), center=array([441.3415206 , 317.32832351]), corners=array([[339.21502686, 222.27757263],
       [540.14733887, 223.94987488],
       [542.39001465, 411.37576294],
       [342.91049194, 410.35256958]]))]

  The figure below shows the positions of the key points of the detected AprilTag (the four corners and the center point).

▲ Figure 1.2.2 The detected AprilTag and the positions of its key points

for tag in tags:
    for c in tag.corners:
        print(c)
        cv2.circle(img, tuple(c.astype(int)), 4, (255, 0, 0), 2)

    cv2.circle(img, tuple(tag.center.astype(int)), 4, (2,180,200), 4)

plt.clf()
plt.figure(figsize=(12,12))
plt.axis("off")
plt.imshow(img)

1.2.2 Homography Matrix

  The numerical values of the homography matrix can be read from the result above:

tags[0].homography: 
[[-7.34829223e-01 -2.23934792e-02 -3.21568120e+00]
 [-8.83848839e-03 -6.92308636e-01 -2.31210225e+00]
 [-1.24067453e-05 -2.62518965e-05 -7.28615154e-03]]

  Use the homography matrix to apply the inverse transform to the corners in tags:

$$O = H^{-1} \cdot Corners^{T}$$

# corner = tags[0].corners and homo = tags[0].homography from the detection above;
# ones and linalg come from numpy (from numpy import *)
imgpos = ones([4, 3])*0                    # homogeneous coordinates with the third component set to 0
imgpos[:,:2] = corner
print("imgpos:\n{}".format(imgpos))

invcorner = linalg.inv(homo).dot(imgpos.T)
print("invcorner.T:\n{}".format(invcorner.T))
imgpos:
[[339.21502686 222.27757263   0.        ]
 [540.14733887 223.94987488   0.        ]
 [542.39001465 411.37576294   0.        ]
 [342.91049194 410.35256958   0.        ]]
invcorner.T:
[[-460.32336174 -321.67884815    1.94283559]
 [-735.810068   -322.14963888    2.41362631]
 [-734.82441934 -596.17990575    3.39927498]
 [-461.3157468  -596.64396008    2.93522065]]

  If the third (homogeneous) component of Corners is instead set to 1 in the inverse transform, the result is as follows:

imgpos = ones([4, 3])
imgpos[:,:2] = corner
print("imgpos:\n{}".format(imgpos))

invcorner = linalg.inv(homo).dot(imgpos.T)
print("invcorner.T:\n{}".format(invcorner.T))
imgpos:
[[339.21502686 222.27757263   1.        ]
 [540.14733887 223.94987488   1.        ]
 [542.39001465 411.37576294   1.        ]
 [342.91049194 410.35256958   1.        ]]
invcorner.T:
[[ 137.97874849  137.97874849 -137.97874849]
 [-137.50795777  137.50795777 -137.50795777]
 [-136.52230911 -136.52230911 -136.52230911]
 [ 136.98636343 -136.98636343 -136.98636343]]

  It can be seen that the four inverse-transformed points all lie at $\left[ (1,1),\ (-1,1),\ (-1,-1),\ (1,-1) \right] \times 137$.

  So a question arises: what exactly is this factor of 137?
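  One way to see where the 137 comes from (a minimal sketch, reusing homo and imgpos from the code above): the inverse-transformed points are homogeneous coordinates, which are only defined up to a common scale. Dividing each point by its third component recovers the canonical ±1 corners, so the 137 is simply the arbitrary homogeneous scale factor.

from numpy import linalg

# homo and imgpos (with third component 1) are the variables defined in the previous snippet
invcorner = linalg.inv(homo).dot(imgpos.T)
normalized = invcorner / invcorner[2, :]      # divide each column by its homogeneous component
print(normalized.T)
# Expected rows (up to numerical noise): (-1,-1,1), (1,-1,1), (1,1,1), (-1,1,1)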

 

§02 Homography Decomposition


2.1 The Homography Matrix

  When a camera photographs a planar object, the mapping between points on the object plane and points in the image is a homography (a short derivation is given after the list below):

$$H = K\left[ R \mid t \right]$$
where:

  • H: the homography matrix;
  • K: the camera intrinsic matrix;
  • R: the rotation matrix;
  • t: the translation vector.
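  To see why this mapping reduces to a 3×3 matrix, consider points on the world plane $Z = 0$ (a short derivation for clarity; $r_1, r_2, r_3$ are the columns of $R$ and $s$ is a scale factor):

$$s\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = K\left[\, r_1 \;\; r_2 \;\; r_3 \;\; t \,\right]\begin{pmatrix} X \\ Y \\ 0 \\ 1 \end{pmatrix} = \underbrace{K\left[\, r_1 \;\; r_2 \;\; t \,\right]}_{H}\begin{pmatrix} X \\ Y \\ 1 \end{pmatrix}$$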

2.1.1 Computing the Homography Matrix

  The homography between two planes can be computed with OpenCV's findHomography() function:

pts_src = np.float32(pts_src)
pts_dst = np.float32(pts_dst)
H, status = cv2.findHomography(pts_src, pts_dst)

  Here pts_src and pts_dst are the corresponding points on the two planes. In principle, at least four point correspondences are needed to determine the homography between two planes (a consistency check against the detector's homography is sketched below).
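  As a sanity check (a sketch, reusing the tags result from section 1.2), the detector's homography can be reproduced, up to scale, from the four canonical corners and the detected pixel corners. The corner ordering below matches the inverse-transform result seen earlier:

import cv2
import numpy as np

# Canonical corners, ordered to match tags[0].corners (see the inverse transform above)
pts_src = np.float32([[-1, -1], [1, -1], [1, 1], [-1, 1]])
pts_dst = np.float32(tags[0].corners)

H, status = cv2.findHomography(pts_src, pts_dst)

# Homographies are defined up to scale, so normalize before comparing
print(H / H[2, 2])
print(tags[0].homography / tags[0].homography[2, 2])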

2.2 Homography Decomposition

  Homography decomposition recovers the rotation $R$, the translation $t$ and the plane normal $n$ from the matrix $H$, given the camera intrinsics $K$:

num, Rs, Ts, Ns  = cv2.decomposeHomographyMat(H, K)

where:

  • num: the number of possible solutions returned;
  • Rs: a list of candidate rotation matrices;
  • Ts: a list of candidate translation vectors;
  • Ns: a list of candidate plane normal vectors.

  For a detailed explanation see: OpenCV 3.4 - decomposeHomographyMat()

▲ Figure 2.2.1 Pinhole camera imaging model

  Note that this requires the camera intrinsic matrix $K$, which can be obtained from chessboard camera calibration.

2.2.1 Camera Calibration

(1) Chessboard

  The chessboards used for camera calibration are matte (non-reflective) alumina optical calibration boards with a 12×9 grid of squares; two boards of different sizes are used.


▲ Figure 2.2.2 The two chessboard calibration boards

【Table 2-2-1-1 Chessboard parameters】

| Chessboard | Length (mm) | Width (mm) |
| --- | --- | --- |
| 10 cm board | 72 | 54 |
| 7 cm board | 60 | 45 |

  From the parameters above, each black/white square of the small (7 cm) board has a side length of 60 mm / 12 = 5 mm.

(2) Calibration result

  According to the calibration carried out with the small chessboard in Camera Calibration and Camera Intrinsics/Extrinsics (相机校正与相机内参、外参), the camera intrinsic matrix is:

mtx:
[[1.50786300e+04 0.00000000e+00 6.54543821e+02]
 [0.00000000e+00 1.50723843e+04 3.14862050e+02]
 [0.00000000e+00 0.00000000e+00 1.00000000e+00]]


▲ Figure 2.2.3 Corner extraction result for the small chessboard
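  For reference, a minimal chessboard-calibration sketch with OpenCV is shown below. This is an illustration only, not the original calibration script; calibfiles is a hypothetical list of chessboard photo paths, and a board with 12×9 squares has 11×8 inner corners:

import cv2
import numpy as np

CORNER_COLS, CORNER_ROWS = 11, 8        # inner corners of a 12x9-square board
SQUARE_SIZE = 5.0                       # square side length in mm (small board)

# 3D coordinates of the chessboard corners in the board plane (Z = 0)
objp = np.zeros((CORNER_ROWS * CORNER_COLS, 3), np.float32)
objp[:, :2] = np.mgrid[0:CORNER_COLS, 0:CORNER_ROWS].T.reshape(-1, 2) * SQUARE_SIZE

objpoints, imgpoints = [], []
for fn in calibfiles:                   # calibfiles: hypothetical list of chessboard image paths
    gray = cv2.cvtColor(cv2.imread(fn), cv2.COLOR_BGR2GRAY)
    ok, corners = cv2.findChessboardCorners(gray, (CORNER_COLS, CORNER_ROWS))
    if not ok:
        continue
    objpoints.append(objp)
    imgpoints.append(corners)

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
print(mtx)                              # the intrinsic matrix K used below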

2.2.2 Decomposing the AprilTag Homography

  Using the camera intrinsics above, decompose the homography obtained from the AprilTag detection:

num, Rs, Ts, Ns = cv2.decomposeHomographyMat(homo, mtx)

print("num: {}".format(num), "Rs: {}".format(Rs), "Ts: {}".format(Ts), "Ns: {}".format(Ns))
num: 4

Rs: [array([[ 0.99305437, -0.02904778,  0.11401423],
       [ 0.03513882,  0.99804001, -0.05178224],
       [-0.1122866 ,  0.05542891,  0.9921287 ]]), array([[ 0.99305437, -0.02904778,  0.11401423],
       [ 0.03513882,  0.99804001, -0.05178224],
       [-0.1122866 ,  0.05542891,  0.9921287 ]]), array([[ 0.8929123 , -0.19318671, -0.40667741],
       [-0.21364732,  0.61328157, -0.76042129],
       [ 0.39631105,  0.76587507,  0.50633283]]), array([[ 0.8929123 , -0.19318671, -0.40667741],
       [-0.21364732,  0.61328157, -0.76042129],
       [ 0.39631105,  0.76587507,  0.50633283]])]

Ts: [array([[ 0.08321933],
       [-0.08612246],
       [ 1.14462698]]), array([[-0.08321933],
       [ 0.08612246],
       [-1.14462698]]), array([[-0.50971631],
       [-0.8824852 ],
       [ 0.53471732]]), array([[ 0.50971631],
       [ 0.8824852 ],
       [-0.53471732]])]

Ns: [array([[ 0.32757084],
       [ 0.43692192],
       [-0.837733  ]]), array([[-0.32757084],
       [-0.43692192],
       [ 0.837733  ]]), array([[-0.24994745],
       [-0.39335465],
       [-0.88475894]]), array([[0.24994745],
       [0.39335465],
       [0.88475894]])]

  The output above gives the four possible decomposition solutions.

(1) Plane normal vector

  The first normal vector is:

print("","Ns[0].T: {}".format(Ns[0].T))
Ns[0].T: [[ 0.32757084  0.43692192 -0.837733  ]]

  It can be verified that its norm is 1:

ns1 = Ns[0]
dotn = ns1.T.dot(ns1)
print("dotn: {}".format(dotn))
dotn: [[1.]]
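  Only some of the four solutions are physically consistent. A common check (a heuristic sketch, not part of the original post, following the reference-point visibility constraint of the Malis-Vargas decomposition that OpenCV implements, which is an assumption here) keeps the solutions whose plane normal n satisfies m · n > 0 for every observed corner m in normalized camera coordinates. With the numbers above this keeps solutions 1 and 3, consistent with the later use of Rs[1] in section 3.1:

import numpy as np

# Back-project the detected pixel corners into normalized camera coordinates m = K^{-1} x
pix = np.hstack([tags[0].corners, np.ones((4, 1))])
m = np.linalg.inv(mtx).dot(pix.T).T                 # one corner per row

# Keep the decompositions whose plane normal passes the visibility test m . n > 0
valid = [i for i in range(num) if np.all(m.dot(Ns[i]) > 0)]
print("plausible solutions:", valid)                # e.g. [1, 3] for the data above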

 

§03 AprilTag Pose


3.1 Direction Detection

  The rotation matrices Rs obtained from the homography decomposition are converted into Euler angles following Conversion Between Rotation Matrices and Euler Angles (旋转矩阵与欧拉角之间的转换). The z-axis Euler angle is taken as the angle between the AprilTag's normal and the camera.

import os
import cv2
import apriltag
import matplotlib.pyplot as plt
from numpy import *                                  # the notebook uses numpy names (pi, array, where, sin, cos) directly
from scipy.spatial.transform import Rotation as R   # R.from_dcm requires scipy < 1.6 (renamed from_matrix later)

# filedim: list of image file names, apdir: their directory, mtx: camera intrinsics (defined earlier)
atd = apriltag.Detector(apriltag.DetectorOptions(families='tag36h11 tag25h9'))
count = 0

angledim1 = []
angledim2 = []
angledim3 = []
angledim4 = []

xyzstr = 'xyz'

for imgfile in filedim:
    procfile = os.path.join(apdir,imgfile)
    img = cv2.imread(procfile)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    tags = atd.detect(gray)
    if len(tags) == 0: continue

    homo = tags[0].homography
    num, Rs,Ts,Ns = cv2.decomposeHomographyMat(homo,mtx)

    r = R.from_dcm(Rs[0].T)
    eulerangle = r.as_euler(xyzstr).T*180/pi
    angledim1.append(eulerangle[2])
    r = R.from_dcm(Rs[1].T)
    eulerangle = r.as_euler(xyzstr).T*180/pi
    angledim2.append(eulerangle[2])
    r = R.from_dcm(Rs[2].T)
    eulerangle = r.as_euler(xyzstr).T*180/pi
    angledim3.append(eulerangle[2])
    r = R.from_dcm(Rs[3].T)
    eulerangle = r.as_euler(xyzstr).T*180/pi
    angledim4.append(eulerangle[2])

    count += 1
    if count%20 == 0: print(count)

plt.clf()
plt.figure(figsize=(10,7))

plt.plot(angledim2, label='Rs2')

plt.xlabel("Step")
plt.ylabel("Angle")
plt.grid(True)
plt.legend(loc='upper right')
plt.tight_layout()
plt.show()


▲ Figure 3.1.1 Variation of the AprilTag's z-axis Euler angle

  It can be seen that the angle varies very cleanly over the range from 20° to -50°.

ad2array = array(angledim2)
findid = where((ad2array < 20) & (ad2array > -50))
angle = ad2array[findid]
print("max(angle): {}".format(max(angle)), "min(angle): {}".format(min(angle)))
max(angle): 19.40025056861434

min(angle): -46.731389811080504


▲ Figure 3.1.2 The last segment of the rotation images


▲ Figure 3.1.3 The corresponding angles

3.1.1 Selecting Images with Good Angles

ad2array = array(angledim2)
findid = where((ad2array < 0) & (ad2array > -40))
angle = ad2array[findid]
print("max(angle): {}".format(max(angle)), "min(angle): {}".format(min(angle)))

print(angle)

plt.clf()
plt.figure(figsize=(12,7))
plt.plot(angle)
plt.xlabel("Step")
plt.ylabel("Angle")
plt.grid(True)
plt.tight_layout()


▲ Figure 3.1.4 Angle variation for the selected (good) data


▲ Figure 3.1.5 Drawing the AprilTag normal on the images

# Clear any previous frames from the GIF output directory
gifpath = '/home/aistudio/GIF'
gifdim = os.listdir(gifpath)
for f in gifdim:
    fn = os.path.join(gifpath, f)
    if os.path.isfile(fn):
        os.remove(fn)

atd = apriltag.Detector(apriltag.DetectorOptions(families='tag36h11 tag25h9'))
count = 0
angledim = []
xyzstr = 'xyz'

count = 0
for id in findid[0][90:]:
    imgfile = filedim[id]
    procfile = os.path.join(apdir,imgfile)
    img = cv2.imread(procfile)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    tags = atd.detect(gray)
    if len(tags) == 0: continue

    homo = tags[0].homography
    num, Rs,Ts,Ns = cv2.decomposeHomographyMat(homo,mtx)

    r = R.from_dcm(Rs[1].T)
    eulerangle = r.as_euler(xyzstr).T*180/pi
    angledim.append(eulerangle[2])

    for i in range(4):
        c = tags[0].corners[i]
        cv2.circle(img, tuple(c.astype(int)), 4, (255,0,0), 2)
    cv2.circle(img, tuple(tags[0].center.astype(int)), 4, (18, 200, 20), 2)

    # Convert the z Euler angle into an arrow direction (empirical offset and scale used in the original post)
    dirangle = (eulerangle[2]-5)*pi/180*1.8

    # Arrow of fixed length drawn from the tag center along that direction
    ARROW_LENGTH = 120
    deltax = sin(dirangle)*ARROW_LENGTH
    deltay = ARROW_LENGTH / 2 * cos(dirangle)
    newcenter = tags[0].center + array([deltax, deltay])

    cv2.circle(img, tuple(newcenter.astype(int)), 8, (255, 0, 0), 5)
    cv2.line(img, tuple(newcenter.astype(int)), tuple(tags[0].center.astype(int)), (255, 0, 0), 2)

    outfile = os.path.join(gifpath, '%03d.jpg'%count)
    cv2.imwrite(outfile, img)

    count += 1
    if count%10 == 0:
        print(count)

3.2 Distance Detection

  By the projection geometry, the distance from the camera to the AprilTag can be obtained from the relationship between the length of the detected tag's diagonal in the image and the camera's focal length, as sketched after the figure below.


▲ Figure 3.1 AprilTag distance calculation
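  A minimal sketch of this pinhole relation is given below. It is an illustration, not the original post's code: TAG_DIAGONAL_MM is a hypothetical physical diagonal length of the printed tag, and the focal length in pixels is taken from the intrinsic matrix mtx above.

import numpy as np

TAG_DIAGONAL_MM = 80.0                  # hypothetical physical diagonal of the printed tag, in mm

corners = tags[0].corners
# Average the two image-space diagonals of the detected quad, in pixels
diag_pixels = (np.linalg.norm(corners[0] - corners[2]) +
               np.linalg.norm(corners[1] - corners[3])) / 2.0

focal_pixels = (mtx[0, 0] + mtx[1, 1]) / 2.0        # average of fx and fy

# Similar triangles: diag_pixels / focal_pixels = TAG_DIAGONAL_MM / distance
distance_mm = focal_pixels * TAG_DIAGONAL_MM / diag_pixels
print("Estimated distance: {:.1f} mm".format(distance_mm))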

 

※ Summary ※


  From the detected AprilTag images, the distance and angle between the camera and the AprilTag can be extracted. These parameters can be used to localize a moving model car.

