[SLAM] 3D-2D Visual Odometry with opencv-python (following Gao Xiang's Visual SLAM: Fourteen Lectures)


Project Background

This project follows Gao Xiang's Visual SLAM: Fourteen Lectures. It is mainly an exercise to consolidate my own understanding, so the code is for reference only.

Straight to the Code

Note that this code was written against opencv-python 3.4.2.16 (OpenCV 4.x seemed too new and appears to be incompatible); if needed, pin the version with pip install opencv-python==3.4.2.16. Also, in the main block you need to supply your own RGB and depth images!
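To confirm which OpenCV build is actually active before running the script, a quick check (a minimal sketch; nothing here is specific to this project):

import cv2 as cv

# the script below was written against a 3.4.x build, e.g. 3.4.2.16
print(cv.__version__)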

import numpy as np
import cv2 as cv

def find_feature_matches(img_1, img_2):
    """寻找特征匹配的点

    Args:
        img_1: pass
        img_2: pass

    Returns:
        kp1: 
        kp2: 
        good_match: 

    """
    orb = cv.ORB_create()

    kp1 = orb.detect(img_1)
    kp2 = orb.detect(img_2)

    kp1, des1 = orb.compute(img_1, kp1)
    kp2, des2 = orb.compute(img_2, kp2)

    bf = cv.BFMatcher(cv.NORM_HAMMING)

    matches = bf.match(des1, des2)

    # find the smallest and largest descriptor distances among all matches
    min_distance = min(m.distance for m in matches)
    max_distance = max(m.distance for m in matches)

    print("Max dist:", max_distance)
    print("Min dist:", min_distance)

    # keep a match only if its distance is at most twice the minimum distance,
    # with an empirical floor of 30 (the same heuristic the book uses)
    good_match = [m for m in matches if m.distance <= max(2 * min_distance, 30.0)]

    return kp1, kp2, good_match


if __name__ == "__main__":
    img1 = cv.imread("1.png")
    img2 = cv.imread("2.png")
    depth1 = cv.imread("1_depth.png", -1) # depth maps must be read with flag -1 (IMREAD_UNCHANGED) to keep the 16-bit values
    depth2 = cv.imread("2_depth.png", -1)

    # match features between the two images
    keypoints_1, keypoints_2, matches = find_feature_matches(img1, img2)
    print("Total matches:", len(matches))

    # collect the pixel coordinates of the matched keypoints, mirroring the
    # pose_estimation_2d2d(keypoints_1, keypoints_2, matches) helper from the previous post
    pts1 = []
    pts2 = []
    for m in matches:
        pts1.append(keypoints_1[m.queryIdx].pt)
        pts2.append(keypoints_2[m.trainIdx].pt)
    # integer pixel coordinates are needed below to index into the depth image
    pts1 = np.int32(pts1)
    pts2 = np.int32(pts2)

    # build 3D points in the first camera's frame
    # the depth image is a single-channel, 16-bit unsigned image
    K = np.array([[520.9, 0, 325.1], [0, 521.0, 249.7], [0, 0, 1]])
    pts_3d = []
    pts_2d = []
    for i in range(pts1.shape[0]): # pts1.shape = (len(matches), 2), so this loops over every match
        p1 = pts1[i]
        d1 = depth1[p1[1], p1[0]] / 1000.0 # depth scale: 1000 raw units per metre here; note the book divides by 5000.0 for the TUM images
        if d1 == 0: # skip pixels with no valid depth reading
            continue
        # back-project through the intrinsics: X = (u - cx) * d / fx, Y = (v - cy) * d / fy
        p1 = (p1 - (K[0][2], K[1][2])) / (K[0][0], K[1][1]) * d1
        pts_3d.append([p1[0], p1[1], d1])
        pts_2d.append(pts2[i])

    print("Final number of valid 3D-2D pairs:", len(pts_3d))
    pts_3d = np.float64(pts_3d)
    pts_2d = np.float64(pts_2d)
    print("3D points:")
    print(pts_3d)

    # solvePnP returns a rotation *vector*; Rodrigues converts it to a rotation matrix
    flag, rvec, t = cv.solvePnP(pts_3d, pts_2d, K, None)
    R, jacobian = cv.Rodrigues(rvec)
    print("Rotation matrix R:\n", R)
    print("Translation vector t:\n", t)

Experiment

The two matrices in the final output differ somewhat from the previous post's results: the rotation matrix basically agrees, but the translation differs both from the previous post and from Gao's numbers. One likely contributor is the depth scale factor: solvePnP expresses t in the same units as the input 3D points, so dividing the raw depth by 1000 instead of the 5000 the book uses rescales the recovered translation while leaving the rotation untouched. (The 2D-2D result from the previous post is in any case only determined up to scale, so its t is not directly comparable.)
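To make the scale effect concrete, here is a small self-contained experiment on hypothetical synthetic data (not from the book): known 3D points are projected with a known pose, then solvePnP is run twice, once with the original points and once with the points multiplied by 5. The pixels are identical in both runs, so the rotation comes back the same while the translation scales by 5:

import numpy as np
import cv2 as cv

rng = np.random.default_rng(0)
K = np.array([[520.9, 0, 325.1], [0, 521.0, 249.7], [0, 0, 1]])
pts = rng.uniform([-1, -1, 4], [1, 1, 8], (50, 3))         # random points in front of the camera
rvec_gt = np.array([0.05, -0.02, 0.01])                     # small ground-truth rotation vector
t_gt = np.array([0.3, -0.1, 0.2])                           # ground-truth translation
img_pts, _ = cv.projectPoints(pts, rvec_gt, t_gt, K, None)  # ideal, noise-free observations

_, r1, t1 = cv.solvePnP(pts, img_pts, K, None)
_, r5, t5 = cv.solvePnP(pts * 5.0, img_pts, K, None)        # same pixels, 3D points 5x larger
print(t5 / t1)  # roughly [[5], [5], [5]], while r1 and r5 agree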