0. Introduction

We set ourselves a goal earlier: to analyze R3LIVE in detail. As mentioned then, R3LIVE is a classic work that, like LVI-SAM, fuses LiDAR, inertial, and visual sensing into one SLAM system. Its structure is quite clean, with clearly separated roles for the IMU, the camera, and the LiDAR. Below we walk through the overall pipeline of the R3LIVE algorithm together with the code.

1. Preliminaries

Let's first look at the node-and-topic graph drawn by the blogger.
We can see that R3LIVE's node graph is quite clean: there are only two nodes, /r3live_LiDAR_front_end and /r3live_mapping.

The /r3live_LiDAR_front_end node consumes only the LiDAR data. It runs the LiDAR preprocessing from FAST-LIO2 and then publishes the processed full point cloud on /laser_cloud, the planar points on /laser_cloud_flat, and the edge points on /laser_cloud_sharp.

The other node is /r3live_mapping. Its inputs include not only the LiDAR data (/laser_cloud_flat) but also the IMU (/livox/imu) and the camera (/camera/image_color/compressed). The fusion happens inside this node, which ultimately produces R3LIVE's dense colored map.

2. Reading R3LIVE starting from the launch files

R3LIVE ships only three launch files. The first two are the online and offline variants; here we focus mainly on the online one, since the offline one simply omits a few parameter settings.

<launch>
    <!-- Subscribed topics -->
    <param name="/LiDAR_pointcloud_topic" type="string" value= "/laser_cloud_flat" />
    <param name="/IMU_topic" type="string" value= "/os_cloud_node/imu" />
    <param name="/Image_topic" type="string" value= "/NotAvail" />
    <param name="map_output_dir" type="string" value="$(env HOME)/r3live_output" />
    <rosparam command="load" file="$(find r3live)/../config/r3live_config.yaml" />

    <!-- set LiDAR type as ouster-64 spinning LiDAR -->
    <param name="/Lidar_front_end/lidar_type" type="int" value= "3" /> 
    <param name="/Lidar_front_end/point_step" type="int" value="1" />
    <param name="r3live_lio/lio_update_point_step" type="int" value="6" />

    <node pkg="r3live" type="r3live_LiDAR_front_end" name="r3live_LiDAR_front_end"  output="screen" required="true"/>
    <node pkg="r3live" type="r3live_mapping" name="r3live_mapping" output="screen" required="true" />

    <arg name="rviz" default="1" />
    <group if="$(arg rviz)">
        <node name="rvizvisualisation" pkg="rviz" type="rviz" output="log" args="-d $(find r3live)/../config/rviz/r3live_rviz_config_ouster.rviz" />
    </group>
 </launch>
  1. The first part defines the topics R3LIVE subscribes to (LiDAR, IMU, and image), sets the map output directory, and loads the config file.
  2. The second part sets parameters specific to the chosen LiDAR (here an Ouster OS1-64).
  3. The third part starts the r3live_LiDAR_front_end and r3live_mapping nodes, which are exactly what we will read in detail alongside the paper.
  4. Finally, RViz is started (when the rviz argument is 1).

3. Analysis of the r3live_config file

Lidar_front_end:
   lidar_type: 1   # 1 for Livox-avia, 3 for Ouster-OS1-64
   N_SCANS: 6
   using_raw_point: 1
   point_step: 1

r3live_common:
   if_dump_log: 0                   # If recording ESIKF update log. [default = 0]
   record_offline_map: 1            # If recording offline map. [default = 1]
   pub_pt_minimum_views: 3          # Publish points which have been rendered at least "pub_pt_minimum_views" times. [default = 3]
   minimum_pts_size: 0.01           # The minimum distance for every two points in Global map (unit in meter). [default = 0.01] 
   image_downsample_ratio: 1        # The downsample ratio of the input image. [default = 1]
   estimate_i2c_extrinsic: 1        # If enable estimate the extrinsic between camera and IMU. [default = 1] 
   estimate_intrinsic: 1            # If enable estimate the online intrinsic calibration of the camera lens. [default = 1] 
   maximum_vio_tracked_pts: 600     # The maximum points for tracking. [default = 600]
   append_global_map_point_step: 4  # The point step of append point to global map. [default = 4]

r3live_vio:
   image_width: 1280
   image_height: 1024
   camera_intrinsic:
      [863.4241, 0.0, 640.6808,
      0.0,  863.4171, 518.3392,
      0.0, 0.0, 1.0 ] 
   camera_dist_coeffs: [-0.1080, 0.1050, -1.2872e-04, 5.7923e-05, -0.0222]  #k1, k2, p1, p2, k3
   # Fine extrinsic value, from camera-LiDAR calibration.
   camera_ext_R:
         [-0.00113207, -0.0158688, 0.999873,
            -0.9999999,  -0.000486594, -0.00113994,
            0.000504622,  -0.999874,  -0.0158682]
   # camera_ext_t: [0.050166, 0.0474116, -0.0312415] 
   camera_ext_t: [0,0,0] 
   # Rough extrinsic value, from the CAD model; not accurate enough, but can be calibrated online on our datasets.
   # camera_ext_R:
   #    [0, 0, 1,
   #     -1, 0, 0,
   #     0, -1, 0]
   # camera_ext_t: [0,0,0] 

r3live_lio:        
   lio_update_point_step: 4   # Point step used for LIO update.  
   max_iteration: 2           # Maximum times of LIO esikf.
   lidar_time_delay: 0        # The time-offset between LiDAR and IMU, provided by user. 
   filter_size_corner: 0.30   
   filter_size_surf: 0.30
   filter_size_surf_z: 0.30
   filter_size_map: 0.30

This section is quite similar to the FAST-LIO2 configuration we discussed in detail before (unsurprisingly, since R3LIVE contains FAST-LIO2); on top of it, it adds the camera intrinsics and extrinsics needed by the VIO.

4. Where to start reading R3LIVE

In CMakeLists.txt we find the two blocks below, i.e., the cpp files that each of the two main nodes depends on. From the discussion above we also know that the r3live_LiDAR_front_end node fires first, which means we should start by dissecting src/loam/LiDAR_front_end.cpp.

add_executable(r3live_LiDAR_front_end src/loam/LiDAR_front_end.cpp)
target_link_libraries(r3live_LiDAR_front_end ${catkin_LIBRARIES} ${PCL_LIBRARIES})

add_executable(r3live_mapping src/r3live.cpp 
                src/r3live_lio.cpp
                src/loam/include/kd_tree/ikd_Tree.cpp
                src/loam/include/FOV_Checker/FOV_Checker.cpp 
                src/loam/IMU_Processing.cpp
                src/rgb_map/offline_map_recorder.cpp
                # From VIO
                src/r3live_vio.cpp
                src/optical_flow/lkpyramid.cpp
                src/rgb_map/rgbmap_tracker.cpp
                src/rgb_map/image_frame.cpp
                src/rgb_map/pointcloud_rgbd.cpp
              )
target_link_libraries(r3live_mapping 
                          ${catkin_LIBRARIES}
                          ${Boost_LIBRARIES}
                          ${Boost_FILESYSTEM_LIBRARY}
                          ${Boost_SERIALIZATION_LIBRARY} # serialization
                          ${OpenCV_LIBRARIES}
                          # ${OpenMVS_LIBRARIES}
                          pcl_common 
                          pcl_io)

So let's begin our R3LIVE journey with src/loam/LiDAR_front_end.cpp. Since this part closely mirrors FAST-LIO2, readers may want to finish part one of our FAST-LIO2 code-analysis series first and then move on to the R3LIVE series.

The LiDAR point cloud first goes through feature extraction in the LiDAR_front_end node, and the processed result is published to the other node via /laser_cloud_flat. Like FAST-LIO2, R3LIVE uses only the planar (surface) features in the ESIKF fusion. The logic is essentially unchanged from FAST-LIO2, so we will not spend much space on it here.

    n.param< int >( "Lidar_front_end/lidar_type", lidar_type, 0 );
    n.param< double >( "Lidar_front_end/blind", blind, 0.1 );
    n.param< double >( "Lidar_front_end/inf_bound", inf_bound, 4 );
    n.param< int >( "Lidar_front_end/N_SCANS", N_SCANS, 1 );
    n.param< int >( "Lidar_front_end/group_size", group_size, 8 );
    n.param< double >( "Lidar_front_end/disA", disA, 0.01 );
    n.param< double >( "Lidar_front_end/disB", disB, 0.1 );
    n.param< double >( "Lidar_front_end/p2l_ratio", p2l_ratio, 225 );
    n.param< double >( "Lidar_front_end/limit_maxmid", limit_maxmid, 6.25 );
    n.param< double >( "Lidar_front_end/limit_midmin", limit_midmin, 6.25 );
    n.param< double >( "Lidar_front_end/limit_maxmin", limit_maxmin, 3.24 );
    n.param< double >( "Lidar_front_end/jump_up_limit", jump_up_limit, 170.0 );
    n.param< double >( "Lidar_front_end/jump_down_limit", jump_down_limit, 8.0 );
    n.param< double >( "Lidar_front_end/cos160", cos160, 160.0 );
    n.param< double >( "Lidar_front_end/edgea", edgea, 2 );
    n.param< double >( "Lidar_front_end/edgeb", edgeb, 0.1 );
    n.param< double >( "Lidar_front_end/smallp_intersect", smallp_intersect, 172.5 );
    n.param< double >( "Lidar_front_end/smallp_ratio", smallp_ratio, 1.2 );
    n.param< int >( "Lidar_front_end/point_filter_num", point_filter_num, 1 );
    n.param< int >( "Lidar_front_end/point_step", g_LiDAR_sampling_point_step, 3 );
    n.param< int >( "Lidar_front_end/using_raw_point", g_if_using_raw_point, 1 );

    jump_up_limit = cos( jump_up_limit / 180 * M_PI );
    jump_down_limit = cos( jump_down_limit / 180 * M_PI );
    cos160 = cos( cos160 / 180 * M_PI );
    smallp_intersect = cos( smallp_intersect / 180 * M_PI );
    // ..............
    pub_full = n.advertise< sensor_msgs::PointCloud2 >( "/laser_cloud", 100 );
    pub_surf = n.advertise< sensor_msgs::PointCloud2 >( "/laser_cloud_flat", 100 );
    pub_corn = n.advertise< sensor_msgs::PointCloud2 >( "/laser_cloud_sharp", 100 );

5. References

https://blog.csdn.net/handily_1/article/details/122360134

https://icode.best/i/77363847009772

https://zhuanlan.zhihu.com/p/480275319

https://blog.csdn.net/handily_1/article/details/124230917

https://vincentqin.tech/posts/r3live/

https://www.jianshu.com/p/a35fa8ac0803