Entry Point

The radar module is not started on its own: there is no standalone radar launch file or dag file under Apollo/modules/perception/production/launch. Instead it is started through the perception dag file used by the fusion pipeline, located at Apollo/modules/perception/production/dag/dag_streaming_perception.dag

The launch file starts the process, while the dag file describes the topology of the whole pipeline and defines the channels each Component subscribes to.

Two components are started, front radar detection and rear radar detection. Both use the same detection entry point, RadarDetectionComponent.cc; the only difference is the parameter files they read.

  components {
    class_name: "RadarDetectionComponent"
    config {
      name: "FrontRadarDetection"
      config_file_path: "/apollo/modules/perception/production/conf/perception/radar/front_radar_component_conf.pb.txt"
      flag_file_path: "/apollo/modules/perception/production/conf/perception/perception_common.flag"
      readers {
        channel: "/apollo/sensor/radar/front"
      }
    }
  }

  components {
    class_name: "RadarDetectionComponent"
    config {
      name: "RearRadarDetection"
      config_file_path: "/apollo/modules/perception/production/conf/perception/radar/rear_radar_component_conf.pb.txt"
      flag_file_path: "/apollo/modules/perception/production/conf/perception/perception_common.flag"
      readers {
        channel: "/apollo/sensor/radar/rear"
      }
    }
  }


Comparison of the two parameter files:

radar_name: "radar_front"
tf_child_frame_id: "radar_front"
radar_forward_distance: 200.0
radar_preprocessor_method: "ContiArsPreprocessor"
radar_perception_method: "RadarObstaclePerception"
radar_pipeline_name: "FrontRadarPipeline"
odometry_channel_name: "/apollo/localization/pose"
output_channel_name: "/perception/inner/PrefusedObjects"

radar_name: "radar_rear"
tf_child_frame_id: "radar_rear"
radar_forward_distance: 120.0
radar_preprocessor_method: "ContiArsPreprocessor"
radar_perception_method: "RadarObstaclePerception"
radar_pipeline_name: "RearRadarPipeline"
odometry_channel_name: "/apollo/localization/pose"
output_channel_name: "/perception/inner/PrefusedObjects"

The main difference is the detection range: 200 m for the front radar versus 120 m for the rear radar. The processing pipelines are also split into a front and a rear variant, but inspecting their configuration files shows they are identical.

Let's start with the radar detection component code, in modules/perception/onboard/component/radar_detection_component.cc

Initialization

RadarDetectionComponent::Init()

bool RadarDetectionComponent::Init() {
  // Read the config file; its proto definition is modules/perception/onboard/proto/radar_component_config.proto
  RadarComponentConfig comp_config;
  if (!GetProtoConfig(&comp_config)) {
    return false;
  }
  AINFO << "Radar Component Configs: " << comp_config.DebugString();

  // To load component configs
  tf_child_frame_id_ = comp_config.tf_child_frame_id();
  radar_forward_distance_ = comp_config.radar_forward_distance();
  preprocessor_method_ = comp_config.radar_preprocessor_method();
  perception_method_ = comp_config.radar_perception_method();
  pipeline_name_ = comp_config.radar_pipeline_name();
  odometry_channel_name_ = comp_config.odometry_channel_name();

  // common::SensorManager::Instance() returns the singleton SensorManager.
  // Its constructor calls Init(), which loads the metadata of all sensors from
  // modules/perception/production/data/perception/common/sensor_meta.pt and stores
  // each sensor name together with its SensorInfo in the sensor_info_map_ map.

  // Look up SensorInfo by sensor name
  if (!common::SensorManager::Instance()->GetSensorInfo(
          comp_config.radar_name(), &radar_info_)) {
    AERROR << "Failed to get sensor info, sensor name: "
           << comp_config.radar_name();
    return false;
  }

  // Create the writer that publishes the output SensorFrameMessage
  writer_ = node_->CreateWriter<SensorFrameMessage>(
      comp_config.output_channel_name());

  // Init algorithm plugin
  ACHECK(InitAlgorithmPlugin()) << "Failed to init algorithm plugin.";
  radar2world_trans_.Init(tf_child_frame_id_);
  radar2novatel_trans_.Init(tf_child_frame_id_);
  localization_subscriber_.Init(
      odometry_channel_name_,
      odometry_channel_name_ + '_' + comp_config.radar_name());
  return true;
}

Before the processing logic (Proc) of each module class runs, the functional classes are initialized: RadarDetectionComponent::Init is called first, and it in turn calls the Init methods of the individual functional classes (HDMapInput, the ContiArsPreprocessor preprocessor, and the RadarObstaclePerception perception class).

// Initialize the parameters of each sub-module class
bool RadarDetectionComponent::InitAlgorithmPlugin() {
  AINFO << "onboard radar_preprocessor: " << preprocessor_method_;
  if (FLAGS_obs_enable_hdmap_input) {
    hdmap_input_ = map::HDMapInput::Instance();
    ACHECK(hdmap_input_->Init()) << "Failed to init hdmap input.";
  }

  // Registered through apollo/modules/perception/lib/registerer/registerer.h
  // A base-class BasePreprocessor pointer points to a ContiArsPreprocessor (subclass) object
  radar::BasePreprocessor* preprocessor =
      radar::BasePreprocessorRegisterer::GetInstanceByName(preprocessor_method_);
  CHECK_NOTNULL(preprocessor);
  radar_preprocessor_.reset(preprocessor);
  ACHECK(radar_preprocessor_->Init()) << "Failed to init radar preprocessor.";
  // A base-class BaseRadarObstaclePerception pointer points to a RadarObstaclePerception (subclass) object
  radar::BaseRadarObstaclePerception* radar_perception =
      radar::BaseRadarObstaclePerceptionRegisterer::GetInstanceByName(
          perception_method_);
  ACHECK(radar_perception != nullptr)
      << "No radar obstacle perception named: " << perception_method_;
  radar_perception_.reset(radar_perception);
  ACHECK(radar_perception_->Init(pipeline_name_))
      << "Failed to init radar perception.";
  AINFO << "Init algorithm plugin successfully.";
  return true;
}

Fetching the sensor info radar_info_; the corresponding code:

  // Look up SensorInfo by sensor name
  if (!common::SensorManager::Instance()->GetSensorInfo(
          comp_config.radar_name(), &radar_info_)) {
    AERROR << "Failed to get sensor info, sensor name: "
           << comp_config.radar_name();
    return false;
  }

The SensorManager class lives in apollo/modules/perception/common/sensor_manager/sensor_manager.cc

common::SensorManager::Instance() returns the singleton SensorManager instance. Its constructor calls Init(), which initializes the sensor metadata: it reads modules/perception/production/data/perception/common/sensor_meta.pt, containing the metadata of all sensors, and stores each sensor name together with its SensorInfo in the sensor_info_map_ map.

modules/perception/common/sensor_manager/sensor_manager.cc

// glog's CHECK() macro checks for program errors: when the condition fails,
// it logs a FATAL message and terminates the program
SensorManager::SensorManager() { CHECK_EQ(this->Init(), true); }

bool SensorManager::Init() {
  std::lock_guard<std::mutex> lock(mutex_);
  if (inited_) {
    return true;
  }

  sensor_info_map_.clear();
  distort_model_map_.clear();
  undistort_model_map_.clear();
  // Sensor metadata (name, model, mounting orientation) file:
  // apollo/modules/perception/production/data/perception/common/sensor_meta.pt
  // Its path comes from the gflags flag FLAGS_obs_sensor_meta_path
  const std::string file_path = cyber::common::GetAbsolutePath(
      lib::ConfigManager::Instance()->work_root(), FLAGS_obs_sensor_meta_path);

  MultiSensorMeta sensor_list_proto;
  // Parse the file contents into sensor_list_proto
  if (!GetProtoFromASCIIFile(file_path, &sensor_list_proto)) {
    AERROR << "Invalid MultiSensorMeta file: " << FLAGS_obs_sensor_meta_path;
    return false;
  }

  auto AddSensorInfo = [this](const SensorMeta& sensor_meta_proto) {
    SensorInfo sensor_info;
    sensor_info.name = sensor_meta_proto.name();
    sensor_info.type = static_cast<SensorType>(sensor_meta_proto.type());
    sensor_info.orientation =
        static_cast<SensorOrientation>(sensor_meta_proto.orientation());
    sensor_info.frame_id = sensor_meta_proto.name();
    // sensor_info_map_ maps sensor name -> SensorInfo
    // SensorInfo is a struct defined in apollo/modules/perception/base/sensor_meta.h
    auto pair = sensor_info_map_.insert(
        make_pair(sensor_meta_proto.name(), sensor_info));
    if (!pair.second) {
      AERROR << "Duplicate sensor name error.";
      return false;
    }
    // (loading of the camera distortion/undistortion models is omitted here)
    return true;
  };

  for (const SensorMeta& sensor_meta_proto : sensor_list_proto.sensor_meta()) {
    if (!AddSensorInfo(sensor_meta_proto)) {
      AERROR << "Failed to add sensor_info: " << sensor_meta_proto.name();
      return false;
    }
  }

  inited_ = true;
  AINFO << "Init sensor_manager success.";
  return true;
}

// Look up SensorInfo by sensor name
bool SensorManager::GetSensorInfo(const std::string& name,
                                  SensorInfo* sensor_info) const {
  if (sensor_info == nullptr) {
    AERROR << "Nullptr error.";
    return false;
  }

  const auto& itr = sensor_info_map_.find(name);
  if (itr == sensor_info_map_.end()) {
    return false;
  }

  *sensor_info = itr->second;
  return true;
}

The SensorInfo type is defined in modules/perception/base/sensor_meta.h

struct SensorInfo {
  std::string name = "UNKNONW_SENSOR";
  SensorType type = SensorType::UNKNOWN_SENSOR_TYPE;
  SensorOrientation orientation = SensorOrientation::FRONT;
  std::string frame_id = "UNKNOWN_FRAME_ID";
  void Reset() {
    name = "UNKNONW_SENSOR";
    type = SensorType::UNKNOWN_SENSOR_TYPE;
    orientation = SensorOrientation::FRONT;
    frame_id = "UNKNOWN_FRAME_ID";
  }
};

Main Processing Logic

With initialization covered, let's look at the component's RadarDetectionComponent::Proc function, which calls InternalProc() to run the processing logic of each sub-module class.

bool RadarDetectionComponent::Proc(const std::shared_ptr<ContiRadar>& message) {
  AINFO << "Enter radar preprocess, message timestamp: "
        << message->header().timestamp_sec() << " current timestamp "
        << Clock::NowInSeconds();
  auto out_message = std::make_shared<SensorFrameMessage>();

  if (!InternalProc(message, out_message)) {
    return false;
  }
  writer_->Write(out_message);
  AINFO << "Send radar processing output message.";
  return true;
}

Input: raw radar data from ContiRadar (the Continental radar driver)

modules/common_msgs/sensor_msgs/conti_radar.proto

message ContiRadarObs {
  //                x axis  ^
  //                        | longitude_dist
  //                        |
  //                        |
  //                        |
  //          lateral_dist  |
  //          y axis        |
  //        <----------------
  //        ooooooooooooo   //radar front surface

  optional apollo.common.Header header = 1;
  optional bool clusterortrack = 2;  // 0 = track, 1 = cluster
  optional int32 obstacle_id = 3;    // obstacle Id
  // longitude distance to the radar; (+) = forward; unit = m
  optional double longitude_dist = 4;
  // lateral distance to the radar; (+) = left; unit = m
  optional double lateral_dist = 5;
  // longitude velocity to the radar; (+) = forward; unit = m/s
  optional double longitude_vel = 6;
  // lateral velocity to the radar; (+) = left; unit = m/s
  optional double lateral_vel = 7;
  // obstacle Radar Cross-Section; unit = dBsm
  optional double rcs = 8;
  // 0 = moving, 1 = stationary, 2 = oncoming, 3 = stationary candidate
  // 4 = unknown, 5 = crossing stationary, 6 = crossing moving, 7 = stopped
  optional int32 dynprop = 9;
  // longitude distance standard deviation to the radar; (+) = forward; unit = m
  optional double longitude_dist_rms = 10;
  // lateral distance standard deviation to the radar; (+) = left; unit = m
  optional double lateral_dist_rms = 11;
  // longitude velocity standard deviation to the radar; (+) = forward; unit =
  // m/s
  optional double longitude_vel_rms = 12;
  // lateral velocity standard deviation to the radar; (+) = left; unit = m/s
  optional double lateral_vel_rms = 13;
  // obstacle probability of existence
  optional double probexist = 14;

  // The following is only valid for the track object message
  // 0 = deleted, 1 = new, 2 = measured, 3 = predicted, 4 = deleted for, 5 = new
  // from merge
  optional int32 meas_state = 15;
  // longitude acceleration to the radar; (+) = forward; unit = m/s2
  optional double longitude_accel = 16;
  // lateral acceleration to the radar; (+) = left; unit = m/s2
  optional double lateral_accel = 17;
  // orientation angle to the radar; (+) = counterclockwise; unit = degree
  optional double oritation_angle = 18;
  // longitude acceleration standard deviation to the radar; (+) = forward; unit
  // = m/s2
  optional double longitude_accel_rms = 19;
  // lateral acceleration standard deviation to the radar; (+) = left; unit =
  // m/s2
  optional double lateral_accel_rms = 20;
  // orientation angle standard deviation to the radar; (+) = counterclockwise;
  // unit = degree
  optional double oritation_angle_rms = 21;
  optional double length = 22;  // obstacle length; unit = m
  optional double width = 23;   // obstacle width; unit = m
  // 0: point; 1: car; 2: truck; 3: pedestrian; 4: motorcycle; 5: bicycle; 6:
  // wide; 7: unknown
  optional int32 obstacle_class = 24;
}

message ContiRadar {
  optional apollo.common.Header header = 1;
  repeated ContiRadarObs contiobs = 2;  // conti radar obstacle array (raw obstacles output by the Continental radar)
  optional RadarState_201 radar_state = 3;
  optional ClusterListStatus_600 cluster_list_status = 4;
  optional ObjectListStatus_60A object_list_status = 5;
}

Output: SensorFrameMessage, which carries the final radar perception result

modules/perception/onboard/inner_component_messages/inner_component_messages.h

class SensorFrameMessage {
 public:
  SensorFrameMessage() { type_name_ = "SensorFrameMessage"; }
  ~SensorFrameMessage() = default;
  std::string GetTypeName() { return type_name_; }
  SensorFrameMessage* New() const { return new SensorFrameMessage; }

 public:
  apollo::common::ErrorCode error_code_ = apollo::common::ErrorCode::OK;

  std::string sensor_id_;
  double timestamp_ = 0.0;
  uint64_t lidar_timestamp_ = 0;
  uint32_t seq_num_ = 0;
  std::string type_name_;
  base::HdmapStructConstPtr hdmap_;

  base::FramePtr frame_;

  ProcessStage process_stage_ = ProcessStage::UNKNOWN_STAGE;
};

The core flow consists of two main steps: radar data preprocessing (Preprocess) and the subsequent perception flow (Perceive).

bool RadarDetectionComponent::InternalProc(
    const std::shared_ptr<ContiRadar>& in_message,
    std::shared_ptr<SensorFrameMessage> out_message) {
  PERF_FUNCTION_WITH_INDICATOR(radar_info_.name);
  ContiRadar raw_obstacles = *in_message;
  {
    std::unique_lock<std::mutex> lock(_mutex);
    ++seq_num_;
  }
  double timestamp = in_message->header().timestamp_sec();
  const double cur_time = Clock::NowInSeconds();
  const double start_latency = (cur_time - timestamp) * 1e3;
  AINFO << "FRAME_STATISTICS:Radar:Start:msg_time[" << timestamp
        << "]:cur_time[" << cur_time << "]:cur_latency[" << start_latency
        << "]";
  PERF_BLOCK_START();
  // Init preprocessor_options
  radar::PreprocessorOptions preprocessor_options;
  ContiRadar corrected_obstacles;
  // Run the preprocessing step
  radar_preprocessor_->Preprocess(raw_obstacles, preprocessor_options,
                                  &corrected_obstacles);
  PERF_BLOCK_END_WITH_INDICATOR(radar_info_.name, "radar_preprocessor");
  timestamp = corrected_obstacles.header().timestamp_sec();

  out_message->timestamp_ = timestamp;
  out_message->seq_num_ = seq_num_;
  out_message->process_stage_ = ProcessStage::LONG_RANGE_RADAR_DETECTION;
  out_message->sensor_id_ = radar_info_.name;

  // Initialize the perception options: the radar2world transform, the radar2novatel transform, and the ego vehicle's linear and angular velocity
  radar::RadarPerceptionOptions options;
  options.sensor_name = radar_info_.name;
  // Init detector_options
  Eigen::Affine3d radar_trans;
  // radar2world_trans_: the radar-to-world tf
  if (!radar2world_trans_.GetSensor2worldTrans(timestamp, &radar_trans)) {
    out_message->error_code_ = apollo::common::ErrorCode::PERCEPTION_ERROR_TF;
    AERROR << "Failed to get pose at time: " << timestamp;
    return true;
  }
  Eigen::Affine3d radar2novatel_trans;
  // radar2novatel_trans_: the radar-to-ego-vehicle (novatel) tf
  if (!radar2novatel_trans_.GetTrans(timestamp, &radar2novatel_trans, "novatel",tf_child_frame_id_)) {
    out_message->error_code_ = apollo::common::ErrorCode::PERCEPTION_ERROR_TF;
    AERROR << "Failed to get radar2novatel trans at time: " << timestamp;
    return true;
  }

  PERF_BLOCK_END_WITH_INDICATOR(radar_info_.name, "GetSensor2worldTrans");
  // Radar-to-world transformation matrix
  Eigen::Matrix4d radar2world_pose = radar_trans.matrix();
  options.detector_options.radar2world_pose = &radar2world_pose;
  // Radar-to-ego-vehicle transformation matrix
  Eigen::Matrix4d radar2novatel_trans_m = radar2novatel_trans.matrix();
  options.detector_options.radar2novatel_trans = &radar2novatel_trans_m;
  // Get the ego vehicle's linear and angular speed
  if (!GetCarLocalizationSpeed(timestamp,
                               &(options.detector_options.car_linear_speed),
                               &(options.detector_options.car_angular_speed))) {
    AERROR << "Failed to call get_car_speed. [timestamp: " << timestamp;
    // return false;
  }
  PERF_BLOCK_END_WITH_INDICATOR(radar_info_.name, "GetCarSpeed");
  // Init roi_filter_options
  // Translation part of the radar2world transform (the radar's position in the world frame)
  base::PointD position;
  position.x = radar_trans(0, 3);
  position.y = radar_trans(1, 3);
  position.z = radar_trans(2, 3);
  options.roi_filter_options.roi.reset(new base::HdmapStruct());
  if (FLAGS_obs_enable_hdmap_input) {
    hdmap_input_->GetRoiHDMapStruct(position, radar_forward_distance_,
                                    options.roi_filter_options.roi);
  }
  PERF_BLOCK_END_WITH_INDICATOR(radar_info_.name, "GetRoiHDMapStruct");
  // Init object_filter_options
  // Init track_options
  // Init object_builder_options
  std::vector<base::ObjectPtr> radar_objects;
  // Main perception flow (Detect -> ROI Filter -> Track)
  if (!radar_perception_->Perceive(corrected_obstacles, options,
                                   &radar_objects)) {
    out_message->error_code_ =
        apollo::common::ErrorCode::PERCEPTION_ERROR_PROCESS;
    AERROR << "RadarDetector Proc failed.";
    return true;
  }
  out_message->frame_.reset(new base::Frame());
  out_message->frame_->sensor_info = radar_info_;
  out_message->frame_->timestamp = timestamp;
  out_message->frame_->sensor2world_pose = radar_trans;
  out_message->frame_->objects = radar_objects;

  const double end_timestamp = Clock::NowInSeconds();
  const double end_latency =
      (end_timestamp - in_message->header().timestamp_sec()) * 1e3;
  AINFO << "FRAME_STATISTICS:Radar:End:msg_time["
        << in_message->header().timestamp_sec() << "]:cur_time["
        << end_timestamp << "]:cur_latency[" << end_latency << "]";
  PERF_BLOCK_END_WITH_INDICATOR(radar_info_.name, "radar_perception");

  return true;
}

RadarPerceptionOptions is defined in modules/perception/radar/lib/interface/base_radar_obstacle_perception.h:

struct RadarPerceptionOptions {
  DetectorOptions detector_options;
  RoiFilterOptions roi_filter_options;
  TrackerOptions track_options;
  std::string sensor_name;
};

Preprocessing

modules/perception/radar/lib/preprocessor/conti_ars_preprocessor/conti_ars_preprocessor.cc

ContiArsPreprocessor::Preprocess() performs three main preprocessing steps:

  • skip objects whose timestamps fall outside the valid time window
  • reassign object IDs
  • correct the timestamp

This preprocessing depends on the characteristics of the sensor data; different sensors use different preprocessing methods. A simplified sketch of the flow is shown below.
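
For reference, the structure of Preprocess() roughly mirrors the three steps above. The sketch below is simplified; the helper names (SkipObjects, ExpandIds, CorrectTime) follow the Apollo source, but the exact bodies are in conti_ars_preprocessor.cc.

bool ContiArsPreprocessor::Preprocess(
    const drivers::ContiRadar& raw_obstacles,
    const PreprocessorOptions& options,
    drivers::ContiRadar* corrected_obstacles) {
  // 1. Drop objects whose timestamps lie outside the allowed time window
  SkipObjects(raw_obstacles, corrected_obstacles);
  // 2. Reassign obstacle IDs so they stay unique when the sensor reuses IDs
  ExpandIds(corrected_obstacles);
  // 3. Correct the header timestamp
  CorrectTime(corrected_obstacles);
  return true;
}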

Radar Perception

Perceive is then called to run the main radar perception logic:

  std::vector<base::ObjectPtr> radar_objects;
  // Main perception flow
  if (!radar_perception_->Perceive(corrected_obstacles, options,
                                   &radar_objects)) {
    out_message->error_code_ =
        apollo::common::ErrorCode::PERCEPTION_ERROR_PROCESS;
    AERROR << "RadarDetector Proc failed.";
    return true;
  }

Detection
modules/perception/radar/app/radar_obstacle_perception.cc

RadarObstaclePerception::Perceive() is the core processing function; it consists of three steps: detection, ROI filtering, and tracking.

This takes us into the core processing logic of radar perception, which contains the three stages Detect -> ROI Filter -> Track. A simplified sketch of how Perceive chains them together is shown below.
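
The sketch is condensed from radar_obstacle_perception.cc; error logging and perf instrumentation are omitted, so treat it as an outline rather than the verbatim source.

bool RadarObstaclePerception::Perceive(
    const drivers::ContiRadar& corrected_obstacles,
    const RadarPerceptionOptions& options,
    std::vector<base::ObjectPtr>* objects) {
  // Detect: convert the Conti obstacles into an Apollo base::Frame
  base::FramePtr detect_frame_ptr(new base::Frame());
  if (!detector_->Detect(corrected_obstacles, options.detector_options,
                         detect_frame_ptr)) {
    return false;
  }
  // ROI filter: keep only the objects inside the HD-map ROI
  if (!roi_filter_->RoiFilter(options.roi_filter_options, detect_frame_ptr)) {
    ADEBUG << "All radar objects were filtered out";
  }
  // Track: associate the detections with existing tracks and update them
  base::FramePtr tracker_frame_ptr(new base::Frame());
  if (!tracker_->Track(*detect_frame_ptr, options.track_options,
                       tracker_frame_ptr)) {
    return false;
  }
  *objects = tracker_frame_ptr->objects;
  return true;
}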

First, detection: modules/perception/radar/lib/detector/conti_ars_detector/conti_ars_detector.cc

bool ContiArsDetector::Detect(const drivers::ContiRadar& corrected_obstacles,
                              const DetectorOptions& options,
                              base::FramePtr radar_frame) {
  RawObs2Frame(corrected_obstacles, options, radar_frame);
  radar_frame->timestamp = corrected_obstacles.header().timestamp_sec();
  radar_frame->sensor2world_pose = *(options.radar2world_pose);
  return true;
}

std::string ContiArsDetector::Name() const { return "ContiArsDetector"; }

void ContiArsDetector::RawObs2Frame(
    const drivers::ContiRadar& corrected_obstacles,
    const DetectorOptions& options, base::FramePtr radar_frame) {
  // radar2world transformation matrix
  const Eigen::Matrix4d& radar2world = *(options.radar2world_pose);
  // radar-to-ego-vehicle (novatel) transformation matrix
  const Eigen::Matrix4d& radar2novatel = *(options.radar2novatel_trans);
  // Ego angular velocity about the x/y/z axes; in practice essentially only the yaw rate while turning is non-zero
  const Eigen::Vector3f& angular_speed = options.car_angular_speed;
  Eigen::Matrix3d rotation_novatel;
  rotation_novatel << 0, -angular_speed(2), angular_speed(1), angular_speed(2),
      0, -angular_speed(0), -angular_speed(1), angular_speed(0), 0;
  // Compensate for the velocity induced by the ego vehicle's rotation while turning
  Eigen::Matrix3d rotation_radar = radar2novatel.topLeftCorner(3, 3).inverse() *
                                   rotation_novatel *
                                   radar2novatel.topLeftCorner(3, 3);
  Eigen::Matrix3d radar2world_rotate = radar2world.block<3, 3>(0, 0);
  Eigen::Matrix3d radar2world_rotate_t = radar2world_rotate.transpose();
  // Eigen::Vector3d radar2world_translation = radar2world.block<3, 1>(0, 3);
  ADEBUG << "radar2novatel: " << radar2novatel;
  ADEBUG << "angular_speed: " << angular_speed;
  ADEBUG << "rotation_radar: " << rotation_radar;
  for (const auto radar_obs : corrected_obstacles.contiobs()) {
    base::ObjectPtr radar_object(new base::Object);
    radar_object->id = radar_obs.obstacle_id();
    radar_object->track_id = radar_obs.obstacle_id();
    // Position in the radar (local) frame
    Eigen::Vector4d local_loc(radar_obs.longitude_dist(),
                              radar_obs.lateral_dist(), 0, 1);
    // Position in the world frame
    Eigen::Vector4d world_loc =
        static_cast<Eigen::Matrix<double, 4, 1, 0, 4, 1>>(radar2world *
                                                        local_loc);
    // x, y in the world frame; z = 0
    radar_object->center = world_loc.block<3, 1>(0, 0);
    radar_object->anchor_point = radar_object->center;
    // Relative velocity measured by the radar
    Eigen::Vector3d local_vel(radar_obs.longitude_vel(),
                              radar_obs.lateral_vel(), 0);
    // Compensate for the ego vehicle's rotation: the velocity induced by the ego angular speed is added to the x/y components of the radar's relative velocity
    Eigen::Vector3d angular_trans_speed =
        rotation_radar * local_loc.topLeftCorner(3, 1);
    Eigen::Vector3d world_vel =
        static_cast<Eigen::Matrix<double, 3, 1, 0, 3, 1>>(
            radar2world_rotate * (local_vel + angular_trans_speed));
    // Absolute velocity: add the ego linear velocity
    Eigen::Vector3d vel_temp =
        world_vel + options.car_linear_speed.cast<double>();
    radar_object->velocity = vel_temp.cast<float>();

    Eigen::Matrix3d dist_rms;
    dist_rms.setZero();
    Eigen::Matrix3d vel_rms;
    vel_rms.setZero();
    // rms: root mean square, used here as the standard deviation
    dist_rms(0, 0) = radar_obs.longitude_dist_rms();
    dist_rms(1, 1) = radar_obs.lateral_dist_rms();
    vel_rms(0, 0) = radar_obs.longitude_vel_rms();
    vel_rms(1, 1) = radar_obs.lateral_vel_rms();
    // Position uncertainty
    radar_object->center_uncertainty =
        (radar2world_rotate * dist_rms * dist_rms.transpose() *
         radar2world_rotate_t)
            .cast<float>();
    // Velocity uncertainty
    radar_object->velocity_uncertainty =
        (radar2world_rotate * vel_rms * vel_rms.transpose() *
         radar2world_rotate_t)
            .cast<float>();
    double local_obj_theta = radar_obs.oritation_angle() / 180.0 * PI;
    Eigen::Vector3f direction(static_cast<float>(cos(local_obj_theta)),
                              static_cast<float>(sin(local_obj_theta)), 0.0f);
    // Direction and heading angle
    direction = radar2world_rotate.cast<float>() * direction;
    radar_object->direction = direction;
    radar_object->theta = std::atan2(direction(1), direction(0));
    radar_object->theta_variance =
        static_cast<float>(radar_obs.oritation_angle_rms() / 180.0 * PI);
    radar_object->confidence = static_cast<float>(radar_obs.probexist());
    // Motion state: moving, stationary, or unknown
    // Target existence probability (confidence)
    int motion_state = radar_obs.dynprop();
    double prob_target = radar_obs.probexist();
    if ((prob_target > MIN_PROBEXIST) &&
        (motion_state == CONTI_MOVING || motion_state == CONTI_ONCOMING ||
         motion_state == CONTI_CROSSING_MOVING)) {
      radar_object->motion_state = base::MotionState::MOVING;
    } else if (motion_state == CONTI_DYNAMIC_UNKNOWN) {
      radar_object->motion_state = base::MotionState::UNKNOWN;
    } else {
      radar_object->motion_state = base::MotionState::STATIONARY;
      radar_object->velocity.setZero();
    }
    // Object class
    int cls = radar_obs.obstacle_class();
    if (cls == CONTI_CAR || cls == CONTI_TRUCK) {
      radar_object->type = base::ObjectType::VEHICLE;
    } else if (cls == CONTI_PEDESTRIAN) {
      radar_object->type = base::ObjectType::PEDESTRIAN;
    } else if (cls == CONTI_MOTOCYCLE || cls == CONTI_BICYCLE) {
      radar_object->type = base::ObjectType::BICYCLE;
    } else {
      radar_object->type = base::ObjectType::UNKNOWN;
    }
    // Length, width, height
    radar_object->size(0) = static_cast<float>(radar_obs.length());
    radar_object->size(1) = static_cast<float>(radar_obs.width());
    radar_object->size(2) = 2.0f;  // vehicle template (pnc required)
    if (cls == CONTI_POINT) {
      radar_object->size(0) = 1.0f;
      radar_object->size(1) = 1.0f;
    }
    // extreme case protection
    if (radar_object->size(0) * radar_object->size(1) < 1.0e-4) {
      if (cls == CONTI_CAR || cls == CONTI_TRUCK) {
        radar_object->size(0) = 4.0f;
        radar_object->size(1) = 1.6f;  // vehicle template
      } else {
        radar_object->size(0) = 1.0f;
        radar_object->size(1) = 1.0f;
      }
    }
    MockRadarPolygon(radar_object);

    float local_range = static_cast<float>(local_loc.head(2).norm());
    float local_angle =
        static_cast<float>(std::atan2(local_loc(1), local_loc(0)));
    radar_object->radar_supplement.range = local_range;
    radar_object->radar_supplement.angle = local_angle;

    radar_frame->objects.push_back(radar_object);

    ADEBUG << "obs_id: " << radar_obs.obstacle_id() << ", "
           << "long_dist: " << radar_obs.longitude_dist() << ", "
           << "lateral_dist: " << radar_obs.lateral_dist() << ", "
           << "long_vel: " << radar_obs.longitude_vel() << ", "
           << "latera_vel: " << radar_obs.lateral_vel() << ", "
           << "rcs: " << radar_obs.rcs() << ", "
           << "meas_state: " << radar_obs.meas_state();
  }
}

In the detection stage, Apollo directly uses the Continental radar's own detection results and outputs obstacle-level targets. The core work here is post-processing of the Conti radar data, which includes:

  • the radar2world and radar2egovehicle coordinate transforms, which were already computed in the outer function RadarDetectionComponent::InternalProc() and are simply reused here
  • the conversion from relative to absolute velocity; note that this conversion must compensate for the velocity induced by the ego vehicle's rotation, i.e. the x/y components of the radar velocity are corrected by the contribution of the ego angular speed (see the formula below)
  • assignment of attributes such as motion state and object class
  • conversion from the Conti output format into Apollo's common obstacle format
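
Written as a formula (following the code above), the absolute velocity is

$$
v_{abs} = R_{r2w}\left(v_{local} + R_{r2n}^{-1}\,[\omega]_{\times}\,R_{r2n}\,p_{local}\right) + v_{ego}
$$

where R_{r2w} is the radar2world rotation, R_{r2n} the radar2novatel rotation, [ω]× the skew-symmetric matrix of the ego angular velocity (rotation_novatel in the code), p_{local} and v_{local} the obstacle position and relative velocity in the radar frame, and v_{ego} is options.car_linear_speed.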

RoiFilter
modules/perception/radar/lib/roi_filter/hdmap_radar_roi_filter/hdmap_radar_roi_filter.cc

As with the lidar pipeline, the HD map module is used to filter out targets outside the ROI, such as objects on the sidewalk.

bool HdmapRadarRoiFilter::RoiFilter(const RoiFilterOptions& options,
                                    base::FramePtr radar_frame) {
  std::vector<base::ObjectPtr> origin_objects = radar_frame->objects;
  return common::ObjectInRoiCheck(options.roi, origin_objects,
                                  &radar_frame->objects);
}
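
common::ObjectInRoiCheck comes from the perception common library. Conceptually it keeps the objects whose positions fall inside the ROI polygons (roads and junctions) provided by the HD map; the sketch below only illustrates that idea and is not the actual Apollo helper (IsPointInRoi is a hypothetical stand-in for the real point-in-polygon checks).

// Conceptual sketch, not the Apollo implementation: keep objects inside the ROI.
bool ObjectInRoiCheckSketch(const base::HdmapStructConstPtr& roi,
                            const std::vector<base::ObjectPtr>& objects,
                            std::vector<base::ObjectPtr>* valid_objects) {
  valid_objects->clear();
  for (const auto& obj : objects) {
    // Hypothetical helper: test the object's center against the ROI polygons
    if (IsPointInRoi(roi, obj->center)) {
      valid_objects->push_back(obj);
    }
  }
  return !valid_objects->empty();
}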

Track
modules/perception/radar/lib/tracker/conti_ars_tracker/conti_ars_tracker.cc

Tracking follows the classic association, matching, and update pattern. The association cost uses only a distance computation, matching combines ID matching with Hungarian matching, and the update writes the observation directly into the track without any filtering.

bool ContiArsTracker::Track(const base::Frame &detected_frame,
                            const TrackerOptions &options,
                            base::FramePtr tracked_frame) {
  TrackObjects(detected_frame);
  CollectTrackedFrame(tracked_frame);
  return true;
}

ContiArsTracker::TrackObjects()

void ContiArsTracker::TrackObjects(const base::Frame &radar_frame) {
  std::vector<TrackObjectPair> assignments;
  std::vector<size_t> unassigned_tracks;
  std::vector<size_t> unassigned_objects;
  TrackObjectMatcherOptions matcher_options;
  const auto &radar_tracks = track_manager_->GetTracks();
  // Association and matching
  matcher_->Match(radar_tracks, radar_frame, matcher_options, &assignments,
                  &unassigned_tracks, &unassigned_objects);
  // Four update steps: update assigned tracks, update unassigned tracks, delete lost tracks, create new tracks
  UpdateAssignedTracks(radar_frame, assignments);
  // Tracks not updated for more than 0.06 s are marked dead
  UpdateUnassignedTracks(radar_frame, unassigned_tracks);
  // Delete tracks that have been marked dead
  DeleteLostTracks();
  // Create a new track for each unassigned object
  CreateNewTracks(radar_frame, unassigned_objects);
}

matcher_->Match
Suppose the first frame provides 5 objects and 5 new tracks were created for them in the previous step. The first frame is then fully processed, and when the second frame arrives the matching process begins. Matching is done in two steps:

modules/perception/radar/lib/tracker/matcher/hm_matcher.cc

bool HMMatcher::Match(const std::vector<RadarTrackPtr> &radar_tracks,
                      const base::Frame &radar_frame,
                      const TrackObjectMatcherOptions &options,
                      std::vector<TrackObjectPair> *assignments,
                      std::vector<size_t> *unassigned_tracks,
                      std::vector<size_t> *unassigned_objects) {
  IDMatch(radar_tracks, radar_frame, assignments, unassigned_tracks,
          unassigned_objects);
  TrackObjectPropertyMatch(radar_tracks, radar_frame, assignments,
                           unassigned_tracks, unassigned_objects);
  return true;
}

Step 1: IDMatch. It checks whether the object id stored in a track equals the object id in the current sensor frame; if so, the object is matched to that track and the pair is pushed into assignments (the matched tracks). This step essentially relies on the Continental radar's own object tracking to provide the ID match.

IDMatch
modules/perception/radar/lib/interface/base_matcher.cc

// Check whether the object id stored in a track matches the object id in the current
// frame, and whether the L2 norm of their (x, y) offset is below a threshold; if so,
// the pair is matched and pushed into assignments
// Unmatched detections are collected in unassigned_objects
// Unmatched tracks are collected in unassigned_tracks
void BaseMatcher::IDMatch(const std::vector<RadarTrackPtr> &radar_tracks, // tracked targets
                          const base::Frame &radar_frame, // measurements (current frame)
                          std::vector<TrackObjectPair> *assignments,
                          std::vector<size_t> *unassigned_tracks,
                          std::vector<size_t> *unassigned_objects) {
  size_t num_track = radar_tracks.size();
  const auto &objects = radar_frame.objects;
  double object_timestamp = radar_frame.timestamp;
  size_t num_obj = objects.size();
  if (num_track == 0 || num_obj == 0) {
    unassigned_tracks->resize(num_track);
    unassigned_objects->resize(num_obj);
    // std::iota fills the range with sequentially increasing values, starting from its third argument
    std::iota(unassigned_tracks->begin(), unassigned_tracks->end(), 0);
    std::iota(unassigned_objects->begin(), unassigned_objects->end(), 0);
    return;
  }
  std::vector<bool> track_used(num_track, false);
  std::vector<bool> object_used(num_obj, false);
  for (size_t i = 0; i < num_track; ++i) {
    // Object stored in this radar track
    const auto &track_object = radar_tracks[i]->GetObsRadar();
    // Timestamp of the object stored in this radar track
    double track_timestamp = radar_tracks[i]->GetTimestamp();
    if (track_object.get() == nullptr) {
      AERROR << "track_object is not available";
      continue;
    }
    // track id of the object stored in this track
    int track_object_track_id = track_object->track_id;
    for (size_t j = 0; j < num_obj; ++j) {
      int object_track_id = objects[j]->track_id;
      // RefinedTrack: distance gate between track_object and the observed object (see the sketch below)
      if (track_object_track_id == object_track_id && RefinedTrack(track_object, track_timestamp, objects[j],object_timestamp)) {
        assignments->push_back(std::pair<size_t, size_t>(i, j));
        track_used[i] = true;
        object_used[j] = true;
      }
    }
  }
  for (size_t i = 0; i < track_used.size(); ++i) {
    if (!track_used[i]) {
      unassigned_tracks->push_back(i);
    }
  }
  for (size_t i = 0; i < object_used.size(); ++i) {
    if (!object_used[i]) {
      unassigned_objects->push_back(i);
    }
  }
}
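
The RefinedTrack gate is overridden by HMMatcher. Roughly, it averages the forward and backward distances between the track object and the observation and accepts the ID match only if the result is below the maximum match distance; this is a sketch, see hm_matcher.cc for the exact code.

bool HMMatcher::RefinedTrack(const base::ObjectPtr &track_object,
                             double track_timestamp,
                             const base::ObjectPtr &radar_object,
                             double radar_timestamp) {
  double dist = 0.5 * DistanceBetweenObs(track_object, track_timestamp,
                                         radar_object, radar_timestamp) +
                0.5 * DistanceBetweenObs(radar_object, radar_timestamp,
                                         track_object, track_timestamp);
  // Accept the ID match only if the averaged distance is within the gate
  return dist < BaseMatcher::GetMaxMatchDistance();
}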

Step 2: TrackObjectPropertyMatch. Its purpose is to supplement the radar's own ID matching with additional matches derived from properties of the radar data. Concretely, a 2-D matrix of association costs is computed between the remaining radar objects and tracks. The cost is the distance between them, which is intuitive: the observation closest to the object stored in a track is assigned to that track.

TrackObjectPropertyMatch
modules/perception/radar/lib/tracker/matcher/hm_matcher.cc

void HMMatcher::TrackObjectPropertyMatch(
    const std::vector<RadarTrackPtr> &radar_tracks,
    const base::Frame &radar_frame, std::vector<TrackObjectPair> *assignments,
    std::vector<size_t> *unassigned_tracks,
    std::vector<size_t> *unassigned_objects) {
  if (unassigned_tracks->empty() || unassigned_objects->empty()) {
    return;
  }
  std::vector<std::vector<double>> association_mat(unassigned_tracks->size());
  for (size_t i = 0; i < association_mat.size(); ++i) {
    association_mat[i].resize(unassigned_objects->size(), 0);
  }
  ComputeAssociationMat(radar_tracks, radar_frame, *unassigned_tracks,
                        *unassigned_objects, &association_mat);

  // from perception-common
  common::SecureMat<double> *global_costs =
      hungarian_matcher_.mutable_global_costs();
  global_costs->Resize(unassigned_tracks->size(), unassigned_objects->size());
  for (size_t i = 0; i < unassigned_tracks->size(); ++i) {
    for (size_t j = 0; j < unassigned_objects->size(); ++j) {
      (*global_costs)(i, j) = association_mat[i][j];
    }
  }
  std::vector<TrackObjectPair> property_assignments;
  std::vector<size_t> property_unassigned_tracks;
  std::vector<size_t> property_unassigned_objects;
  hungarian_matcher_.Match(
      BaseMatcher::GetMaxMatchDistance(), BaseMatcher::GetBoundMatchDistance(),
      common::GatedHungarianMatcher<double>::OptimizeFlag::OPTMIN,
      &property_assignments, &property_unassigned_tracks,
      &property_unassigned_objects);

  for (size_t i = 0; i < property_assignments.size(); ++i) {
    size_t gt_idx = unassigned_tracks->at(property_assignments[i].first);
    size_t go_idx = unassigned_objects->at(property_assignments[i].second);
    assignments->push_back(std::pair<size_t, size_t>(gt_idx, go_idx));
  }
  std::vector<size_t> temp_unassigned_tracks;
  std::vector<size_t> temp_unassigned_objects;
  for (size_t i = 0; i < property_unassigned_tracks.size(); ++i) {
    size_t gt_idx = unassigned_tracks->at(property_unassigned_tracks[i]);
    temp_unassigned_tracks.push_back(gt_idx);
  }
  for (size_t i = 0; i < property_unassigned_objects.size(); ++i) {
    size_t go_idx = unassigned_objects->at(property_unassigned_objects[i]);
    temp_unassigned_objects.push_back(go_idx);
  }
  *unassigned_tracks = temp_unassigned_tracks;
  *unassigned_objects = temp_unassigned_objects;
}

The association distance is computed twice and then averaged: once predicting the track object forward to the current frame with its velocity, and once predicting the current observation back to the track's timestamp with its velocity.

double distance_forward = DistanceBetweenObs(
    track_object, track_timestamp, frame_object, frame_timestamp);
double distance_backward = DistanceBetweenObs(
    frame_object, frame_timestamp, track_object, track_timestamp);
association_mat->at(i).at(j) =
    0.5 * distance_forward + 0.5 * distance_backward;
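
DistanceBetweenObs roughly predicts the first object to the second object's timestamp using the first object's velocity, then returns the 2-D Euclidean distance between that predicted position and the second object's center (a sketch, see hm_matcher.cc):

double HMMatcher::DistanceBetweenObs(const base::ObjectPtr &obs1,
                                     double timestamp1,
                                     const base::ObjectPtr &obs2,
                                     double timestamp2) {
  double time_diff = timestamp2 - timestamp1;
  // Predict obs1 forward by time_diff with its velocity, then measure the
  // planar (x, y) distance to obs2
  return (obs2->center - obs1->center -
          obs1->velocity.cast<double>() * time_diff)
      .head(2)
      .norm();
}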

UpdateAssignedTracks && UpdateUnassignedTracks
Updating the tracks here means that, once the previous step has assigned the latest frame's observations to the tracks they belong to, this step updates the tracks' properties accordingly.

Reading further into the code, no Kalman filter is used; the tracks are updated directly with the radar observations.

For each matched pair, the observation is used to update the track, mainly the object center, the object velocity, and the corresponding uncertainties. A sketch of the two update functions is shown below.
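
The sketch below is simplified from conti_ars_tracker.cc; the exact method names on RadarTrack and RadarTrackManager may differ slightly, so treat it as an outline of the logic rather than the verbatim source.

// Assigned tracks: overwrite the stored observation with the matched detection.
void ContiArsTracker::UpdateAssignedTracks(
    const base::Frame &radar_frame,
    const std::vector<TrackObjectPair> &assignments) {
  auto &radar_tracks = track_manager_->mutable_tracks();
  for (const auto &pair : assignments) {
    // Copy the detection's center, velocity and uncertainties into the track;
    // no Kalman filter is involved
    radar_tracks[pair.first]->UpdateObsRadar(radar_frame.objects[pair.second],
                                             radar_frame.timestamp);
  }
}

// Unassigned tracks: mark them dead once they exceed the tracking time window.
void ContiArsTracker::UpdateUnassignedTracks(
    const base::Frame &radar_frame,
    const std::vector<size_t> &unassigned_tracks) {
  auto &radar_tracks = track_manager_->mutable_tracks();
  for (size_t idx : unassigned_tracks) {
    // s_tracking_time_win_ is 0.06 s by default
    if (radar_frame.timestamp - radar_tracks[idx]->GetTimestamp() >
        s_tracking_time_win_) {
      radar_tracks[idx]->SetDead();
    }
  }
}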

DeleteLostTracks()
Deletes unused tracks. The logic is simple: check every track and remove the ones marked as dead.

modules/perception/radar/lib/tracker/common/radar_track_manager.cc

// void ContiArsTracker::DeleteLostTracks() { track_manager_->RemoveLostTracks(); }
int RadarTrackManager::RemoveLostTracks() {
  size_t track_count = 0;
  for (size_t i = 0; i < tracks_.size(); ++i) {
    if (!tracks_[i]->IsDead()) {
      if (i != track_count) {
        tracks_[track_count] = tracks_[i];
      }
      ++track_count;
    }
  }
  int removed_count = static_cast<int>(tracks_.size() - track_count);
  ADEBUG << "Remove " << removed_count << " tracks";
  tracks_.resize(track_count);
  return static_cast<int>(track_count);
}

CreateNewTracks
Since the Perceive stage receives radar data frame by frame, what we are handling here is also a single radar frame (after several layers of re-packaging).

When the first radar frame arrives there are no tracks yet, so the first three steps effectively do nothing and we go straight into the function that creates new tracks. Its logic is to create a new track for every object that has not been assigned to an existing track. The track_manager_'s AddTrack method handles converting the object information into track information. A sketch is shown below.
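
A simplified sketch of CreateNewTracks (condensed from conti_ars_tracker.cc): each unassigned detection is wrapped in a new RadarTrack and handed to the track manager.

void ContiArsTracker::CreateNewTracks(
    const base::Frame &radar_frame,
    const std::vector<size_t> &unassigned_objects) {
  for (size_t idx : unassigned_objects) {
    // The RadarTrack constructor copies the object state (center, velocity,
    // uncertainties) and records the frame timestamp
    RadarTrackPtr radar_track(
        new RadarTrack(radar_frame.objects[idx], radar_frame.timestamp));
    track_manager_->AddTrack(radar_track);
  }
}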