Continuing from the previous posts: after nearly a month of trial and error, the grasping task is finally complete. This covered the tf coordinate-frame transforms as well as the final position-tuning work.


This post explains how to use the transform between camera_link and base_link to convert first a single point, and then a stream of points, between coordinate frames. Building on the earlier hand-eye calibration, the camera and the robot arm are now connected in a single tf tree:

#rosrun rqt_tf_tree rqt_tf_tree

Note that the two child frames under camera_link, camera_rgb_optical_frame and camera_rgb_frame, are not the same: seen in rviz their positions roughly coincide, but their orientations differ. The object pose detected by the camera (the coke can) is expressed in camera_rgb_optical_frame, so when transforming points with tf you must be clear about what each function argument means and which frame each name refers to; otherwise you will hit many hard-to-trace errors later during position calibration.
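You can check the difference between the two frames directly: tf ships a command-line tool that prints the transform between any two frames. Here it should show a pure rotation, since optical frames follow the ROS convention of z forward, x right, y down:

#rosrun tf tf_echo camera_rgb_frame camera_rgb_optical_frame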


Now for the summary of the tf transform work. The first reference is of course the official ROS tutorials. Admittedly you can learn plenty from other people's CSDN posts, but the official documentation is more thorough by comparison: if some statement or function is unclear, you can quickly understand and adapt it. Below are the core subscriber and publisher tutorials (C++ and Python versions):

http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28c%2B%2B%29 

http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28python%29

plus a few CSDN posts that remain useful as supplementary references:

https://blog.csdn.net/Will_Ye/article/details/79485964

https://blog.csdn.net/start_from_scratch/article/details/50762293/

https://blog.csdn.net/Will_Ye/article/details/79996311

In terms of workflow: first inspect the object position information we want (i.e. how to dig the data out of the message), then transform a single point from camera_link to base_link, and finally transform the object's live position from camera_rgb_optical_frame into the base_link frame.

A quick aside on how to look up any of these functions from Python itself:

#ipython
import tf
help(tf.TransformBroadcaster.sendTransform)

This prints the function's parameters and related documentation.

                   -----------------------------------------------------1---------------------------------------------------------

1. Inspecting the object position information:

As mentioned before, the RecognizedObjectArray message holds the detected objects' pose information: position, orientation, header, and so on. The linemod algorithm fills this array once it recognizes an object and publishes it as a topic, which is why we can see the object's estimated pose in rviz.

Below is listener.py, a simple node that checks whether anything was detected and prints the object's pose:

#!/usr/bin/env python
#coding=utf-8
import rospy
 
# from std_msgs.msg import String  
from object_recognition_msgs.msg import RecognizedObjectArray
 
def callback(data):
    if len(data.objects)>0:
        print "coke detected~~~~~~~~~~~~~~~~~~~~"
        rospy.loginfo(rospy.get_caller_id() + 'heard %s', data.objects[0].pose.pose.pose)
        # rospy.loginfo(rospy.get_caller_id() + 'heard %s', data.objects[0].pose)
    else:
        print "nothing detected................."
 
def listener():
    # In ROS, nodes are uniquely named. If two nodes with the same
    # name are launched, the previous one is kicked off. The
    # anonymous=True flag means that rospy will choose a unique
    # name for our 'listener' node so that multiple listeners can
    # run simultaneously.
 
    rospy.init_node('listener', anonymous=True)
    rospy.Subscriber('recognized_object_array', RecognizedObjectArray, callback)
    # spin() simply keeps python from exiting until this node is stopped
    rospy.spin()
 
if __name__ == '__main__':
    listener()

--------------------------------------- About import -----------------------------------------

`from A import B` is equivalent to `import A` followed by calling `A.B(args)`. Note that although `from ... import` lets you drop the module prefix, the bare name B then refers to whichever module's B was imported last; two same-named functions from different modules with different behavior will clash. So prefer plain `import A` and call functions as `A.B(args)` — see the sketch after this note.

------------------------------------------------------------------------------------------------
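As a concrete illustration of the clash (a minimal sketch; math and numpy just stand in for any two modules that happen to share a function name):

# minimal sketch of the name clash: two modules both provide sqrt
from math import sqrt       # `sqrt` now means math.sqrt
from numpy import sqrt      # `sqrt` now silently means numpy.sqrt

print(sqrt(2.0))            # calls numpy's sqrt -- whichever import ran last wins

# the safer style keeps each function attached to its module name:
import math
import numpy
print(math.sqrt(2.0))       # unambiguous
print(numpy.sqrt(2.0))      # unambiguous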

This program only prints the object's pose with a simple check; let's look at why it is written this way (i.e. how to dig out the data). The core is rospy.Subscriber('recognized_object_array', RecognizedObjectArray, callback), which subscribes to the recognized_object_array topic and invokes callback whenever a message arrives. As for the print statement rospy.loginfo(rospy.get_caller_id() + 'heard %s', data.objects[0].pose.pose.pose): the caller id in front is not essential; the callback first checks whether the message is empty (i.e. whether anything was detected), and data is the formal parameter holding the entire RecognizedObjectArray. Use the following command to inspect the data:

#rosmsg show RecognizedObjectArray

This shows the published message's type and the fields it contains.

Seen this way, data is the RecognizedObjectArray itself, and data.objects[] is its second field, the array of recognized objects; data.objects[0] is the entry for the first detected object. Digging further, each element of data.objects[] contains several fields, such as header, type, confidence, pose, and so on, each with its own message type. Run the listener to keep digging:

#rosrun ur10_rg2_moveit_config listener.py   (prints the recognized object's position)

Above we only dug down to data.objects[].pose. From there, data.objects[0].pose.pose.pose points at exactly the part holding just position and orientation. This is how you drill down to the specific data you need, and it gives an intuitive feel for navigating nested messages.
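For reference, the relevant nesting looks roughly like this (abridged from the rosmsg show output; fields not needed here are omitted), which makes clear why three .pose hops are required:

std_msgs/Header header
object_recognition_msgs/RecognizedObject[] objects
  std_msgs/Header header
  object_recognition_msgs/ObjectType type
    string key
  float32 confidence
  geometry_msgs/PoseWithCovarianceStamped pose
    std_msgs/Header header
    geometry_msgs/PoseWithCovariance pose
      geometry_msgs/Pose pose
        geometry_msgs/Point position
        geometry_msgs/Quaternion orientation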

                   -----------------------------------------------------2---------------------------------------------------------

2. Transforming one point from camera_link to base_link:

First, recall that after the earlier calibration of the two frames their relationship is already being published: the tf_broadcaster node broadcasts the camera_link-to-base_link transform, with the calibration result placed in a tf::Transform.

#rosrun robot_setup_tf tf_broadcaster
#include <ros/ros.h>
#include <tf/transform_broadcaster.h>
 
int main(int argc, char** argv){
  ros::init(argc, argv, "robot_tf_publisher");
  ros::NodeHandle n;
 
  ros::Rate r(100);
 
  tf::TransformBroadcaster broadcaster;
 
  while(n.ok()){
    broadcaster.sendTransform(
      tf::StampedTransform(
        tf::Transform(tf::Quaternion(0.0144077350003, -0.0295559697343, -0.0183787903043, 0.999290289101), tf::Vector3(0.244332058703, -0.0155924006203, -0.0434545849503)),
        ros::Time::now(),"base_link", "camera_link"));
    r.sleep();
  }
}
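For completeness, here is a minimal Python sketch of the same broadcaster (same calibration numbers as above; the node name is my own). Note that rospy's tf.TransformBroadcaster.sendTransform takes the translation and quaternion as tuples and lists the child frame before the parent frame — the opposite order of the C++ tf::StampedTransform constructor:

#!/usr/bin/env python
# minimal Python sketch of the same broadcaster (assumes the same calibration result)
import rospy
import tf

if __name__ == '__main__':
    rospy.init_node('robot_tf_publisher_py')
    broadcaster = tf.TransformBroadcaster()
    rate = rospy.Rate(100)
    while not rospy.is_shutdown():
        broadcaster.sendTransform(
            (0.244332058703, -0.0155924006203, -0.0434545849503),                    # translation
            (0.0144077350003, -0.0295559697343, -0.0183787903043, 0.999290289101),  # quaternion (x, y, z, w)
            rospy.Time.now(),
            'camera_link',   # child frame
            'base_link')     # parent frame
        rate.sleep()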

I wrote tf_listener_OnePosetransform.cpp to transform and publish a single fixed pose; it also comes in handy later during position calibration:

#rosrun robot_setup_tf tf_listener_OnePosetransform
#include <ros/ros.h>
// #include <geometry_msgs/PointStamped.h>
#include "geometry_msgs/PoseStamped.h"
#include "geometry_msgs/Pose.h"
#include <tf/transform_listener.h>
#include <tf/transform_broadcaster.h>
#include <visualization_msgs/MarkerArray.h> 
 
geometry_msgs::PoseStamped real_pose;
geometry_msgs::PoseStamped transed_pose;
 
void transformPose(const tf::TransformListener& listener){
  //we'll create a point in the base_laser frame that we'd like to transform to the base_link frame
  // geometry_msgs::PointStamped laser_point;
  // geometry_msgs::PoseStamped msg;
  // laser_point.header.frame_id = "camera_link";
  // real_pose.header.frame_id = "camera_link";
  real_pose.header.frame_id = "camera_rgb_optical_frame";
  // real_pose.header.frame_id = "camera_rgb_frame";
  //we'll just use the most recent transform available for our simple example
  // laser_point.header.stamp = ros::Time();
  real_pose.header.stamp = ros::Time();
  real_pose.pose.position.x = -0.0521919690073; 
  real_pose.pose.position.y = 0.0947469994426; 
  real_pose.pose.position.z = 0.928996682167;   
  real_pose.pose.orientation.w = 0.487851679325; 
  real_pose.pose.orientation.x = 0.518789410591; 
  real_pose.pose.orientation.y = -0.516209602356; 
  real_pose.pose.orientation.z = 0.47580036521; 
  //just an arbitrary point in space
 
  try{
    // geometry_msgs::PointStamped base_point;
    // listener.transformPoint("base_link", laser_point, base_point);
    listener.transformPose("base_link", real_pose, transed_pose);
 
    ROS_INFO("camera_rgb_frame: (%.2f, %.2f. %.2f, %.2f, %.2f, %.2f, %.2f) -----> base_link: (%.4f, %.4f, %.4f, %.4f, %.4f, %.4f, %.4f) at time %.2f",
        real_pose.pose.position.x, real_pose.pose.position.y, real_pose.pose.position.z, real_pose.pose.orientation.x, real_pose.pose.orientation.y, real_pose.pose.orientation.z, real_pose.pose.orientation.w,
        transed_pose.pose.position.x, transed_pose.pose.position.y, transed_pose.pose.position.z, transed_pose.pose.orientation.x, transed_pose.pose.orientation.y, transed_pose.pose.orientation.z, transed_pose.pose.orientation.w, transed_pose.header.stamp.toSec());
  }
  catch(tf::TransformException& ex){
    ROS_ERROR("Received an exception trying to transform a point from \"camera_rgb_optical_frame\" to \"base_link\": %s", ex.what());
  }
}
 
int main(int argc, char** argv){
  ros::init(argc, argv, "tf_listener_Posetransform");
  ros::NodeHandle n;
 
  tf::TransformListener listener(ros::Duration(10));
 
  //we'll transform a point once every second
  ros::Timer timer = n.createTimer(ros::Duration(1.0), boost::bind(&transformPose, boost::ref(listener)));
  ros::Publisher my_publisher_realPose = n.advertise<geometry_msgs::PoseStamped>("real_pose", 1000);
  ros::Publisher my_publisher_transedPose = n.advertise<geometry_msgs::PoseStamped>("transed_pose", 1000);
  ros::Rate loop_rate(10);
  while (ros::ok())
      {
        my_publisher_transedPose.publish(transed_pose);
        my_publisher_realPose.publish(real_pose);
 
        ros::spinOnce();
        loop_rate.sleep();
      }
  return 0;
}

The pose values here were hand-copied from an actual detection; in practice you can edit them to watch how the transformed point moves. Both real_pose and transed_pose are of type geometry_msgs::PoseStamped, and this is also where you will later check transform results. tf_listener_OnePosetransform defines only one node, but that node both subscribes and publishes. The call listener.transformPose("base_link", real_pose, transed_pose); listens for the frame relationship published earlier and converts the pose (our hand-written real_pose, expressed in the camera's optical frame) into base_link coordinates. Be sure the pose is stamped with the correct frame: real_pose.header.frame_id = "camera_rgb_optical_frame". In the format string, %.2f means a float printed to two decimal places, so it can be changed to %.4f for four. The 1000 passed to advertise is the maximum number of outgoing messages to queue before old ones are dropped.
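The same one-off transform can be sketched in Python with tf.TransformListener — a minimal equivalent for illustration, not the node I actually ran; waitForTransform guards against calling transformPose before the listener's buffer has filled:

#!/usr/bin/env python
# minimal Python sketch of the same single-pose transform
import rospy
import tf
from geometry_msgs.msg import PoseStamped

if __name__ == '__main__':
    rospy.init_node('one_pose_transform_py')
    listener = tf.TransformListener()

    real_pose = PoseStamped()
    real_pose.header.frame_id = 'camera_rgb_optical_frame'
    real_pose.header.stamp = rospy.Time(0)   # 0 = use the latest available transform
    real_pose.pose.position.x = -0.0521919690073
    real_pose.pose.position.y = 0.0947469994426
    real_pose.pose.position.z = 0.928996682167
    real_pose.pose.orientation.x = 0.518789410591
    real_pose.pose.orientation.y = -0.516209602356
    real_pose.pose.orientation.z = 0.47580036521
    real_pose.pose.orientation.w = 0.487851679325

    # block until the base_link <- camera_rgb_optical_frame transform is available
    listener.waitForTransform('base_link', 'camera_rgb_optical_frame',
                              rospy.Time(0), rospy.Duration(4.0))
    transed_pose = listener.transformPose('base_link', real_pose)
    rospy.loginfo('base_link pose: %s', transed_pose.pose)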

---------------------------------- On the error: output position is nan nan nan nan --------------------------------

If the output positions come out as nan nan nan nan (nan = not a number), the pose data is invalid; the likely cause is that the publishing node is sending malformed messages. Check what the node is actually sending with rostopic echo on the topic (and rosmsg show on the type).

----------------------------------------------------------------------------------------------------------------------------

Next, these two poses have to be published:

ros::Publisher my_publisher_realPose = n.advertise<geometry_msgs::PoseStamped>("real_pose", 1000);

my_publisher_realPose is the name we gave the publisher object; the message type is geometry_msgs::PoseStamped and the topic name is real_pose (so in rviz you can add a PoseStamped display, select this topic, and watch the pose).

That line only declares that the message will be published; the actual publishing happens in the next step:

my_publisher_realPose.publish(real_pose);

With that, we have transformed a custom point from camera_rgb_optical_frame to base_link.

You can inspect the message type with rosmsg; digging into its fields works exactly as before.

#rosmsg show geometry_msgs/PoseStamped   (prints the PoseStamped message definition)
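For reference, that command prints:

std_msgs/Header header
  uint32 seq
  time stamp
  string frame_id
geometry_msgs/Pose pose
  geometry_msgs/Point position
    float64 x
    float64 y
    float64 z
  geometry_msgs/Quaternion orientation
    float64 x
    float64 y
    float64 z
    float64 w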

                   -----------------------------------------------------3---------------------------------------------------------

3. Transforming the object position from camera_rgb_optical_frame to base_link:

#rosrun robot_setup_tf tf_listener

Create tf_listener.cpp (the required dependencies and package setup are covered in the official subscriber/publisher tutorials linked at the top of this post).

(Forgive me: most of the reference code I found for tf is C++, so after every modification I had to recompile, and both the subscribing and publishing here are done in C++; yet in the final arm-control program the subscription to this message is in Python again. Keep this limitation of mine in mind — I am simply more comfortable with C++ here.)

#include <ros/ros.h>
#include "geometry_msgs/PoseStamped.h"
#include "geometry_msgs/Pose.h"
#include <tf/transform_listener.h>
#include <tf/transform_broadcaster.h>
// #include "std_msgs/String.h"
#include <object_recognition_msgs/RecognizedObjectArray.h> //using ORK
#include <visualization_msgs/MarkerArray.h> 
 
// transform the live detected pose of the coke can in real time
 
std::string Object_id = "";
double Object_assurance = 0;
geometry_msgs::PoseStamped Object_pose;
geometry_msgs::PoseStamped transed_pose;
bool firstCB = false;
 
void myCallback(const object_recognition_msgs::RecognizedObjectArray& objects_msg) // callback: keep the most confident detection
{ 
  double confident = 0;
  int id = -1;
  
  if (firstCB == false && (int)objects_msg.objects.size() == 1) {
        Object_id.assign(objects_msg.objects[0].type.key.c_str());
        firstCB = true;
    }
  
  for (int i = 0; i < objects_msg.objects.size(); ++i) {
    if (Object_id.compare(objects_msg.objects[i].type.key.c_str()) == 0) {
      if (objects_msg.objects[i].confidence > confident) {
          
          confident = objects_msg.objects[i].confidence;
          id = i;
          }
        }
    }
 
  if (id >= 0) {
        Object_pose.pose = objects_msg.objects[id].pose.pose.pose;
        Object_assurance = objects_msg.objects[id].confidence;
    } else {
        confident = 0;
    }
}
 
void transformPose(const tf::TransformListener& listener){
  //we'll create a point in the base_laser frame that we'd like to transform to the base_link frame
  // geometry_msgs::PointStamped laser_point;
  // geometry_msgs::PoseStamped transed_pose;
  // laser_point.header.frame_id = "camera_link";
  // msg.header.frame_id = "camera_link";
 
  // Object_pose.header.frame_id = "camera_rgb_frame";
  Object_pose.header.frame_id = "camera_rgb_optical_frame";
  //we'll just use the most recent transform available for our simple example
  // laser_point.header.stamp = ros::Time();
  ROS_INFO("Waiting for the object");
  if (Object_assurance > 0.8) {
      ROS_INFO("Best Similarity = %f ", Object_assurance);
      ROS_INFO("pose x is: %f", Object_pose.pose.position.x);
      ROS_INFO("pose y is: %f", Object_pose.pose.position.y);
      ROS_INFO("pose z is: %f", Object_pose.pose.position.z);
 
      bool tferr = true;
      while (tferr) {
        tferr = false;
      
      try{  
        listener.transformPose("base_link", Object_pose, transed_pose);
        }
      catch (tf::TransformException &exception) {
          ROS_ERROR("Received an exception trying to transform a point from \"camera_rgb_frame\" to \"base_link\": %s", exception.what());
          tferr = true;
          ros::Duration(0.1).sleep(); 
          ros::spinOnce();
        }
 
      } //while
 
    ROS_INFO("camera_rgb_optical_frame: (%.2f, %.2f. %.2f, %.2f, %.2f, %.2f, %.2f) -----> base_link: (%.2f, %.2f, %.2f, %.2f, %.2f, %.2f, %.2f) at time %.2f",
        Object_pose.pose.position.x, Object_pose.pose.position.y, Object_pose.pose.position.z, Object_pose.pose.orientation.x, Object_pose.pose.orientation.y, Object_pose.pose.orientation.z, Object_pose.pose.orientation.w,
        transed_pose.pose.position.x, transed_pose.pose.position.y, transed_pose.pose.position.z, transed_pose.pose.orientation.x, transed_pose.pose.orientation.y, transed_pose.pose.orientation.z, transed_pose.pose.orientation.w, transed_pose.header.stamp.toSec());
  }  //if
} //void
 
int main(int argc, char** argv){
  ros::init(argc, argv, "robot_tf_listener");
  ros::NodeHandle n;
 
  ros::Subscriber my_subscriber_object= n.subscribe("recognized_object_array",1000,myCallback);
  tf::TransformListener listener(ros::Duration(10));
 
  //we'll transform a point once every second
  ros::Timer timer = n.createTimer(ros::Duration(1.0), boost::bind(&transformPose, boost::ref(listener)));
  ros::Publisher my_publisher_transPose = n.advertise<geometry_msgs::PoseStamped>("transformPose", 1000);
 
  ros::Publisher my_publisher_ObjectPose = n.advertise<geometry_msgs::PoseStamped>("ObjectPose", 1000);
 
  ros::Rate loop_rate(10);
  while (ros::ok())
    {
      // std_msgs::String msg;
      // msg.data = ss.str(); 
      // ROS_INFO("=====transformedPose======: (%.2f, %.2f, %.2f, %.2f, %.2f, %.2f, %.2f)", transed_pose.pose.position.x, transed_pose.pose.position.y, transed_pose.pose.position.z, transed_pose.pose.orientation.x, transed_pose.pose.orientation.y, transed_pose.pose.orientation.z, transed_pose.pose.orientation.w);
      my_publisher_transPose.publish(transed_pose);
      my_publisher_ObjectPose.publish(Object_pose);
 
      ros::spinOnce();
      loop_rate.sleep();
    }
    return 0;
}

This is the complete program needed for the final tf transform stage. It does three things:

(1) subscribe to the object information, filter it with a simple check, and publish it;

(2) transform the object pose from camera_rgb_optical_frame to base_link;

(3) publish the transformed pose.

The program closely mirrors the single-point transform above, so I won't repeat the details. One thing worth noting:

geometry_msgs::PoseStamped Object_pose;
geometry_msgs::PoseStamped transed_pose;

The object pose and the transformed pose must be declared as global variables, so that the subscriber callback, the timer callback, and the publishing loop all operate on the same data. A Python sketch of the whole pipeline follows.
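Since the final arm-control program ends up subscribing to these topics from Python anyway, here is a minimal Python sketch of the same subscribe → transform → publish pipeline (an illustration under the same topic and frame names as the C++ node above, not the node I actually ran; node and variable names are my own):

#!/usr/bin/env python
# minimal sketch: subscribe -> transform -> publish, mirroring the C++ node above
import rospy
import tf
from geometry_msgs.msg import PoseStamped
from object_recognition_msgs.msg import RecognizedObjectArray

# as in the C++ version, the poses live in global variables shared by callback and loop
object_pose = PoseStamped()
object_confidence = 0.0

def callback(msg):
    # keep the most confident detection, as the C++ callback does
    global object_confidence
    best = None
    for obj in msg.objects:
        if best is None or obj.confidence > best.confidence:
            best = obj
    if best is not None:
        object_pose.header.frame_id = 'camera_rgb_optical_frame'
        object_pose.pose = best.pose.pose.pose
        object_confidence = best.confidence

if __name__ == '__main__':
    rospy.init_node('object_tf_listener_py')
    rospy.Subscriber('recognized_object_array', RecognizedObjectArray, callback)
    listener = tf.TransformListener()
    pub = rospy.Publisher('transformPose', PoseStamped, queue_size=1000)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        if object_confidence > 0.8:
            try:
                object_pose.header.stamp = rospy.Time(0)  # 0 = latest available transform
                pub.publish(listener.transformPose('base_link', object_pose))
            except (tf.LookupException, tf.ConnectivityException,
                    tf.ExtrapolationException) as e:
                rospy.logwarn('transform failed: %s', e)
        rate.sleep()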


Other notes:

At the tail end of October, this little project's task was finally completed. Today was only a brief summary of the tf part; tomorrow I'll finish summarizing the remaining calibration and tuning work, recording how the errors were analyzed and resolved step by step until the arm actually grasped the object. The situation below is one I kept getting stuck on; the next post, the most important and most complete one, walks through finding my own mistakes, fixing the programs, and grasping the object, along with my own test video. So far the algorithm looks fairly reliable for grasping tasks, though its stability is still imperfect: detection is poor near the edges of the workspace (it easily gets stuck with no detection result). Still, whenever there is a detection the grasp succeeds, which shows the rest of my pipeline is sound; the algorithm's weaknesses are exactly where future paper contributions lie.

This records my month of work. Once tomorrow's final summary is done, I can move on to the next stage. Keep pushing! ↖(^ω^)↗


Appendix: a Python program you can run directly to get the current end-effector pose — a handy little add-on.

#rosrun ur10_rg2_moveit_config my_pose_print.py   (prints the current end-effector pose)

#!/usr/bin/env python
 
# Software License Agreement (BSD License)
#
# Copyright (c) 2013, SRI International
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above
#    copyright notice, this list of conditions and the following
#    disclaimer in the documentation and/or other materials provided
#    with the distribution.
#  * Neither the name of SRI International nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
# Author: Acorn Pooley, Mike Lautman
 
## BEGIN_SUB_TUTORIAL imports
##
## To use the Python MoveIt! interfaces, we will import the `moveit_commander`_ namespace.
## This namespace provides us with a `MoveGroupCommander`_ class, a `PlanningSceneInterface`_ class,
## and a `RobotCommander`_ class. (More on these below)
##
## We also import `rospy`_ and some messages that we will use:
##
import math
import tf
import sys
import copy
import rospy
import moveit_commander
import moveit_msgs.msg
import geometry_msgs.msg
from math import pi
from std_msgs.msg import String
from moveit_commander.conversions import pose_to_list
from ur_control.srv import *
import random
 
 
 
class MoveGroupPythonIntefaceTutorial(object):
    """MoveGroupPythonIntefaceTutorial"""
    def __init__(self):
        super(MoveGroupPythonIntefaceTutorial, self).__init__()
 
        ## BEGIN_SUB_TUTORIAL setup
        ##
        ## First initialize `moveit_commander`_ and a `rospy`_ node:
        moveit_commander.roscpp_initialize(sys.argv)
        rospy.init_node('end_effector_pose_print',
                        anonymous=True)
 
        ## Instantiate a `RobotCommander`_ object. This object is the outer-level interface to
        ## the robot:
        robot = moveit_commander.RobotCommander()
 
        ## Instantiate a `PlanningSceneInterface`_ object.  This object is an interface
        ## to the world surrounding the robot:
        scene = moveit_commander.PlanningSceneInterface()
 
        ## Instantiate a `MoveGroupCommander`_ object.  This object is an interface
        ## to one group of joints.  In this case the group is the joints in the Panda
        ## arm so we set ``group_name = panda_arm``. If you are using a different robot,
        ## you should change this value to the name of your robot arm planning group.
        ## This interface can be used to plan and execute motions on the Panda:
        group_name = "manipulator"
        group = moveit_commander.MoveGroupCommander(group_name)
 
        ## We create a `DisplayTrajectory`_ publisher which is used later to publish
        ## trajectories for RViz to visualize:
        display_trajectory_publisher = rospy.Publisher('/move_group/display_planned_path',
                                                       moveit_msgs.msg.DisplayTrajectory,
                                                       queue_size=20)
 
        ## END_SUB_TUTORIAL
 
        ## BEGIN_SUB_TUTORIAL basic_info
        ##
        ## Getting Basic Information
        ## ^^^^^^^^^^^^^^^^^^^^^^^^^
        # We can get the name of the reference frame for this robot:
        planning_frame = group.get_planning_frame()
        print "============ Reference frame: %s" % planning_frame
 
        # We can also print the name of the end-effector link for this group:
        eef_link = group.get_end_effector_link()
        print "============ End effector: %s" % eef_link
 
        # We can get a list of all the groups in the robot:
        group_names = robot.get_group_names()
        print "============ Robot Groups:", robot.get_group_names()
 
        # Sometimes for debugging it is useful to print the entire state of the
        # robot:
        print "============ Printing robot state"
        print robot.get_current_state()
        print ""
        ## END_SUB_TUTORIAL
 
        # Misc variables
        self.box_name = ''
        self.robot = robot
        self.scene = scene
        self.group = group
        self.display_trajectory_publisher = display_trajectory_publisher
        self.planning_frame = planning_frame
        self.eef_link = eef_link
        self.group_names = group_names
 
 
    def print_current_endeffector_pose(self):
        print self.group.get_current_pose()
        pass
 
 
def main():
  try:
    print "============ Press `Enter` to begin to pose endeffector_pose"
    # raw_input()
    tutorial = MoveGroupPythonIntefaceTutorial()
 
    rospy.sleep(1.0)
    while not rospy.is_shutdown():
        print("---")
        tutorial.print_current_endeffector_pose()
        rospy.sleep(0.5)
        pass
 
    print "endeffector_pose complete!"
  except rospy.ROSInterruptException:
    return
  except KeyboardInterrupt:
    return
 
if __name__ == '__main__':
  main()

-------------------------------------------------- Update 2019.7.19 --------------------------------------------------------------

I've spent the past half year preparing a paper submission and studying for IELTS, so I haven't been logging in to CSDN to reply. Hoping the 8.24 IELTS goes well — keep at it. After IELTS and applications are done, I'll summarize the paper work from the first two and a half months of this year: 3D object recognition, 6D pose estimation, and detection of overlapping objects.

Also, many people have reached me by email and I have answered them there. If you have questions, please email 1701210376@pku.edu.cn; I still won't be replying on CSDN for now.

If anyone is interested in joining: the Institute of Automation is recruiting interns!!!

[CASIA Brain-Inspired Cognition Center — brain-inspired robotics intern recruitment]
Brain-inspired cognitive robotics algorithms and applications intern: responsible for ROS-based cognitive robot development. Applicants should be in automatic control or a related field (or genuinely passionate about it); experience with ROS development or a Python programming background is preferred, and an internship of 6+ months is ideal. We provide Baxter robots, a Robotnik mobile base, NAO robots, and other hardware to support interns' research.
If interested, please contact tielin.zhang@ia.ac.cn

--------------------------------------------------------7.19--------------------------------------------------------------------------------------

------------------------------------------------------------------------------- Update 2020.3.2 -------------------------------------------------------------

I've been preparing for interviews lately; for questions, contact the email above. The final code links, the paper itself, and the implementation steps are given in the concluding post of this series.

Also, my scene-design illustrations are on ZCOOL; feel free to stop by and chat: www.zcool.com.cn/u/20178144