Preface

Continuing from the previous post: ROS Robot Programming in Practice (12.1) - Simulating a line-following robot with turtlebot

Line-Following Design

Designing a Gazebo ground plane with a custom image

First, we need to design a ground plane for Gazebo. Credit to this answer for the method: reference Q&A.
Step 1: in your home directory, press Ctrl+H to show hidden files.
Step 2: create the following folders inside the .gazebo folder:
mkdir ~/.gazebo/models/my_ground_plane

mkdir -p ~/.gazebo/models/my_ground_plane/materials/textures 

mkdir -p ~/.gazebo/models/my_ground_plane/materials/scripts
Step 3: create the material file:
cd ~/.gazebo/models/my_ground_plane/materials/scripts
vi my_ground_plane.material
my_ground_plane.material is as follows:
material MyGroundPlane/Image
        {
          receive_shadows on
          technique
          {
            pass
            {
              ambient 0.5 0.5 0.5 1.0
              texture_unit
              {
                texture MyImage.png
              }
            }
          }
        }
Step 4: put the ground image you want to use, MyImage.png, under ~/.gazebo/models/my_ground_plane/materials/textures. You can copy it with:
cp 你的图片路径/MyImage.png ~/.gazebo/models/my_ground_plane/materials/textures/
Step 5: in the my_ground_plane folder, create the file model.sdf:
cd ~/.gazebo/models/my_ground_plane
vi model.sdf
model.sdf is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<sdf version="1.4">
   <model name="my_ground_plane">
      <static>true</static>
      <link name="link">
         <collision name="collision">
            <geometry>
               <plane>
                  <normal>0 0 1</normal>
                  <size>15 15</size>
               </plane>
            </geometry>
            <surface>
               <friction>
                  <ode>
                     <mu>100</mu>
                     <mu2>50</mu2>
                  </ode>
               </friction>
            </surface>
         </collision>
         <visual name="visual">
            <cast_shadows>false</cast_shadows>
            <geometry>
               <plane>
                  <normal>0 0 1</normal>
                  <size>15 15</size>
               </plane>
            </geometry>
            <material>
               <script>
                  <uri>model://my_ground_plane/materials/scripts</uri>
                  <uri>model://my_ground_plane/materials/textures/</uri>
                  <name>MyGroundPlane/Image</name>
               </script>
            </material>
         </visual>
      </link>
   </model>
</sdf>
Step 6: in the my_ground_plane folder, create the file model.config with the following content:
<?xml version="1.0" encoding="UTF-8"?>
<model>
   <name>My Ground Plane</name>
   <version>1.0</version>
   <sdf version="1.4">model.sdf</sdf>
   <description>My textured ground plane.</description>
</model>
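As a quick sanity check (not part of the original tutorial), you can parse the model.config above with Python's standard library to confirm it is well-formed XML and that Gazebo will see the expected model name in its Insert panel:

```python
import xml.etree.ElementTree as ET

# model.config content from the step above, verbatim.
CONFIG = b"""<?xml version="1.0" encoding="UTF-8"?>
<model>
   <name>My Ground Plane</name>
   <version>1.0</version>
   <sdf version="1.4">model.sdf</sdf>
   <description>My textured ground plane.</description>
</model>"""

# Parse from bytes: ElementTree rejects str input that carries
# an encoding declaration in the XML prolog.
root = ET.fromstring(CONFIG)

# <name> is the label Gazebo shows in the Insert list.
print(root.findtext("name"))
# <sdf> points at the model definition file next to model.config.
print(root.find("sdf").get("version"), root.findtext("sdf"))
```

If the name printed here does not match what you see in Gazebo's Insert panel, the config file was probably saved in the wrong directory.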
 

Importing your own ground model into Gazebo

Open Gazebo:
roslaunch turtlebot_gazebo turtlebot_world.launch 
In another terminal, run the following command to open rviz:
roslaunch turtlebot_rviz_launchers view_robot.launch --screen
Click Insert in the top-left of Gazebo to add the model you just built, and select My Ground Plane. Left-click once, then move the mouse to drag it into the scene. In the Models list, right-click to delete everything except your ground plane and the robot. Use Gazebo's translate tool to place the robot on the line, then use the rotate tool to point the robot's camera at the yellow line: drag the blue ring to rotate the robot about the z axis until it is lined up.

Writing a script to make the robot follow the line

Create a ROS workspace

 
mkdir -p ~/turtlebot_ws/src
cd ~/turtlebot_ws
catkin_init_workspace
cd src
catkin_create_pkg turtlebot1 rospy geometry_msgs sensor_msgs
cd ..
catkin_make
source ./devel/setup.bash
 

Writing the yellow-line filter script

Here we extract the yellow line in HSV space. First, look up the minimum and maximum HSV values for yellow. Then write the filter script follower_color_filter.py under src in the turtlebot1 package:
cd src/turtlebot1/src
vi follower_color_filter.py
rosrun turtlebot1 follower_color_filter.py
follower_color_filter.py is as follows:
#!/usr/bin/env python
# BEGIN ALL
import rospy, cv2, cv_bridge, numpy
from sensor_msgs.msg import Image

class Follower:
  def __init__(self):
    self.bridge = cv_bridge.CvBridge()
    cv2.namedWindow("window", 1)
    self.image_sub = rospy.Subscriber('camera/rgb/image_raw', 
                                      Image, self.image_callback)
  def image_callback(self, msg):
    # BEGIN BRIDGE
    # convert to BGR explicitly so COLOR_BGR2HSV below is correct
    image = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    cv2.imshow("ori", image )
    # END BRIDGE
    # BEGIN HSV
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    cv2.imshow("hsv", hsv )
    # END HSV
    # BEGIN FILTER
    lower_yellow = numpy.array([ 26,  43, 46])
    upper_yellow = numpy.array([34, 255, 255])
    mask = cv2.inRange(hsv, lower_yellow, upper_yellow)
    # END FILTER
    masked = cv2.bitwise_and(image, image, mask=mask)
    cv2.imshow("window2", mask ) 
    cv2.waitKey(3)

rospy.init_node('follower')
follower = Follower()
rospy.spin()
# END ALL
Run result: the mask isolates the yellow line cleanly. Very nice!
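To see why the bounds [26, 43, 46] to [34, 255, 255] bracket yellow, here is a small dependency-free reimplementation of OpenCV's single-pixel BGR-to-HSV conversion (the function name is mine, not from the tutorial). OpenCV halves the hue so it fits in 8 bits, which puts pure yellow at H = 30, right in the middle of the range:

```python
def bgr_to_hsv_pixel(b, g, r):
    """Convert one 8-bit BGR pixel to OpenCV-style HSV (H in 0..179)."""
    v = max(b, g, r)
    diff = v - min(b, g, r)
    s = 0 if v == 0 else round(255 * diff / v)
    if diff == 0:
        h = 0.0
    elif v == r:
        h = (60 * (g - b) / diff) % 360
    elif v == g:
        h = 120 + 60 * (b - r) / diff
    else:
        h = 240 + 60 * (r - g) / diff
    return round(h / 2), s, v  # OpenCV stores H/2 to fit 8 bits

# Pure yellow in BGR is (0, 255, 255) -> HSV (30, 255, 255)
print(bgr_to_hsv_pixel(0, 255, 255))
```

Any pixel whose converted H falls in [26, 34] with sufficient saturation and value survives the cv2.inRange mask above.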

Writing the line-following script

 
vi follower_line.py
rosrun turtlebot1 follower_line.py
 
#!/usr/bin/env python
# BEGIN ALL
import rospy, cv2, cv_bridge, numpy
from sensor_msgs.msg import Image
from geometry_msgs.msg import Twist
 
class Follower:
  def __init__(self):
    self.bridge = cv_bridge.CvBridge()
    cv2.namedWindow("window", 1)
    self.image_sub = rospy.Subscriber('camera/rgb/image_raw', 
                                      Image, self.image_callback)
    self.cmd_vel_pub = rospy.Publisher('cmd_vel_mux/input/teleop',
                                       Twist, queue_size=1)
    self.twist = Twist()
  def image_callback(self, msg):
    image = self.bridge.imgmsg_to_cv2(msg,desired_encoding='bgr8')
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    lower_yellow = numpy.array([ 26,  43, 46])
    upper_yellow = numpy.array([34, 255, 255])
    mask = cv2.inRange(hsv, lower_yellow, upper_yellow)
     
    h, w, d = image.shape
    search_top = 3 * h // 4        # integer division: mask indices must be ints
    search_bot = 3 * h // 4 + 20
    mask[0:search_top, 0:w] = 0
    mask[search_bot:h, 0:w] = 0
    M = cv2.moments(mask)
    if M['m00'] > 0:
      cx = int(M['m10']/M['m00'])
      cy = int(M['m01']/M['m00'])
      cv2.circle(image, (cx, cy), 20, (0,0,255), -1)
      # BEGIN CONTROL
      err = cx - w/2
      self.twist.linear.x = 0.2
      self.twist.angular.z = -float(err) / 100
      self.cmd_vel_pub.publish(self.twist)
      # END CONTROL
    cv2.imshow("window", image)
    cv2.waitKey(3)
 
rospy.init_node('follower')
follower = Follower()
rospy.spin()
# END ALL

Run the script:
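The control step in follower_line.py reduces to two ideas: find the centroid of the masked band via image moments (what cv2.moments returns as m00, m10, m01), then steer proportionally to the centroid's horizontal offset from the image center. Here is a dependency-free sketch of that logic on a toy mask (the helper names are mine, not from the tutorial):

```python
def mask_centroid(mask):
    """Centroid (cx, cy) of a binary mask from raw image moments,
    mirroring the m00/m10/m01 values cv2.moments provides."""
    m00 = m10 = m01 = 0
    for y, row in enumerate(mask):
        for x, val in enumerate(row):
            m00 += val          # total mass (pixel count)
            m10 += val * x      # first moment in x
            m01 += val * y      # first moment in y
    if m00 == 0:
        return None             # no line pixels in view
    return m10 // m00, m01 // m00

def steer(cx, width, gain=1.0 / 100):
    """P-controller from follower_line.py: angular.z = -err / 100."""
    err = cx - width / 2
    return -float(err) * gain

# Toy 4x8 mask with a vertical "line" in columns 5-6, right of center.
mask = [[1 if x in (5, 6) else 0 for x in range(8)] for _ in range(4)]
cx, cy = mask_centroid(mask)
print(cx, cy)
# Negative angular.z turns the robot clockwise, toward the line.
print(steer(cx, width=8))
```

A larger gain (smaller divisor than 100) makes the robot react faster but oscillate more around the line; the fixed linear.x of 0.2 m/s is the cruising speed.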

Summary

Pretty fun, right? Now you can simulate line-following algorithms in Gazebo. If you've ever built a smart car, this should feel familiar. Enjoy!