Project #5: Line Follower and Maze Solving robots

Muhammad Luqman edited this page Jul 25, 2023 · 5 revisions

Issue Link #5

Project Overview

This project consists of two key tasks: maze solving and line following, both performed by a Turtlebot3 robot in a simulated environment. The tasks are carried out using ROS2 (Robot Operating System 2) for communication between different components, and OpenCV for image processing in the line-following task.

Installing Dependencies

  • sudo apt install ros-humble-gazebo-ros-pkgs
  • sudo apt install ros-humble-turtlebot3-gazebo
  • sudo apt install python3-opencv

Running the Project

  • Clone the repository into your ROS2 workspace (if not done already) using:
    git clone -b running_projects https://github.com/Robotisim/mobile_robotics_ROS2.git
    
  • Build your workspace and source it by running the following command (assuming you are in the workspace root directory):
    colcon build && source install/setup.bash
    
  • Export the TurtleBot3 model in each terminal you use:
    export TURTLEBOT3_MODEL=waffle_pi
    
  • Run Maze Solving
    ros2 launch drive_tb3 p5_a_maze_solve.launch.py
    
  • Run Line Following
    ros2 launch drive_tb3 p5_b_line_following.launch.py
    
  • Line following will not work out of the box: the world file mobile_robotics_ROS2/drive_tb3/worlds/line_following.world contains a hard-coded mesh path. Find the four occurrences of <uri>/home/luqman/robotisim_ws/src/mobile_robotics_ROS2/drive_tb3/models/meshes/base.dae</uri>.
  • Replace /home/luqman/robotisim_ws in each occurrence with the absolute path of your own workspace.

Nodes

This project includes the following nodes for driving the TurtleBot3 robot.

  • p5_a_lidar_data_sub.cpp : This node subscribes to the lidar data published by the robot. It processes the data to identify distances to obstacles on the right, front, and left of the robot.
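The sector reduction described above can be sketched as a plain, ROS-free function. The sector boundaries, and the assumption of one beam per degree with index 0 pointing straight ahead and angles increasing counter-clockwise, are illustrative choices, not values taken from the actual node:

```cpp
#include <algorithm>
#include <limits>
#include <vector>

// Distances to the nearest obstacle in three sectors of a 360-beam scan.
struct SectorDistances {
    float right;
    float front;
    float left;
};

// Minimum range over the degrees [begin, end), wrapping around 360.
static float minInSector(const std::vector<float>& ranges, int begin, int end) {
    float best = std::numeric_limits<float>::infinity();
    for (int deg = begin; deg < end; ++deg) {
        best = std::min(best, ranges[((deg % 360) + 360) % 360]);
    }
    return best;
}

// Reduce a full scan to right/front/left distances (sector widths assumed).
SectorDistances extractSectors(const std::vector<float>& ranges) {
    return {
        minInSector(ranges, 260, 280),  // right: around -90 degrees
        minInSector(ranges, -10, 10),   // front: around 0 degrees
        minInSector(ranges, 80, 100),   // left: around +90 degrees
    };
}
```

In the real node the input would come from the ranges array of a sensor_msgs/msg/LaserScan message.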

  • p5_b_maze_solving.cpp : This node uses the data provided by the lidar_data_sub node to navigate the robot through the maze. It implements a basic decision-making algorithm to choose the robot's actions based on the proximity of obstacles. The robot moves straight by default, turns left or right if there's an obstacle in front, and stops if it has exited the maze.
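The decision rule described above can be sketched as a plain function. The distance threshold and the "all sides open means the maze is exited" test are assumptions for illustration; the node's actual values and exit condition may differ:

```cpp
#include <string>

// Hypothetical obstacle threshold in metres.
constexpr float kObstacleDist = 0.5f;

// Choose the next action from the three sector distances:
// straight by default, turn toward the freer side when the front is
// blocked, and stop once every side is open (assumed maze exit).
std::string decide(float right, float front, float left) {
    if (right > kObstacleDist && front > kObstacleDist && left > kObstacleDist)
        return "stop";        // open space all around: assume we are out
    if (front > kObstacleDist)
        return "straight";    // path ahead is clear
    if (left > right)
        return "turn_left";   // more room on the left
    return "turn_right";
}
```

In the real node the chosen action would be translated into a geometry_msgs/msg/Twist command on /cmd_vel.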

  • p5_c_camera_data_sub.cpp : This node subscribes to the camera data published by the robot. It converts the raw image data to grayscale and displays the image.

  • p5_d_line_following.cpp : This node uses the data provided by the camera_data_sub node to make the robot follow a line. It applies the Canny edge detection algorithm to the grayscale image, finds the edges of the line, and calculates the midpoint of the line. The robot then adjusts its direction based on the position of the midpoint relative to the center of the image.
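The midpoint-to-steering step can be sketched as a simple proportional controller. The gain, the sign convention, and the edge-column inputs are hypothetical; the node's actual control law may differ:

```cpp
// Given the column indices of the line's left and right edges in one image
// row, steer toward the line's midpoint. Positive output = turn left,
// negative = turn right (hypothetical sign convention).
double steeringFromEdges(int leftEdge, int rightEdge, int imageWidth,
                         double kGain = 0.005) {
    const double midpoint = (leftEdge + rightEdge) / 2.0;
    const double error = imageWidth / 2.0 - midpoint;  // >0: line left of centre
    return kGain * error;  // angular velocity command
}
```

In the real node the result would be published as the angular.z field of a Twist message while a constant forward speed is maintained.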

Launch Files

  • p5_a_maze_solve.launch.py : This file is responsible for launching the maze-solving task. It starts the Gazebo server and client, spawns the robot in the maze world, and runs the maze-solving node which uses lidar data to navigate through the maze.

  • p5_b_line_following.launch.py : This file is responsible for launching the line-following task. It starts the Gazebo server and client, spawns the robot in the line-following world, and runs the line-following node which uses camera data to follow a line on the floor.

Learning Outcomes

  • Processing camera and lidar sensor data
  • Building a sensor-data-to-robot-motion pipeline
  • Creating 3D worlds in Gazebo