This is a ROS 2 package for Boston Dynamics' Spot. The package contains all necessary topics, services and actions to teleoperate or navigate Spot. This package is derived from this ROS 1 package. This package currently corresponds to version 4.1.0 of the spot-sdk.
This package is tested on Ubuntu 22.04 with ROS 2 Humble, which can be installed following this guide.
In the src directory of your ROS 2 workspace, clone the repository:
git clone https://github.com/bdaiinstitute/spot_ros2.git
and initialize and update the submodules:
cd spot_ros2
git submodule init
git submodule update
Then run the install script to install the necessary Boston Dynamics and ROS dependencies. The install script takes the optional argument --arm64; otherwise it defaults to an AMD64 install. Run the appropriate command for your system:
cd <path to spot_ros2>
./install_spot_ros2.sh
or
./install_spot_ros2.sh --arm64
From here, build and source the ROS 2 workspace:
cd <ros2 ws>
source /opt/ros/humble/setup.bash
colcon build --symlink-install --packages-ignore proto2ros_tests
source install/local_setup.bash
We suggest ignoring the proto2ros_tests package in the build, as it is not necessary for running the driver. If you choose to build it, you will see a number of error messages from testing the failure paths.
Alternatively, a Dockerfile is available that prepares a ready-to-run ROS 2 Humble install with the Spot driver installed.
No special precautions or actions need to be taken to build the image: just clone the repository and run docker build in the root of the repo.
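For example, assuming you tag the image as spot_ros2 (the tag name here is just illustrative):

cd spot_ros2
docker build -t spot_ros2 .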
No special actions need to be taken to run the image either. However, depending on what you intend to do with the driver in your project, the following flags may be useful:
| Flag | Use |
|---|---|
| --runtime nvidia + --gpus all | Use the NVIDIA Container Runtime to run the container with GPU acceleration. |
| -e DISPLAY | Bind your display to the container in order to run GUI apps. Note that you will need to allow the Docker container to connect to your X11 server, which can be done in a number of ways, ranging from disabling X11 authentication entirely to allowing the Docker daemon specifically to access your display server. |
| --network host | Use the host network directly. This may help resolve issues connecting to Spot WiFi. |
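As a sketch, a docker run invocation combining all of these flags might look like the following (the image name spot_ros2 is assumed from the build example above; drop any flags you don't need):

docker run -it --rm --runtime nvidia --gpus all -e DISPLAY --network host spot_ros2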
The image does not have the proto2ros_tests package built. You will need to build it yourself inside the container if you need to use it.
The Spot driver contains all of the necessary topics, services, and actions for controlling Spot over ROS 2. To launch the driver, run:
ros2 launch spot_driver spot_driver.launch.py [config_file:=<path/to/config.yaml>] [spot_name:=<Robot Name>] [publish_point_clouds:=<True|False>] [launch_rviz:=<True|False>] [uncompress_images:=<True|False>] [publish_compressed_images:=<True|False>]
The Spot login data (hostname, username, and password) can be specified either as ROS parameters or as environment variables. If using ROS parameters, see spot_driver/config/spot_ros_example.yaml for an example of what your file could look like. If using environment variables, define BOSDYN_CLIENT_USERNAME, BOSDYN_CLIENT_PASSWORD, and SPOT_IP.
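For example, a minimal environment-variable setup (placeholder values shown) followed by launching the driver:

export BOSDYN_CLIENT_USERNAME=<username>
export BOSDYN_CLIENT_PASSWORD=<password>
export SPOT_IP=<robot ip>
ros2 launch spot_driver spot_driver.launch.py spot_name:=<Robot Name>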
Many simple robot commands can be called as services from the command line once the driver is running. For example:
ros2 service call /<Robot Name>/sit std_srvs/srv/Trigger
ros2 service call /<Robot Name>/stand std_srvs/srv/Trigger
ros2 service call /<Robot Name>/undock std_srvs/srv/Trigger
ros2 service call /<Robot Name>/power_off std_srvs/srv/Trigger
If your Spot has an arm, some additional helpful services are exposed:
ros2 service call /<Robot Name>/arm_stow std_srvs/srv/Trigger
ros2 service call /<Robot Name>/arm_unstow std_srvs/srv/Trigger
ros2 service call /<Robot Name>/arm_carry std_srvs/srv/Trigger
ros2 service call /<Robot Name>/open_gripper std_srvs/srv/Trigger
ros2 service call /<Robot Name>/close_gripper std_srvs/srv/Trigger
The full list of interfaces provided by the driver can be explored via ros2 topic list, ros2 service list, and ros2 action list. For more information about the custom message types used in this package, run ros2 interface show <interface_type>.
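For example, to inspect the standard trigger service type used by the simple commands above:

ros2 interface show std_srvs/srv/Trigger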
See spot_examples for some more complex examples of using the ROS 2 driver to control Spot, which typically use the action servers provided by the driver.
Perception data from Spot is handled through the spot_image_publishers.launch.py launchfile, which is launched by default from the driver. If you want to only view images from Spot, without bringing up any of the nodes to control the robot, you can also choose to run this launchfile independently.
By default, the driver will publish RGB images as well as depth maps from the frontleft, frontright, left, right, and back cameras on Spot (plus hand if your Spot has an arm). You can customize which cameras are streamed by adding the cameras_used parameter to your config YAML (for example, to stream from only the front left and front right cameras, add cameras_used: ["frontleft", "frontright"]). Additionally, if your Spot has greyscale cameras, you will need to set rgb_cameras: False in your configuration YAML file, or you will not receive any image data.
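As a sketch, the relevant entries in a config YAML might look like the following; check spot_driver/config/spot_ros_example.yaml for the exact parameter nesting used by the driver:

/**:
  ros__parameters:
    cameras_used: ["frontleft", "frontright"]
    rgb_cameras: False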
By default, the driver does not publish point clouds. To enable this, launch the driver with publish_point_clouds:=True.
The driver can publish both compressed images (under /<Robot Name>/camera/<camera location>/compressed) and uncompressed images (under /<Robot Name>/camera/<camera location>/image). By default, it will only publish the uncompressed images. You can turn compressed and uncompressed images on or off by launching the driver with the flags uncompress_images:=<True|False> and publish_compressed_images:=<True|False>.
The driver also has the option to publish a stitched image created from Spot's front left and front right cameras (similar to what is seen on the tablet). If you wish to enable this, launch the driver with stitch_front_images:=True, and the image will be published under /<Robot Name>/camera/frontmiddle_virtual/image. In order to receive meaningful stitched images, you will have to specify the parameters virtual_camera_intrinsics, virtual_camera_projection_plane, virtual_camera_plane_distance, and stitched_image_row_padding (see spot_driver/config/spot_ros_example.yaml for some default values).
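For example, you might enable stitching while pointing the driver at a config file that defines these parameters:

ros2 launch spot_driver spot_driver.launch.py config_file:=<path/to/config.yaml> spot_name:=<Robot Name> stitch_front_images:=True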
NOTE:
If your image publishing rate is very slow, you can try
- connecting to your robot via ethernet cable
- exporting a custom DDS profile we have provided, by running the following in the same terminal your driver will run in, or by adding it to your .bashrc:

export FASTRTPS_DEFAULT_PROFILES_FILE=<path_to_file>/custom_dds_profile.xml
An optional custom Automatic Eye-in-Hand Stereo Calibration Routine for the arm is available in the spot_wrapper submodule; its output can be used with ROS 2 for improved depth-to-RGB correspondence for the hand cameras. See the readme at /spot_wrapper/spot_wrapper/calibration/README.md for target setup and other relevant information.
First, collect a calibration with spot_wrapper/spot_wrapper/calibrate_spot_hand_camera_cli.py. Make sure to use the default --stereo_pairs configuration and the default tag configuration (--tag default). For the robot and target setup described in /spot_wrapper/spot_wrapper/calibration/README.md, the default viewpoint ranges should suffice.
python3 spot_wrapper/spot_wrapper/calibrate_spot_hand_camera_cli.py --ip <IP> -u user -pw <SECRET> --data_path ~/my_collection/ \
--save_data --result_path ~/my_collection/calibrated.yaml --photo_utilization_ratio 1 --stereo_pairs "[(1,0)]" \
--spot_rgb_photo_width=640 --spot_rgb_photo_height=480 --tag default --legacy_charuco_pattern True
Then, you can run a publisher that transforms the depth image into the RGB image's frame with matching image dimensions, so that finding the 3D location of a feature found in RGB is as easy as passing the feature's pixel coordinates into the registered depth image and extracting the 3D location. For all possible command line arguments, run ros2 run spot_driver calibrated_reregistered_hand_camera_depth_publisher.py -h. For example:
ros2 run spot_driver calibrated_reregistered_hand_camera_depth_publisher.py --tag=default --calibration_path <SAVED_CAL> --robot_name <ROBOT_NAMESPACE> --topic_name /depth_registered/hand_custom_cal/image
You can treat the reregistered topic (in the above example, <ROBOT_NAME>/depth_registered/hand_custom_cal/image) as a drop-in replacement for the registered image published by the default Spot driver (<ROBOT_NAME>/depth_registered/hand/image). The registered depth can be easily used in downstream tools like Open3D (see creating RGBD Images and creating color point clouds from RGBD Images), thanks to the matching image dimensions and registration to a shared frame.
You can compare the new calibration to the old calibration by visualizing the colored point cloud from a bag in RViz. See the RViz setup below the bagging instructions.
First, collect a bag in which an object in the foreground is of a clearly different color than the background.
ROBOT_NAME=<ROBOT_NAME> && \
ros2 bag record --output drop_in_test --topics /tf /tf_static \
/${ROBOT_NAME}/depth/hand/image /${ROBOT_NAME}/camera/hand/camera_info \
/${ROBOT_NAME}/joint_states /${ROBOT_NAME}/camera/hand/image \
/${ROBOT_NAME}/depth_registered/hand/image
To see what the default calibration looks like:
# In separate terminals
ros2 bag play drop_in_test --loop
ROBOT_NAME=<ROBOT_NAME> && \
ros2 launch spot_driver point_cloud_xyzrgb.launch.py spot_name:=${ROBOT_NAME} camera:=hand
To see what the new calibration looks like:
# In separate terminals
ROBOT_NAME=<ROBOT_NAME> && \
ros2 bag play drop_in_test --loop --topics /${ROBOT_NAME}/depth/hand/image \
/${ROBOT_NAME}/camera/hand/camera_info /${ROBOT_NAME}/joint_states \
/${ROBOT_NAME}/camera/hand/image /tf /tf_static
ROBOT_NAME=<ROBOT_NAME> && \
CALIBRATION_PATH=<CALIBRATION_PATH> && \
ros2 run spot_driver calibrated_reregistered_hand_camera_depth_publisher.py --robot_name ${ROBOT_NAME} \
--calibration_path ${CALIBRATION_PATH} --topic depth_registered/hand/image
ROBOT_NAME=<ROBOT_NAME> && \
ros2 launch spot_driver point_cloud_xyzrgb.launch.py spot_name:=${ROBOT_NAME} camera:=hand
In RViz:
- Set the global frame to /<ROBOT_NAME>/hand.
- Click Add (bottom left) -> By topic -> /<ROBOT_NAME>/depth_registered/hand/points -> OK.
- On the left pane, expand the PointCloud2 message, then expand Topic. Set History Policy to Keep Last, Reliability Policy to Best Effort, and Durability Policy to Volatile (select these from the dropdowns).
Due to known issues with the Spot CAM, it is disabled by default. To enable publishing and usage through the driver, add the following line to your configuration YAML file:
initialize_spot_cam: True
The Spot CAM payload has known issues with the SSL certification process over HTTPS. If you see the following errors:
non-existing PPS 0 referenced
decode_slice_header error
no frame!
Then you will need to log into the Spot CAM via your browser. In your browser, navigate to:
https://<ip_address_of_spot>:<sdp_port>/h264.sdp.html
The default SDP port for the Spot CAM is 31102. Once inside, you will be prompted to log in using your username and password. Do so, and the WebRTC frames should begin to stream properly.
This repository also provides a ROS 2 control package designed to interface with Spot's joint control API. Further documentation and examples can be found in spot_ros2_control.
The bosdyn_msgs package is installed as a Debian package as part of the install_spot_ros2 script because it is very large. If preferred, it can instead be checked out from source here and built as a normal ROS 2 package (compilation takes about 15 minutes).
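If you do build it from source, it builds like any other package in your workspace; for example, assuming it has been cloned into your workspace's src directory (the --packages-up-to selection here is just one option):

colcon build --symlink-install --packages-up-to bosdyn_msgs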
If you encounter problems when using this repository, feel free to open an issue or PR.
If you encounter a ModuleNotFoundError for bosdyn packages upon running the driver, it is likely that the necessary Boston Dynamics API packages did not get installed with install_spot_ros2.sh. To check this, you can run the following command. Note that all versions should be 4.1.0.
$ pip list | grep bosdyn
bosdyn-api 4.1.0
bosdyn-api-msgs 4.1.0
bosdyn-auto-return-api-msgs 4.1.0
bosdyn-autowalk-api-msgs 4.1.0
bosdyn-choreography-client 4.1.0
bosdyn-client 4.1.0
bosdyn-core 4.1.0
bosdyn-graph-nav-api-msgs 4.1.0
bosdyn-keepalive-api-msgs 4.1.0
bosdyn-log-status-api-msgs 4.1.0
bosdyn-metrics-logging-api-msgs 4.1.0
bosdyn-mission 4.1.0
bosdyn-mission-api-msgs 4.1.0
bosdyn-msgs 4.1.0
bosdyn-spot-api-msgs 4.1.0
bosdyn-spot-cam-api-msgs 4.1.0
If these packages were not installed correctly on your system, you can try manually installing them following Boston Dynamics' guide.
The above command verifies the installation of the bosdyn packages from Boston Dynamics as well as the protobuf-to-ROS messages generated in the bosdyn_msgs package (these have msgs in the name). You can also verify that the bosdyn_msgs installation was correct with the following command:
$ ros2 pkg xml bosdyn_msgs -t version
4.1.0
Finally, you can verify the installation of the spot-cpp-sdk with the following command:
$ dpkg -l spot-cpp-sdk
||/ Name Version Architecture Description
+++-==============-============-============-=================================
ii spot-cpp-sdk 4.1.0 amd64 Boston Dynamics Spot C++ SDK
The parts of the code developed specifically for ROS 2 are covered by the MIT license. The parts of the code derived from the Clearpath Robotics ROS 1 driver are covered by the BSD3 license.
To contribute, install pre-commit via pip, run pre-commit install, and then run pre-commit run --all-files to verify that your code will pass inspection.
git clone https://github.com/bdaiinstitute/spot_ros2.git
cd spot_ros2
pip3 install pre-commit
pre-commit install
pre-commit run --all-files
Now whenever you commit code to this repository, it will be checked against our pre-commit hooks. You can also run git commit --no-verify if you wish to commit without checking against the hooks.
This project is a collaboration between the Mobile Autonomous Systems & Cognitive Robotics Institute (MASKOR) at FH Aachen and the Boston Dynamics AI Institute.
MASKOR contributors:
- Maximillian Kirsch
- Shubham Pawar
- Christoph Gollok
- Stefan Schiffer
- Alexander Ferrein
Boston Dynamics AI Institute contributors:
- Jenny Barry
- Daniel Gonzalez
- Tao Pang
- David Surovik
- Jiuguang Wang
- David Watkins
Linköping University contributors:
- Tommy Persson