Home
Welcome to the sot-torque-control wiki! This repository contains a set of dynamic-graph entities for the implementation of sensor-feedback control of legged robots. In this wiki you can find a description of these entities, which you can see in the image below.
All the entities in sot-torque-control have a command `init` that needs to be called to initialize the entity. Typically the `init` command takes as input the control loop period `dt` and other necessary information. Once you are connected to the python terminal on the robot, you can start creating entities and plugging signals. Once your computational graph is ready, you can start the computation by using the appropriate script `start_dynamic_graph`.
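A hypothetical session illustrating this workflow is sketched below; the module path, entity class, signal names and `init` arguments are assumptions made for the example and may differ from the actual code.

```python
from dynamic_graph import plug
# module path and class name below are assumptions made for this sketch
from dynamic_graph.sot.torque_control.position_controller import PositionController

dt = 0.001  # control-loop period in seconds (robot dependent)

pos_ctrl = PositionController('pos_ctrl')
pos_ctrl.init(dt)  # every entity must be initialized; extra arguments depend on the entity

# plug an output signal of one entity into an input signal of another
# ('robotState' and 'base6d_encoders' are assumed signal names)
plug(robot.device.robotState, pos_ctrl.base6d_encoders)

# once the graph is complete, start the computation with the
# start_dynamic_graph script mentioned above
```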
All entities have an extremely useful command to retrieve the list of signals, `displaySignals()`, and another command to retrieve the list of commands, `commands()`. These will be your best friends for the first few weeks! The documentation of a command can be seen by using the `help` python function. For instance, to get the documentation of the command `move` of the entity `traj_gen` you would type `help(traj_gen.move)`.
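Similarly, to inspect the `traj_gen` entity itself:

```python
traj_gen.displaySignals()   # list all the input/output signals of the entity
traj_gen.commands()         # list all the commands exposed by the entity
```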
The device entity represents the robot. Even though it does not belong to sot-torque-control but to sot-core, we describe it here because it is fundamental to understanding the whole system. The output signals of the device are the sensor measurements (e.g. encoders, IMU, force/torque sensors). The input signal of the device is the motor command (typically the motor currents). When connecting to the python interpreter on the robot, the device entity already exists and you can access it through `robot.device`.
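For example, a hedged sketch of how the device is typically inspected and wired (the signal names below are assumptions; use `displaySignals()` to get the real ones):

```python
from dynamic_graph import plug

robot.device.displaySignals()          # discover the actual sensor and command signal names

# read a sensor measurement (signal name is an assumption)
print(robot.device.robotState.value)

# plug the output of the control manager (described below) into the motor command input
plug(ctrl_manager.u_safe, robot.device.control)   # both signal names are assumptions
```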
The main role of this entity is to handle the fact that not all of the robot joints may be controlled by the same controller. For instance, small joints such as the wrists, grippers and neck are typically position controlled, whereas the other joints are torque controlled. The input signals of the Control Manager are the control commands computed by all the available controllers (i.e. the position and torque controllers in the figure above). The output signal is the control command to send to the device. To change the control mode of a joint you can use the command `setCtrlMode` (see also the commands `getCtrlMode`, `addCtrlMode` and `ctrlModes`). When changing the control mode of a joint, the Control Manager performs a 1-second linear interpolation between the old and the new control signal to avoid discontinuities. Another key feature of the Control Manager is a safety check: if the control commands are too large, it prints an error message and starts commanding zero current to the device. The maximum control can be changed through the input signal `max_current`. The stop of the control can also be triggered by another entity through the input signal `emergencyStop`.
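A hedged example of interacting with the Control Manager from the python terminal (the joint identifier and mode name below are assumptions; check `help(ctrl_manager.setCtrlMode)` for the actual signature):

```python
ctrl_manager.ctrlModes()                    # list the control modes that have been registered
ctrl_manager.getCtrlMode('rk')              # query the current mode of a joint (name assumed)
ctrl_manager.setCtrlMode('rk', 'torque')    # switch that joint to torque control;
                                            # a 1-second linear interpolation is applied
```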
This entity implements a simple Proportional-Derivative (PD) joint position control. The input signals are the encoder measurements, the reference joint positions and velocities and the PD gains. The output signal contains the desired motor currents.
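A minimal sketch of the underlying control law (vector layout and gain handling are simplified with respect to the entity):

```python
import numpy as np

def pd_position_control(q, dq, q_ref, dq_ref, Kp, Kd):
    """Joint-space PD law; the result plays the role of the desired motor currents."""
    q, dq = np.asarray(q), np.asarray(dq)
    q_ref, dq_ref = np.asarray(q_ref), np.asarray(dq_ref)
    return Kp * (q_ref - q) + Kd * (dq_ref - dq)
```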
This entity implements the joint torque control using a simple linear model for the relationship between motor current, joint torque and joint velocity. This model accounts for:
- viscous friction (proportional to joint velocity)
- Coulomb friction (proportional to the sign of joint velocity)
- a linear relationship between motor current and joint torque (i.e. we neglect the electric pole of the motor transfer function because it is typically much faster than the mechanical motor dynamics)
The main input signals of this entity are the desired joint torques, the estimated joint torques, and the estimated joint velocities. The main output signal contains the desired motor currents.
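Under this model the desired current is an affine function of the desired torque and the measured joint velocity; a minimal sketch (the coefficient names are placeholders, not the entity's parameter names):

```python
import numpy as np

def desired_current(tau_des, dq, k_tau, k_v, k_f):
    """Linear motor/friction model: torque term + viscous friction + Coulomb friction."""
    tau_des, dq = np.asarray(tau_des), np.asarray(dq)
    return k_tau * tau_des + k_v * dq + k_f * np.sign(dq)
```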
This entity implements a balance control for the robot based on inverse dynamics. Currently the controller has three tasks:
- Center of Mass (CoM) trajectory tracking
- Joint posture trajectory tracking
- Minimization of tangential forces and moments at the feet
Each task has a weight that determines its (soft) priority with respect to the others. The controller also takes care of having contact forces inside the friction cones and satisfying the torque limits.
The main input signals of this entity are:
- `q`: robot configuration (including the floating base)
- `v`: robot velocities (including the floating base)
- CoM reference trajectory (along with its first two time derivatives)
- joint reference trajectory (along with its first two time derivatives)
Its main output signal contains the desired joint torques.
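Schematically, the desired torques are obtained by solving, at each control cycle, a weighted optimization problem of the following form (a generic sketch, not the exact cost functions and constraints of the implementation):

```latex
\begin{aligned}
\min_{\ddot{q},\,f,\,\tau}\quad
  & w_{com}\,\| J_{com}\ddot{q} + \dot{J}_{com}\dot{q} - \ddot{c}^{ref} \|^2
  + w_{post}\,\| \ddot{q}_j - \ddot{q}_j^{ref} \|^2
  + w_{force}\,\| f_{tan} \|^2 \\
\text{s.t.}\quad
  & M(q)\,\ddot{q} + h(q,\dot{q}) = S^{\top}\tau + J_c^{\top} f
    \quad\text{(equations of motion)} \\
  & f \in \mathcal{K} \quad\text{(friction cones)}, \qquad
    |\tau| \le \tau_{max} \quad\text{(torque limits)}
\end{aligned}
```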
The main role of this entity is to estimate the joint torques. To do that, it uses the dynamic model of the robot, together with an estimation of the joint positions, velocities, accelerations, and the external contact wrenches (i.e. forces and moments). The joint velocities and accelerations are estimated using a derivative filter, while the contact wrenches are measured by the robot 6-axis F/T sensors.
One of the main parameters of this entity is the estimation delay, which is specified when calling the `init` command. A larger delay results in smoother estimates.
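Schematically, the torque estimate combines the rigid-body dynamics with the measured contact wrenches (a generic relation meant only to illustrate the idea; the actual estimator may differ):

```latex
\hat{\tau} \;=\; M(q)\,\hat{\ddot{q}} \;+\; h(q,\hat{\dot{q}}) \;-\; \sum_{k} J_{c,k}^{\top}(q)\,\hat{w}_{k}
```

where \(M\) is the mass matrix, \(h\) collects the Coriolis, centrifugal and gravity terms, \(\hat{w}_{k}\) are the contact wrenches measured by the F/T sensors, and \(\hat{\dot{q}}, \hat{\ddot{q}}\) are the filtered velocity and acceleration estimates.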
This general-purpose entity can generate arbitrary N-dimensional trajectories. For instance, in the figure above it is used to generate the 3D CoM trajectory. Currently this entity has the following commands to generate different kinds of trajectories (but more can be added):
- `move`: minimum-jerk point-to-point motion
- `startSinusoid`: sinusoid
- `startTriangle`: triangular trajectory
- `startConstAcc`: piecewise-constant acceleration
- `startLinChirp`: chirp (i.e. a sinusoid with linearly varying frequency)
The entity also provides the first two derivatives of the generated trajectory.
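A hedged usage example from the python terminal (the argument lists are assumptions; use `help(traj_gen.move)` and friends for the real signatures):

```python
# move component 1 to the value 0.05 with a minimum-jerk profile lasting 2.0 seconds
traj_gen.move(1, 0.05, 2.0)

# start a sinusoid on component 1 (amplitude and timing arguments are assumed)
traj_gen.startSinusoid(1, 0.05, 1.5)

# start a chirp on component 1 (final value, frequency range, duration: assumed order)
traj_gen.startLinChirp(1, 0.05, 0.2, 2.0, 10.0)
```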
This entity is a version of NdTrajectoryGenerator dedicated to the joint trajectories. The main differences with respect to NdTrajectoryGenerator are:
- The joints are identified by their short name rather than by index; joint short names for HRP-2 are defined in the include file hrp2_common.hh, e.g. 'rsp'=right shoulder pitch, 'lk'=left knee.
- The joint limits are taken into account so as to avoid generating a trajectory that leads the joint to a collision with the limit.
- The generated trajectories start from the current joint positions.
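For example (the entity handle and the numeric arguments are assumptions; the joint short names come from hrp2_common.hh):

```python
# 'joint_traj_gen' is an assumed handle to the JointTrajectoryGenerator entity
joint_traj_gen.move('rsp', -0.5, 3.0)         # right shoulder pitch to -0.5 rad in 3 seconds
joint_traj_gen.startSinusoid('lk', 0.6, 2.0)  # sinusoid on the left knee (arguments assumed)
```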
This entity computes the pose and velocity of the free flyer (i.e. the robot's floating base), assuming that the pose of the feet is known. In particular, for each leg, the forward kinematics is used to compute the pose of the base; the two estimates (one from the left foot and one from the right foot) are then averaged. Note that this computation assumes a rigid kinematic structure (which is not the case for HRP-2). Similarly, the base velocity is computed from the joint velocities assuming that the feet do not move, averaging the two estimates obtained from the two feet.
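As an illustration of the averaging step, the sketch below averages two base-pose estimates by taking the mean of the translations and the midpoint of the rotations via slerp; this is only one reasonable way to do it and is not taken from the entity's source code.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def average_base_pose(p_left, R_left, p_right, R_right):
    """Average the base poses estimated from the left and right foot.

    p_*: 3d translations (arrays), R_*: scipy Rotation objects.
    """
    p = 0.5 * (np.asarray(p_left) + np.asarray(p_right))
    # midpoint rotation: spherical linear interpolation between the two orientation estimates
    rotations = Rotation.from_quat(np.vstack([R_left.as_quat(), R_right.as_quat()]))
    R = Slerp([0.0, 1.0], rotations)(0.5)
    return p, R
```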
The output signals of this entity are:
- `base6dFromFoot_encoders`: the robot positions, starting with the free-flyer pose (with the orientation represented as roll-pitch-yaw), followed by the joint positions
- `freeflyer_aa`: the free-flyer pose, with the orientation represented using the angle-axis convention
- `v`: the robot velocities, starting with the base velocity, followed by the joint velocities
This entity belongs to the repository sot-state-observation, and it implements an Extended Kalman Filter dedicated to the estimation of the state of the flexibility in HRP-2's feet.
This entity belongs to the repository sot-state-observation. It takes a frame as input and gives as output the same frame after applying the transformation due to the flexibility. For instance, in the figure above, it takes as input the floating-base frame (as computed by FreeFlyerLocator, which neglects the flexibility in HRP-2's feet) and it gives as output the floating-base frame accounting for the deformation due to the flexibility. It does the same for the velocity.