Creating Defined Robotics, continued

In Creating Defined Robotics, I explained the “Why”. Now, in this post, let’s explain the “How”.

Our goal is to create an architecture that abstracts away the complexity of ROS, Gazebo, Linux, and eventually hardware, while still providing a framework that can leverage all of it, so the user gets a seamless experience. Here I am talking about someone who might not even know what a WebSocket is but still wants to build a robot that acts as a free-moving CCTV camera.

If you have seen The Big Bang Theory, you have already seen this.

Breaking down the problem

Say you want to actually implement such a device: a robot that goes around the house and takes pictures and videos every 5 hours.

How would you describe this problem as pseudocode?

  1. start
  2. understand the current situation (where am I, what time is it, what are my surroundings)
  3. navigate to location A
  4. take a picture
  5. navigate to location B
  6. …
  7. navigate to last location F
  8. go back to dock
  9. sleep until the next 5-hour mark
  10. repeat

Of course, I am ignoring startup and configuration here, but we will come to that later. You can already see that, on the surface, it doesn’t look so different from the pseudocode of an ordinary program.
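As a quick sanity check, the patrol steps above map almost one-to-one onto ordinary code. Here is a sketch in Python; the FakeRobot class and its method names are hypothetical stand-ins of mine, not the actual framework API:

```python
class FakeRobot:
    """Stand-in robot so the plan can be dry-run; method names are hypothetical."""
    def __init__(self):
        self.log = []

    def localize(self):
        self.log.append("localize")

    def navigate_to(self, where):
        self.log.append(f"goto:{where}")

    def take_picture(self):
        self.log.append("snap")


PATROL_POINTS = ["A", "B", "C", "D", "E", "F"]  # named waypoints from the plan

def run_patrol(robot):
    robot.localize()                 # step 2: understand the current situation
    for point in PATROL_POINTS:      # steps 3-7: visit each location, photograph it
        robot.navigate_to(point)
        robot.take_picture()
    robot.navigate_to("dock")        # step 8: return to the dock


robot = FakeRobot()
run_patrol(robot)
print(robot.log[0], robot.log[-1])  # localize goto:dock
```

The outer "sleep until the next 5-hour mark, then repeat" loop is left out, since that part is just scheduling.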

Robot Description Framework (RDF)

As the name suggests, the file rdf.yml describes your robot: not (only) in terms of physical features, but in terms of capabilities. (Side note: we might rename this in the future so it is not confused with other similarly named projects.)

A simple RDF looks like this (from the starter repo: botcrate/robots/minimal.rdf.yaml at trunk · Defined-Robotics/botcrate):

# Minimal robot — differential drive only (no LiDAR, no camera)
# This robot demonstrates capability gating: tasks requiring lidar_2d
# or rgb_camera will be rejected at compile time.
name: minimal_bot
version: "0.0.1"
description: >
  Bare-minimum differential-drive robot with no sensors.
  Use this to see how capability gating works — try running
  patrol (needs lidar_2d) and watch it fail with a clear error.
 
modules:
  - name: drive_base
    type: motor_controller
    capabilities:
      - name: differential_drive
        type: differential_drive
        parameters:
          wheel_separation: 0.160
          wheel_radius: 0.033
          max_linear_velocity: 0.22
          max_angular_velocity: 2.84

As you can see, this robot has a simple diff drive with parameters derived from the robot’s motors and physics. An advanced robot might have a camera, a LiDAR, or 4WD; you can define all of that here. Not only that, but you can also include a complete physical description of where your components sit: rdf/examples/defined_mvp.physical.yaml at trunk · Defined-Robotics/rdf
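To make the structure concrete, here is a small sketch of how such a file could be read programmatically. The field names come from the snippet above; the helper function is my own illustration, not the project's actual loader:

```python
import yaml  # pip install pyyaml

# A trimmed copy of the minimal.rdf.yaml snippet above.
RDF = """\
name: minimal_bot
version: "0.0.1"
modules:
  - name: drive_base
    type: motor_controller
    capabilities:
      - name: differential_drive
        type: differential_drive
        parameters:
          wheel_separation: 0.160
          wheel_radius: 0.033
"""

def declared_capabilities(rdf_text):
    """Flatten every capability type declared across all modules."""
    doc = yaml.safe_load(rdf_text)
    return {
        cap["type"]
        for module in doc.get("modules", [])
        for cap in module.get("capabilities", [])
    }

print(declared_capabilities(RDF))  # {'differential_drive'}
```

That flattened set is all the later capability-gating step needs to know about the robot.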

For those who have worked with the Kconfig language (see “Kconfig Language” in The Linux Kernel documentation) before, this might feel like déjà vu.

Verbs

In the Defined Robotics stack, verbs are the basic entity: they determine how something must happen. Let’s take the verb explore as an example. This is how it is defined:

# Verb definition: explore
# Autonomously explore and map the environment using frontier exploration
name: explore
description: "Autonomously explore and map the environment using frontier exploration"
required_capabilities:
  - differential_drive
  - lidar_2d
template: explore.xml.j2
primitives:
  - navigate_to_pose
parameters:
  timeout:
    type: float
    required: false
    default: 300.0
    description: "Max exploration time in seconds"
 
# Layer 3: Software dependencies
dependencies:
  ...
 
# Layer 4: Runtime configuration
runtime:
  ...
 
# Layer 5: Interface contract
interface:
  ...

We have our name and description. Then we have capabilities, which describe what physical capabilities are necessary for this verb to work. As you can see, explore requires at least a 2D LiDAR, which means our minimal robot will not suffice for this task. You can define more items necessary for this verb to function, for example whether it depends on any particular software package (ROS or otherwise) or which interfaces it provides for interaction.
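At its core, this capability gating boils down to a set-containment check between the verb's required_capabilities and what the robot's RDF declares. The sketch below is my own simplification of that idea, not the stack's actual implementation:

```python
# Capabilities taken from the two YAML snippets above.
VERB_EXPLORE = {
    "name": "explore",
    "required_capabilities": ["differential_drive", "lidar_2d"],
}
MINIMAL_BOT_CAPS = {"differential_drive"}  # minimal_bot has no LiDAR

def gate(verb, robot_caps):
    """Raise with a clear error if the robot lacks a required capability."""
    missing = [c for c in verb["required_capabilities"] if c not in robot_caps]
    if missing:
        raise ValueError(
            f"robot cannot run '{verb['name']}': missing capabilities {missing}"
        )

try:
    gate(VERB_EXPLORE, MINIMAL_BOT_CAPS)
except ValueError as err:
    print(err)  # robot cannot run 'explore': missing capabilities ['lidar_2d']
```

Because this check runs before anything is launched, a mismatch fails fast with a readable message instead of a half-started mission.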

Going back to the moving CCTV camera, we can now describe the problem in just the simple verbs we define and their parameters.

<robot A>
{explore}
{go_to} [point A]
{take_picture}
...
{go_to} [dock]
{sleep} [5hr]
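One possible way to encode such a plan as data, so it can be checked before anything runs (this structure is a hypothetical sketch of mine, not the project's actual mission format):

```python
# The CCTV mission as a list of (verb, parameters) pairs, mirroring the plan above.
MISSION = [
    ("explore", {}),
    ("go_to", {"target": "A"}),
    ("take_picture", {}),
    ("go_to", {"target": "dock"}),
    ("sleep", {"duration_s": 5 * 3600}),
]

def unknown_verbs(mission, known_verbs):
    """Return every verb in the plan that the stack does not define."""
    return [verb for verb, _params in mission if verb not in known_verbs]

print(unknown_verbs(MISSION, {"explore", "go_to", "take_picture", "sleep"}))  # []
```

An empty result means every verb in the plan is defined; each verb can then be gated against the robot's capabilities as shown earlier.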

CLI / TUI

How do we transform this plan into something that works? For that we use Defined-Robotics/defined-cli: the Defined Robotics platform orchestrator. It is a rich TUI application that can launch, run, monitor, control, and report on the robot, all in one.

TUI

The UI is still a work in progress, but you can see how it gives you mission control for your robot. You can start and stop missions, see whether the robot is completing them as expected, send new goals, etc. Before reaching this point, however, the CLI, with the help of other modules, validates the robot’s capabilities, gates each verb against them, and runs diagnostics on whether the expected mission actually ran.

The result is something like the following:

Robot Exploring

Shoulders of Giants

You might have seen this XKCD comic before:

Right. We don’t want to reinvent the wheel here. For now I am using ROS 2, the amazing Nav2, and robo-friends/m-explore-ros2 (an Explore_lite port to ROS 2) in this demo. For the sim, I build a ROS 2 image based on our config and then run a container. And, because it is easier on a Mac, I use Foxglove Studio to see the output of our sim.

But the system is intentionally designed so that, if you want, you can swap out the ROS 2 middleware for something else. You can have your own custom binary running your own exploration algorithm, and it will still work with the framework.
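To show what such a swap could look like, here is a minimal sketch of a middleware-agnostic backend interface. The class and method names are my own assumptions for illustration, not the framework's real API:

```python
from abc import ABC, abstractmethod

class MiddlewareBackend(ABC):
    """The framework would talk only to this interface; ROS 2, a custom
    binary, or a simulator could each sit behind it."""

    @abstractmethod
    def send_goal(self, pose):
        """Dispatch a navigation goal; returns an acceptance status."""

    @abstractmethod
    def cancel(self):
        """Abort any in-flight goals."""


class InMemoryBackend(MiddlewareBackend):
    """Trivial stand-in backend, used here only to show the swap works."""
    def __init__(self):
        self.goals = []

    def send_goal(self, pose):
        self.goals.append(pose)
        return "accepted"

    def cancel(self):
        self.goals.clear()


backend = InMemoryBackend()
print(backend.send_goal((1.0, 2.0, 0.0)))  # accepted
```

Swapping middleware then means writing one adapter class, while the verbs, RDF, and CLI stay untouched.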

All the code that makes this possible can be found in different repos under Defined Robotics, under the Apache 2.0 license. Each repo has its own structure, purpose, documentation, unit and integration tests, and CI/CD workflows. Most of the code is in Python, with some in C++. If you want to dive deeper into how the code is written, take a look at the Architecture Overview.

What’s next

As I said, this is a work in progress. I am trying to make it robust enough that you can start using it to create different scenarios, algorithms, etc. But for that, I want your help.

If you are interested in testing it, please give it a try. You can clone the starter repo (Defined-Robotics/botcrate: Your first robot, defined in YAML. Clone → edit → run.) and give me feedback on what works. If at any point you encounter bugs, please raise an issue on the GitHub repos. On the other hand, if you want to build more verbs or create your own custom robots, please check out the Defined Robotics Docs and send me a pull request. If you run into any difficulties or have questions, feel free to ping me.

The next step for me is to create equally capable and modular hardware that can actually perform these tasks. If you have some bright ideas and want to shape Defined Robotics, ping me on LinkedIn.