Creating Defined Robotics

Around 8 years ago, in 2018, I was first introduced to the world of robotics as part of a lab course at TU Berlin. I got to know what ROS is, how to work with Gazebo simulation, and how to write software that makes robots do what they should: navigate, plan paths, explore their surroundings and so on. For someone with an electronics and embedded background, it had a steep learning curve. But I fell in love with the idea so much that I went on to do my master's thesis on multi-robot SLAM at Fraunhofer, and Google Summer of Code on Nav2 at JdeRobot.

However, I kept running into the same two realisations again and again:

  • The chasm between how things work in simulation and on real-life robots
  • The high bar of entry for a maker, or even a software engineer, before they can put robots to work: not just in skills, but also in cost

ROS2 has a strong ecosystem. It has grown from a research-focussed framework into a reliable system that you can use to power commercial robots. Many companies actively use Nav2 in their production code. Many others use ROS-like abstractions and middlewares so they can still reap the benefits of the ecosystem and of open source without compromising their custom applications. Contributions to new and existing packages keep growing. But still, it is not easy for a beginner to just start writing a custom robotics application. And if you want to build reliable physical applications, you need to buy quality hardware, which can easily cost upwards of 1000 euros.

That was my inspiration for building a framework, which I call Defined, that makes building robotic applications easy: instead of spending your time wrestling with ROS2 and fiddling with Gazebo parameters, you spend it building scenarios that solve your problems.

Interesting… but why?

Let’s take an example so we can compare with how things work right now. Say you own a small grocery store that needs periodic checks to make sure the aisles are well stocked. You could do this manually, but you want to automate it. What are your options today?

  • You spend a lot of money on really expensive shelf-scanning robots, such as Tally. The vendors will probably also charge you operating fees on top of the hardware. For a small-to-medium shop, you can forget it.
  • You build your own scanning robot. You can buy a TurtleBot, which has excellent ROS support, and leverage ROS and TurtleBot libraries for navigation, charging, the camera and so on. But you still have to set everything up yourself, and then write your own application logic on top: how many rounds per day, where to scan, where to report updates and so on. And what happens if you later want to add a system that restocks the shelves as well?

This was just one example, but you can probably see that a lot of individuals and companies are reinventing the wheel, when that time could instead go into improving application logic. Just as an Android or iOS app developer doesn’t (for the most part) have to worry about how memory allocation works on the OS side, just as a kernel developer reuses the Kconfig language (see “Kconfig Language” in the Linux Kernel documentation) instead of writing their own configuration system, and just as a desktop developer pulls in preexisting modules to write their Qt GUI app, wouldn’t you like to do the same for a robot?

But robots are already here

There is a crop of new (and also old) companies in the robotics space. Humanoids, robot dogs, drones: you can buy all of these in various sizes and shapes, from big and small companies alike. But they all do their own thing in their own bubble. Who makes sure that the application you wrote for a Unitree AS2 robot dog will work the same on a Boston Dynamics Spot? And what if you just want to add a custom sensor that the vendor doesn’t offer?

Ah! But what about AI?

In 2026, we must talk about AI if we want to do anything in robotics. AI is here to stay and, contrary to my initial observations, it is quite helpful in a lot of fields.

From a robotics-AI standpoint, the world’s best companies are trying to build vision-language models (see “What are Vision-Language Models?” in the NVIDIA Glossary) with huge compute and varying degrees of success. You can even use the LLMs available right now to run ROS robots, and even to fly a model plane (“Can Claude Fly a Plane?”).

We can split this problem into three parts.

  1. We want to control all those tiny joints and motors, monitor obstacles, and make sub-100 ms decisions to protect humans.
  2. Then we need to tackle slightly bigger tasks, such as planning a path from point A to point B, following a person, pulling on a door handle and so on.
  3. Finally, we have the jobs that actually make robots useful. For example: open all the windows in a house for 5 minutes every day (for those unaware of this German ritual, see “What to Know About Lüften, the German Practice of Airing Out Your Home Year Round”).

In my opinion, points 1 and 2 demand a high degree of determinism, which LLMs and VLMs lack (“Vision Language Models Cannot Reason About Physical Transformation”). But they could be incredibly useful for point 3, if we can make sure the other two are deterministic enough. That is what Defined Robotics focuses on. Moreover, I am convinced that if we make something easier for humans to onboard onto, it will be equally easy for AI to understand and navigate.
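To make the three-part split above concrete, here is a minimal, hypothetical Python sketch. None of these names or functions come from Defined Robotics or any real library; everything is illustrative. The point is the shape: layers 1 and 2 are plain deterministic functions, while layer 3 accepts a suggestion from an LLM but only lets it pick from a whitelist of deterministic skills.

```python
# Hypothetical sketch of the three-layer split: deterministic control
# and planning below a non-deterministic, LLM-driven job layer.
from dataclasses import dataclass


@dataclass(frozen=True)
class Pose:
    x: float
    y: float


# Layer 1: real-time control/safety. Must be deterministic and fast;
# never put a model in this loop.
def safe_velocity(requested: float, obstacle_distance_m: float) -> float:
    """Clamp the requested velocity based on obstacle distance."""
    if obstacle_distance_m < 0.5:
        return 0.0                      # hard stop near obstacles
    return min(requested, 1.0)          # cap at 1 m/s


# Layer 2: planning. Deterministic given the same map and goal.
# A trivial straight-line "planner" stands in for Nav2 here.
def plan_path(start: Pose, goal: Pose) -> list:
    midpoint = Pose((start.x + goal.x) / 2, (start.y + goal.y) / 2)
    return [start, midpoint, goal]


# Layer 3: jobs. The only place an LLM/VLM gets a say, and even then
# its output is constrained to known, deterministic skills.
SKILLS = {
    "open_window": lambda: "window opened",
    "close_window": lambda: "window closed",
}


def run_job(llm_suggestion: str) -> str:
    if llm_suggestion not in SKILLS:    # reject anything unrecognised
        return "refused: unknown skill"
    return SKILLS[llm_suggestion]()
```

The design choice this illustrates: the model can propose *what* to do, but *how* it gets done, and whether it is allowed at all, stays in deterministic code.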

So, how does Defined Robotics actually solve this?

That’s a great question, which I would like to answer in a separate blog post, together with an open-source release of the whole stack. Until then, stay tuned :)