Changes to Robots – How the New Framework Addresses Autonomous Systems – Part 4
23 Oct 2025
Responsible Robotics – Who Owns the Risk When Machines Change?
This is the final blog in a four-part series exploring how the EU’s evolving regulatory framework is reshaping the way we design, deploy, and manage robotics and autonomous systems.
One area that’s often underestimated is what happens after a machine has been delivered, installed, and put into use. In reality, that’s often where the most interesting changes happen, particularly with autonomous systems and robotics. Whether it's tweaking parameters, swapping out software, or integrating the robot into a more complex environment or system, changes are common. But with the new EU Machinery Regulation, they now come with real responsibility.
What I find important and a bit challenging is that any significant post-delivery change that affects safety, function, or behaviour could reassign the role of “manufacturer.”
For example, if you modify an intelligent robot, add AI-driven decision-making, or retrain a vision model that impacts how it moves or reacts, then you might be the one responsible for compliance.
This is a big shift. In many industries, system integrators, solution providers, or even end users make changes routinely. But now there is a clear expectation that if those changes alter safety-related aspects, the full set of obligations applies: an updated risk assessment, revised technical documentation, and possibly third-party conformity assessment all over again.
It gets even trickier with autonomous systems. These machines may evolve on their own, either by design (learning from data) or through updates pushed remotely. How do we define when a machine has changed “enough” to trigger re-compliance? It will be hard to manage without clearer standards or digital tools to track versioning and behaviour over time.
From a safety compliance point of view, this introduces new headaches. We need better approaches to monitor what has been changed, how behaviour has shifted, and whether safety controls still apply under the new conditions. It’s not just mechanical changes anymore. It is software, data inputs, cloud logic, and remote tuning. That’s a lot to stay on top of.
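One way to make that monitoring concrete is a versioned "compliance manifest": record the certified baseline of every safety-relevant component, and flag any post-delivery change that touches one of them. The sketch below is purely illustrative, not something the regulation prescribes; the component names, the `SAFETY_RELEVANT` set, and the `Manifest` structure are all assumptions for the example.

```python
from dataclasses import dataclass

# Hypothetical example: components whose change would be considered
# safety-relevant for this machine (illustrative names only).
SAFETY_RELEVANT = {"vision_model", "max_speed_limit", "safety_plc_firmware"}

@dataclass
class Manifest:
    """Maps component name -> version string for one machine state."""
    versions: dict

def changed_components(baseline: Manifest, current: Manifest) -> set:
    """Return the components whose versions differ from the certified baseline."""
    return {
        name for name, version in current.versions.items()
        if baseline.versions.get(name) != version
    }

def requires_reassessment(baseline: Manifest, current: Manifest) -> bool:
    """Flag a new conformity assessment if any safety-relevant component
    changed after delivery (a deliberately simple trigger rule)."""
    return bool(changed_components(baseline, current) & SAFETY_RELEVANT)

baseline = Manifest({"vision_model": "v1.0", "max_speed_limit": "1.5", "hmi": "2.3"})
current = Manifest({"vision_model": "v2.0", "max_speed_limit": "1.5", "hmi": "2.4"})
print(requires_reassessment(baseline, current))  # retrained vision model -> True
```

In practice the hard part is deciding what belongs in the safety-relevant set, and keeping the manifest honest when updates are pushed remotely, but even a simple trigger rule like this makes "who changed what, and does it matter?" an auditable question rather than a guess.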
And let’s not forget the human side. I’ve seen many skilled teams modify systems to improve performance without fully considering how those changes interact with compliance. The risk here is silent: everything still works until it doesn’t. The new regulation is trying to close that gap before something goes wrong.
To me, this reinforces the need for strong communication between OEMs, integrators, and operators. Everyone touching the system must understand the boundaries: what is allowed, what needs to be re-evaluated, and who carries responsibility for the risk assessment. If those boundaries are not clear, a project might look compliant on paper but fall short in practice.
In short, we cannot treat robotics and AI like static tools anymore. They move, learn, and adapt, and so does the responsibility. If you change the system, you need to own the outcome.