
EU AI Act and Machinery Regulation: Integrator Liability Guide

System integrators deploying AI-enabled robotic cells face a dual regulatory reality. You must simultaneously satisfy the mechanical safety requirements of the Machinery Regulation (EU) 2023/1230 and the algorithmic traceability mandates of the EU AI Act. This guide details the exact technical documentation, liability frameworks, and per-prediction reliability proofs you need to secure before your client signs the Site Acceptance Test (SAT). At TrustalAI, we provide the reliability layer that helps you meet these exact requirements.
Machinery Regulation and EU AI Act: Two regulations, one liability framework
An integrator deploying an AI-enabled robotic cell must simultaneously satisfy the mechanical safety requirements of the Machinery Regulation (EU) 2023/1230 and the algorithmic traceability requirements of Regulation (EU) 2024/1689 (the EU AI Act). Neither text replaces the other; they form a cumulative compliance framework. As highlighted by the European Business Review, preparatory obligations under these overlapping frameworks are already in force today, requiring immediate action from engineering teams.
What the Machinery Regulation (EU) 2023/1230 covers
The Machinery Regulation (EU) 2023/1230 imposes a strict obligation of result on the safety of the delivered machine as a whole. As soon as a robotic cell integrates an AI module whose probabilistic behavior cannot be proven reliable, the integrator bears full legal liability, including for perception errors in an AI module they did not develop.
Consider a concrete example: a vision-guided robotic picking cell where the AI module produces an undetected misclassification leading to a high-speed collision. Because the integrator certified the system as a whole, they are liable for the resulting damage. As noted by Robotics & Automation News, Regulation (EU) 2023/1230 marks a fundamental shift from a "freeze" approach to "continuous assurance". Qualification does not stop at delivery: the Machinery Regulation demands that the machine remain safe throughout its operational lifecycle.
What the EU AI Act adds to the compliance equation
The Machinery Regulation validates the mechanical and electrical safety of the installation. The EU AI Act adds a separate layer: documentation of the AI component's own reliability, prediction by prediction, in real time. Where the Machinery Regulation requires the machine to stop on hardware failure, the EU AI Act requires understanding why the artificial intelligence model made a specific decision under a given environmental variation, such as a lighting change, an out-of-distribution part, tool wear, or model drift.
Concretely, for Annex III high-risk systems, the law mandates specific technical requirements (a sketch of how these map onto a per-inference log record follows the list):
Automatic logs per inference with a confidence score (Art. 12)
Documented and activatable human oversight mechanism (Art. 14)
Continuous risk management throughout the lifecycle (Art. 9)
Technical documentation per real deployment context (Art. 11 + Annex IV)
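Taken together, these obligations converge on one artifact: a structured record emitted for every single inference. Below is a minimal sketch of what such a record could look like, assuming illustrative field names; Art. 12 mandates automatic event logging but does not prescribe a schema, so treat this as one possible shape, not a normative format.

```python
# Illustrative per-inference log record; field names are assumptions,
# not a schema mandated by the EU AI Act.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class InferenceLogRecord:
    timestamp_utc: str         # when the prediction was made (Art. 12)
    model_id: str              # model name and version under assessment
    input_sha256: str          # hash of the input frame, for decision reconstruction
    prediction: str            # predicted class or pick decision
    confidence: float          # score at inference time (Art. 12 / Art. 13)
    oversight_triggered: bool  # True if routed to a human operator (Art. 14)

def append_record(record: InferenceLogRecord, path: str = "inference.log") -> None:
    """Append one JSON line per prediction to an append-only log file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

append_record(InferenceLogRecord(
    timestamp_utc=datetime.now(timezone.utc).isoformat(),
    model_id="picker-vision-v3.2",       # hypothetical model identifier
    input_sha256="e3b0c44298fc1c14...",  # truncated for readability
    prediction="part_ok",
    confidence=0.97,
    oversight_triggered=False,
))
```

An append-only, one-line-per-prediction format keeps the log trivially auditable and easy to hand to a market surveillance authority as-is.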
This algorithmic traceability layer is structurally absent from most integrators' toolchains today, and the Machinery Regulation alone does not require it.
The integrator as "deployer" under both frameworks
An integrator deploying a high-risk AI system is considered a deployer under Regulation (EU) 2024/1689. They bear their own documentation obligations, independent of the AI supplier, and cannot transfer this responsibility by contract.
The deployer status attaches to the act of deployment in the client's environment, not to the commercial agreement with the AI supplier. Deployer obligations are set out in Article 26; under Article 25, an integrator who substantially modifies the system or markets it under their own name additionally assumes the provider obligations of Articles 16 and 17. Article 26 explicitly requires retaining the automatically generated logs for an appropriate duration to allow post-market control. As referenced by Datenschutz-Notizen, the logging infrastructure required of high-risk system deployers must be robust, immutable, and immediately accessible for conformity assessment.
What the integrator must prove before sign-off
Before the acceptance of a robotic cell integrating AI, the integrator must prove: (1) that the AI system has been evaluated in its real deployment context (Art. 9), (2) that every AI decision is traced with its exact confidence level at the moment of inference (Art. 12), (3) that a documented human oversight mechanism is activatable if confidence falls below a defined threshold (Art. 14), and (4) that this documentation is maintained continuously, not only produced at delivery.
Per-prediction traceability: The most overlooked obligation
Article 12 requires that every decision made by the artificial intelligence be traced with its exact confidence score at inference time. This creates a continuous reliability record, not just a standard error log. As stated by The Recursive: "If you can't reconstruct a decision, you can't ship." TrustalAI automatically generates the timestamped per-prediction logs required by Art. 12 in real time, processing each entry in under 100 ms, so compliance does not slow down your production cycle.
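TrustalAI's internals are proprietary, so the snippet below is only a generic sketch of the pattern the article describes: wrap the existing predict call, timestamp it, and append one log line per inference, without modifying the model itself. All names are illustrative assumptions.

```python
# Generic per-prediction traceability wrapper around an unmodified
# ("black-box") model. A sketch of the pattern, not TrustalAI's API.
import json
import time
from datetime import datetime, timezone
from typing import Callable

Predictor = Callable[[bytes], tuple[str, float]]  # frame -> (label, confidence)

def traced(predict: Predictor, log_path: str = "inference.log") -> Predictor:
    def wrapper(frame: bytes) -> tuple[str, float]:
        t0 = time.perf_counter()
        label, confidence = predict(frame)           # the model is untouched
        latency_ms = (time.perf_counter() - t0) * 1000.0
        with open(log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps({
                "timestamp_utc": datetime.now(timezone.utc).isoformat(),
                "label": label,
                "confidence": confidence,            # score at inference time (Art. 12)
                "latency_ms": round(latency_ms, 2),  # check against the cycle-time budget
            }) + "\n")
        return label, confidence
    return wrapper
```

Because the wrapper only observes inputs and outputs, it can sit in front of a third-party vision module without changing the supplier's code.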
Human oversight: Proven, not declared
Article 14 demands a documented AND activatable escalation process. Simply stating that "an operator monitors the machine" is insufficient under the new regulation. According to DevDiscourse, fully automated AI decisions without reliability proof are illegal. TrustalAI provides precise confidence thresholds for automatic human oversight escalation per Art. 14 requirements, allowing the system to hand over control to a human operator when the model's certainty drops below the acceptable safety margin.
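A minimal sketch of such an escalation gate follows; the threshold value and the notify_operator hand-off are illustrative assumptions, and real thresholds must come from your documented risk analysis.

```python
# Sketch of an Art. 14-style escalation gate. Threshold and hand-off
# mechanics are illustrative assumptions, not regulatory constants.
CONFIDENCE_THRESHOLD = 0.85  # example value; derive yours from validation data

def notify_operator(label: str, confidence: float) -> None:
    """Hypothetical hand-off: in production this would halt the automated
    action and queue the frame for a human operator's decision."""
    print(f"ESCALATION: '{label}' at confidence {confidence:.2f} needs human review")

def dispatch(label: str, confidence: float) -> str:
    """Route one prediction: automated action, or documented human oversight."""
    if confidence < CONFIDENCE_THRESHOLD:
        notify_operator(label, confidence)  # activatable oversight (Art. 14)
        return "escalated_to_human"
    return "automated"

assert dispatch("part_ok", 0.97) == "automated"
assert dispatch("part_ok", 0.62) == "escalated_to_human"
```

The point of expressing the gate in code is that it is provably activatable: the threshold, the trigger, and the hand-off are inspectable artifacts rather than a declaration in a manual.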
Continuous risk management: Not a delivery report
Article 9 mandates iterative risk management throughout the product lifecycle. A classic mistake is treating the conformity assessment as a one-time delivery report that is then closed. The EU AI Act requires active post-deployment monitoring. TrustalAI detects model drift and out-of-distribution situations without requiring model modification, which means your continuous risk management obligations are met automatically as the operational environment changes.
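One common model-agnostic drift signal, sketched here as an assumption rather than a description of TrustalAI's method, is to compare the mean confidence over a recent window against a baseline established at commissioning:

```python
# Model-agnostic drift check: no access to weights or training data needed,
# only the per-prediction confidence stream. Window and tolerance values
# are illustrative assumptions.
from collections import deque

class ConfidenceDriftMonitor:
    def __init__(self, baseline_mean: float, window: int = 500,
                 tolerance: float = 0.05):
        self.baseline = baseline_mean       # measured during commissioning/SAT
        self.scores = deque(maxlen=window)  # sliding window of recent confidences
        self.tolerance = tolerance          # allowed drop before flagging drift

    def observe(self, confidence: float) -> bool:
        """Record one score; return True once drift should be flagged."""
        self.scores.append(confidence)
        if len(self.scores) < self.scores.maxlen:
            return False                    # window not yet full
        mean = sum(self.scores) / len(self.scores)
        return mean < self.baseline - self.tolerance

monitor = ConfidenceDriftMonitor(baseline_mean=0.95)
drifting = monitor.observe(0.91)  # feed every per-prediction confidence score
```

A sustained drop in mean confidence is exactly the kind of lighting-change, tool-wear, or out-of-distribution signal that Article 9's lifecycle risk management is meant to catch.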
The contractual risk integrators fail to anticipate
Legal risk from AI deployment is systematically underestimated because integrators wrongly assume liability belongs exclusively to the AI supplier. On the ground, this misunderstanding creates critical situations at production launch. The core threat is operational: an accident on a delivered system, or a client refusing to sign the Site Acceptance Test (SAT). Both scenarios have direct, measurable financial consequences for project profitability and the relationship with the end client.
When the client refuses the SAT
At the Site Acceptance Test (SAT), the client demands proof that the AI module embedded in the robotic cell is reliable, not just that the machine is mechanically safe. Without per-prediction reliability logs and documented supervision thresholds, the integrator cannot respond to this legitimate request.
The immediate consequences are severe:
Blocked acceptance and last payment milestone frozen (typically 20 to 30% of total project value)
Activation of contractual delay penalties
Client relationship severely damaged
Potential performance bond at risk
This refusal scenario is becoming increasingly common as industrial clients become aware of their own EU AI Act obligations as end users of high-risk AI systems. They require hard data to validate the safety of the models they are purchasing.
Compliance does not transfer by contract
Even if the AI supplier certifies their module and provides complete technical documentation, the integrator who deploys it in a robotic cell carries their own deployer obligations under Article 26 of Regulation (EU) 2024/1689, and can assume the provider obligations of Articles 16 and 17 if they substantially modify the system. A subcontracting clause or a liability indemnification agreement does not transfer regulatory obligations; they remain attached to the act of deployment and commissioning in the client's final environment.
Compliance is demonstrated through rigorous documentation of the system's behavior in its real deployment context, not through a commercial agreement. You deliver a machine that knows when it doesn't know, and you can prove it to your client.
What TrustalAI produces for the acceptance file
To successfully pass the SAT and satisfy market surveillance authorities, integrators must provide specific technical documentation. TrustalAI delivers three core deliverables: timestamped logs, confidence metrics, and oversight thresholds for EU AI Act compliance. Our plug-and-play solution requires no model modification and integrates directly with your existing computer vision architecture.
The table below outlines the specific deliverables TrustalAI provides for your acceptance file:
| Deliverable | EU AI Act requirement | Technical specification | Operational benefit |
|---|---|---|---|
| Timestamped per-prediction logs | Article 12 | Under 100 ms latency logging of every inference | Full auditability and reconstructed decision pathways |
| Real-time confidence metrics | Article 13 | Probabilistic scoring per bounding box/classification | Proves the AI system's reliability in the real deployment context |
| Oversight escalation thresholds | Article 14 | Automated trigger for human intervention | Satisfies the requirement for an activatable human oversight mechanism |
Across deployments implementing these deliverables, TrustalAI data shows a 40% reduction in perception incidents in industrial robotics applications. We provide the exact data required to prove that your machine operates safely within its defined parameters.
Navigating AI liability: A path to proactive compliance
Before acceptance, integrators must prove the AI system was evaluated in its real context, trace every decision with a confidence level, implement activatable human oversight, and maintain continuous documentation. The Machinery Regulation (EU) 2023/1230 and the EU AI Act (Regulation (EU) 2024/1689) are strictly cumulative; one does not replace the other. By integrating TrustalAI, integrators achieve a proven 40% reduction in perception incidents in industrial robotics applications (TrustalAI data).
With less than 5 months to August 2, 2026, assembling the technical acceptance file is an immediate priority for any integrator active on robotics and industrial vision AI projects. TrustalAI covers the technical documentation layer; the legal drafting remains with the operator. Reduce the contractual risk on your next AI project: let's talk.
FAQ: EU AI Act and integrator liability
Does the Machinery Directive cover AI components in a robotic cell?
Not fully. The Machinery Regulation (EU) 2023/1230 covers the mechanical safety of the machine as a whole. It does not provide the methodological framework to audit per-prediction AI decision-making. The EU AI Act adds the missing layer: per-prediction logs, confidence metrics, and an activatable human oversight mechanism, none of which the Machinery Regulation requires at delivery.
To understand the boundary between these regulations, we must look at the nature of the control systems involved. An in-scope example is a vision-guided robotic cell utilizing machine learning classification for quality inspection or dynamic part picking. Because the ML model relies on probabilistic training data rather than hardcoded rules, its behavior cannot be guaranteed through traditional mechanical safety relays. The EU AI Act steps in to regulate this probabilistic uncertainty, requiring the integrator to prove that the artificial intelligence operates within acceptable risk thresholds.
Conversely, an out-of-scope example is a deterministic PLC-controlled machine with no AI layer. If a robotic arm moves between fixed coordinates based purely on standard programmable logic controllers, it falls entirely under the machinery regulation. There is no probabilistic decision-making, no training data involved, and therefore no requirement for Article 12 logging or Article 14 human oversight. The distinction lies entirely in how the machine processes information to execute its physical actions.
Can an integrator transfer EU AI Act liability to the AI supplier?
No. An integrator deploying a high-risk AI system carries their own documentation obligations as deployer under Article 26 of Regulation (EU) 2024/1689, independently of what the AI supplier delivered or contractually guaranteed. That article requires, in particular, that the deployer retain the automatically generated logs for an appropriate duration.
If an incident occurs on the production line, market surveillance authorities will demand proof that the system was correctly supervised in its final deployment environment. A commercial indemnification clause does not satisfy this regulatory obligation. The law explicitly separates the responsibilities of providers (those who develop the AI models) and deployers (those who put the AI models into service in a specific professional context). When you integrate a third-party vision model into a robotic cell and install it at a client's facility, you are executing the act of deployment.
The regulatory logic is that the deployment environment itself introduces new variables, such as ambient lighting, specific dust levels, or unique part geometries, that the original provider could not have fully anticipated during the initial conformity assessment. Therefore, the obligation to monitor the system's performance, maintain the logs, and activate human oversight remains firmly with the integrator. We strongly encourage readers to seek specialized legal counsel for case-specific contractual risk analysis, as attempting to bypass these obligations through standard subcontracting agreements will leave your firm exposed to severe regulatory penalties.
What happens if the client refuses the SAT?
The client can legitimately block the Site Acceptance Test if the integrator cannot produce the AI reliability documentation required by the EU AI Act. Immediate consequences: last payment milestone frozen (typically 20-30% of project value), contractual delay penalties applied, performance bond potentially at risk. Without per-prediction logs (Art. 12) and documented supervision thresholds (Art. 14), the integrator cannot prove the embedded AI module is compliant.
This situation can take months to resolve without a plug-and-play per-prediction traceability solution. When a client refuses the SAT, the financial impact cascades through the integrator's entire business model. A frozen 30% milestone often represents the entirety of the project's profit margin. Furthermore, as the system sits idle on the client's floor awaiting compliance verification, the integrator incurs ongoing engineering costs trying to retroactively build a conformity assessment.
Industrial clients are increasingly educated on their own liabilities. If they accept a non-compliant high-risk AI system, they assume the operational risk. Therefore, their quality and legal departments will rigorously audit your technical documentation before signing off. If you cannot produce the exact confidence score of the model during the test runs, the client has a legal basis to claim the machine is not fit for purpose. We recommend consulting specialized legal counsel for case-specific contractual risk assessment.
How long does it take to produce AI reliability evidence before acceptance?
TrustalAI delivers first documentation elements within 2 weeks on real production data. Because our solution operates as an independent reliability layer, it generates the required evidence for Articles 12, 13, and 14 without requiring any model modification or process change.
Traditional approaches to achieving algorithmic compliance often involve retraining models, altering the core architecture, or attempting to build custom logging infrastructure from scratch. These methods can delay a project by several months and introduce new points of failure into the system. By utilizing a black-box compatible approach, the integration timeline is drastically compressed.
Within the first 14 days of deployment on the client's site, the system captures the necessary baseline data to establish confidence metrics and define the thresholds for human oversight. This rapid turnaround means that when the SAT date arrives, you have a complete, automatically generated technical file ready for review. The ability to produce this evidence quickly not only secures your final payment milestone but also demonstrates a high level of technical maturity and regulatory competence to your end client, solidifying your position as a trusted system integrator in the era of regulated artificial intelligence.
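As an illustration of what deriving an escalation threshold from baseline data can look like (the percentile choice is an assumption for this sketch, not a regulatory value), one simple approach takes a low percentile of the confidences the model produced on verified-correct predictions during the baselining window:

```python
# Illustrative threshold derivation from commissioning data. The 5th
# percentile is an assumed starting point; the final value belongs in
# your documented risk analysis.
def derive_threshold(correct_prediction_confidences: list[float],
                     percentile: float = 5.0) -> float:
    scores = sorted(correct_prediction_confidences)
    k = max(0, int(len(scores) * percentile / 100.0) - 1)
    return scores[k]

baseline = [0.91, 0.97, 0.88, 0.95, 0.99, 0.93, 0.90, 0.96]
print(derive_threshold(baseline))  # 0.88 on this toy sample
```

Whatever method you use, the derivation itself should be documented, since it is part of the justification for your Article 14 oversight thresholds.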