The EU’s Digital Omnibus: what it is and why it matters for AI
On 19 November 2025, the European Commission unveiled the “Digital Omnibus” – a
twin package of proposals to recalibrate Europe’s flagship digital laws. One proposal
focuses on data and cyber legislation (including the GDPR, ePrivacy Directive, Data
Act, Data Governance Act and NIS2), the other on targeted amendments to the EU AI
Act. Together, they aim to simplify overlapping rules, cut compliance costs and respond
to concerns that the EU’s regulatory framework is slowing down innovation and
competitiveness.
The political motivation is clear. After a decade of intense law-making, businesses
complain of overlapping obligations, multiple reporting channels and legal uncertainty,
while policymakers worry about Europe falling behind in data-driven innovation and AI.
Unsurprisingly, the package has split opinion: many AI developers and SaaS providers
see long-awaited pragmatism, while privacy and civil society groups warn against
"watering down" hard-won safeguards. The Digital Omnibus lands right in the middle of
that debate.
The most politically sensitive element is the AI Omnibus proposal, which re-opens the
EU AI Act only months after it entered into force. The message from Brussels is clear:
the architecture of the AI Act stays, but the knobs and dials need adjusting.
Key AI Act changes under the Digital Omnibus
1. More time to comply with high-risk rules
First, the timelines for high-risk AI are pushed back. Obligations for high-risk AI
systems, currently due from 2 August 2026, would only start to apply 6–12 months after
the relevant technical standards and "support tools" are in place, and in any event no
later than 2 December 2027, or 2 August 2028 for systems covered by product-safety
legislation. That
gives providers and deployers more realistic room to design governance frameworks
and is intended to avoid companies having to design compliance programmes before
the rulebook’s technical details are even finished.
2. Lighter AI literacy and documentation duties
Second, the much-discussed AI literacy duty is softened. Rather than placing a hard
legal obligation on every provider and deployer to run formal AI literacy programmes,
the responsibility to promote awareness shifts primarily to the EU institutions and
Member States. Organisations are encouraged to build internal capability, and high-risk
systems must still be overseen by appropriately trained staff, but the blanket
requirement is scaled back. SMEs would also benefit from simplified technical
documentation and more proportionate penalties, narrowing the gap between
hyperscalers and growth-stage players.
3. Targeted flexibility on data and bias mitigation
The AI Act is tied more closely to the data reforms. The Digital Omnibus clarifies that
personal data can be used to train AI models on the basis of legitimate interests
(subject to safeguards and a right to object where required) and introduces a narrow
exemption for residual special category data in training datasets, provided it is strictly
controlled and not allowed to surface in outputs.
The proposals also introduce extra flexibility for special category data. Limited use of
sensitive data would be allowed for developing and operating AI systems where
removing it would be disproportionate, provided robust technical and organisational
controls are in place. This acknowledges a practical reality: you cannot meaningfully
test for discrimination without, in some form, touching sensitive attributes. Separately,
biometric data used purely for on-device identity verification under the user’s control
would get its own legal basis.
4. Streamlined oversight and registration
The EU AI Office would gain a stronger role in supervising certain systems based on
general-purpose AI models, particularly where the model and system are built by the
same provider. Centralising enforcement for the most complex systems is meant to
avoid 27 different interpretations of the same rules and give cross-border providers a
single point of reference. At the same time, providers who classify certain systems as
low-risk (for example, purely internal, ancillary tools) would no longer have to register
them in the EU AI database, removing what many saw as “paperwork without benefit”.
5. Grace period for watermarking and transparency
Finally, there is a more pragmatic approach to transparency and fairness. Providers
placing AI systems on the market before August 2026 would benefit from a six-month
grace period to meet watermarking and content-labelling obligations, pushing the
effective
deadline to February 2027. In parallel with the GDPR changes, the AI framework
explicitly recognises the need to process some sensitive data for bias detection and
mitigation, under tight safeguards.
What should organisations take away?
The Digital Omnibus does not rewrite the AI Act, but it does make it more manageable.
High-risk rules are still coming, but oversight of the most powerful systems is more
centralized, and the law will be more explicit about when and how data, including
sensitive data, can be used to build fairer systems. For businesses, this is a window of
opportunity: to map AI use cases against Annex I and III, revisit data and model-training
strategies in light of the proposed legitimate-interest and special-category data
flexibilities, and design phased compliance plans that assume the Omnibus will land in
some form – even if the details shift in trilogue.
Author: Adina Ponta