In the past few weeks alone, Anthropic shipped Claude Opus 4.6, which includes a million-token context and autonomous agent teams. OpenAI launched a desktop Codex app that manages fleets of coding agents. GPT-5.2 is optimized for industry-specific tasks. DeepSeek and Moonshot continue to churn out open-source models from China that match or exceed Western frontier labs at a fraction of the cost. Grok 4 is climbing the benchmarks. Arcee AI — a 30-person San Francisco startup — trained a 400B-parameter open-source model from scratch in six months for $20 million and released it under Apache 2.0. The gap between "research preview" and "deployed in production" has collapsed to days.
And while all of this is happening, the European Union is arguing with itself about the rules it wrote eighteen months ago.

The Omnibus That Isn’t Simple
Last November, the Commission published its Digital Omnibus proposal (COM(2025) 836). “Omnibus” is Latin for “for all.” In EU-speak, it means “we’re going to amend a bunch of laws at once and hope nobody reads the details.”
The stated goal is to simplify the AI Act, reduce administrative burden, and give companies breathing room. The actual content: delay deadlines the Commission failed to prepare for; weaken transparency requirements; gut registration obligations; downgrade mandatory AI literacy to voluntary recommendations; and expand the ability to process sensitive personal data with fewer safeguards. All wrapped in the soothing language of “pragmatism” and “competitiveness.”
To be fair, some of it is genuinely reasonable. The August 2026 compliance deadline for high-risk AI systems was always unrealistic. Harmonised standards aren’t ready, and most member states haven’t even designated their national enforcement authorities. Delaying that deadline isn’t deregulation, it’s acknowledging reality.
But burying loopholes inside a “simplification” package? That’s a different game entirely. And the corporate fingerprints are all over it. Digital industry lobby spending in Brussels jumped 33.6% from €113 million in 2023 to €151 million in 2025, according to Corporate Europe Observatory and LobbyControl. Trade associations like DigitalEurope and CCIA (Google, Apple, Meta, Amazon, Uber) had their position papers mapped almost word-for-word to the Commission’s proposed text.
Nobody should be surprised. This is how Brussels works.
The Machine Responds
So the Commission drops its Omnibus. What happens next? Does Europe’s legislature review it, vote on it, and move on?
Of course not. This is the EU.
The file arrives at the European Parliament, where 720 MEPs across eight political groups must process it. Two committees get lead responsibility: IMCO (Internal Market) and LIBE (Civil Liberties). Then other committees weigh in with “opinions.” Meanwhile, the Council of the EU (representing member state governments) works on its own position. Eventually, all three institutions — Commission, Parliament, Council — enter “trilogue” negotiations to finalize a text.
This process will take at least 12 months. More likely eighteen to twenty-four.
Let that sink in. The Commission published a proposal to “simplify” AI regulation, and the simplification itself will take one to two years of institutional grinding before it becomes law. By which time, the AI systems it governs will be as relevant as regulating the horse-drawn carriage.
JURI Pushes Back
To the Parliament’s credit, not everyone is willing to wave this through.
On 2 February 2026, the Legal Affairs Committee (JURI) published a draft opinion with 34 amendments that systematically push back against the Commission’s weakest proposals. Delete the literacy downgrade. Restore the registration obligations for self-assessed AI systems. Keep the “strictly necessary” threshold for processing sensitive data. Preserve binding codes of practice. Block uncontrolled real-world testing of high-risk AI in medical devices and vehicles. Add a definition for AI agents. Ban AI-generated non-consensual sexual imagery, something the Commission had conveniently ignored even after Grok generated an estimated 3 million sexualised images in 11 days, including 23,000 of children.
Will it become law as written? No. It’s an opinion from an opinion-giving committee. It feeds into the lead committees’ reports. Then it gets negotiated. Then amended. Then negotiated again. JURI votes on the draft on February 24. The feedback period on the Commission’s proposals closes around March 9. IMCO and LIBE draft their reports in Q1-Q2. The Council aims for a general approach by April. Trilogue follows. Final adoption? Late 2026 if you’re optimistic. Mid-2027 if you’re realistic.
The Real Problem
The AI Act was passed in 2024 as the most ambitious technology regulation in a generation. Eighteen months later, before most of it has even taken effect, the Commission is already rewriting it, partly because the implementation infrastructure doesn’t exist, and partly because industry lobbying made the original timelines politically inconvenient… and the suitcases of cash impossible to resist, I’m sure.
The Parliament is pushing back, as it should. But the pushback itself takes a year or two to process through the institutional machinery. By the time the amended Omnibus becomes law, we’ll be living in a world of autonomous AI agents operating at a scale and speed that the original Act never imagined, let alone the Omnibus amendments.
This is the fundamental mismatch. The EU legislates on geological timescales. AI moves on internet time. The Commission writes rules. Misses its own deadlines. Proposes to weaken the rules that it couldn’t implement. Parliament fights to preserve them. Council negotiates. Trilogue grinds. And the models these rules are supposed to govern become obsolete before the ink dries.
Meanwhile, the US ships. China ships. The Gulf states ship. They’re not spending eighteen months debating whether “strictly necessary” or just “necessary” is the right threshold for processing biometric data in bias detection systems. They’re deploying.
How can we trust this circus to govern AI? The Commission writes rules it can’t implement, then rewrites them before they take effect. Parliament fights to preserve rules that nobody can yet comply with. The Council horse-trades behind closed doors. And the whole process takes longer than the technology’s shelf life.
JURI votes on February 24. Trilogue won’t wrap until 2027. By then, we’ll be on GPT-7 and autonomous agent swarms, and Brussels will still be debating the definition of “high-risk.”
Europe doesn’t need another Omnibus. It needs a revolution.
Julien
References
Primary Sources
European Commission Digital Omnibus on AI — COM(2025) 836 — The Commission’s original proposal (19 November 2025)
JURI Draft Opinion — PE784.179 — Sergey Lagodinsky’s 34 amendments (2 February 2026)
EDPB-EDPS Joint Opinion 1/2026 — Data protection regulators’ assessment (21 January 2026)
Parliamentary Proceedings
LIBE Hearing on the Digital Omnibus — IAPP coverage of the 26 January 2026 hearing
JURI Committee — Members — Full list of current members
Analysis and Commentary
The European Union Changes Course on Digital Legislation — Lawfare’s legal analysis (15 December 2025)
EU Digital Omnibus Package: A Practical Guide — Kennedys Law explainer
Why Europe Needs Conversational Liability for AI Harms — Lagodinsky on AI chatbot liability (TechPolicy.Press, February 2026)
Grok Nudification Crisis
Grok Created Three Million Sexualized Images, Research Says — CCDH research findings (22 January 2026)
Grok Scandal Prompts MEP Move to Ban Non-Consensual AI Porn — EUobserver (6 February 2026)
Lobbying
Digital Industry Lobby Spending — Corporate Europe Observatory / LobbyControl — 33.6% increase: €113M (2023) → €151M (2025)