A Conversation with Helen Dixon on GDPR, AI, and the Future of EU Regulation
A grounded look at gaps in GDPR enforcement, what the SRB ruling really leaves unresolved, and why the next phase of European tech regulation depends on fixing foundational problems.
Jan 20, 2026

At the Eyes-Off Data Summit 2025, former Irish Data Protection Commissioner Helen Dixon took the stage for a candid Q&A with Yves-Alexandre de Montjoye, Professor of Applied Mathematics and Computer Science at Imperial College London, to discuss the realities of EU digital regulation.
The discussion exposed the systemic constraints regulators work within: decentralised enforcement across national authorities, limited case law to anchor difficult questions, inconsistent implementation of GDPR, and a forthcoming AI regulatory landscape that may replicate these weaknesses.
These dynamics clarify why enforcement lags, why anonymisation law remains contested, and which structural changes are needed for Europe to govern AI and high-risk systems more effectively. What follows are her key insights from the Summit.
The Limits of the One-Stop-Shop Governance Model
The session opened with a question that has shadowed the GDPR since 2018: does the one-stop-shop model actually work?
Dixon explained that the model’s shortcomings stem less from its design on paper and more from how it has been operationalised. National authorities arrived at the GDPR with vastly different resources, expertise and priorities. Coordinating 40+ regulators on every cross-border case created friction, delay and, at times, institutional standoffs.

She contrasted the current system with the EU Commission’s original 2012 proposal, where the lead authority would have had genuine decision-making power, subject, of course, to judicial review and limited checks. That model, she noted, could have delivered faster, clearer outcomes without requiring consensus from dozens of authorities for routine or repeat complaints.
The political compromise that produced today’s structure prioritised independence and national control, but in practice, it has led to inefficiencies that undermine the timely protection of rights.
Dixon emphasised that the proximity to individual complainants, while valuable in theory, cannot compensate for a system that forces regulators to rerun the same procedural gauntlet for every platform-level issue.
Anonymisation Still Lacks a Clear Legal Boundary
Much of the discussion turned to the CJEU's SRB ruling (EDPS v SRB, C-413/23 P) and the broader uncertainty around anonymisation.
The Court confirmed that pseudonymised data is not automatically personal data for every recipient: in SRB, Deloitte received pseudonymised comments without access to the information needed to reidentify anyone, so the data may not have been personal from its perspective. But for the SRB as controller, the data was personal at the point of collection, meaning all information obligations still applied.
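To make that distinction concrete, here is a minimal Python sketch of the kind of arrangement the Court was describing, assuming a hypothetical controller that tokenises identities and keeps the re-identification table to itself. The function names and record fields are illustrative, not drawn from the case.

```python
import secrets

# A minimal, hypothetical sketch of the arrangement at issue in SRB:
# the controller keeps the re-identification table; the recipient
# receives only opaque tokens plus free-text comments.

def pseudonymise(records, key_table):
    """Replace direct identifiers with random tokens.

    The controller retains key_table (token -> identity), so the data
    remains personal in its hands. A recipient holding only the output,
    with no access to key_table, may lack any reasonable means of
    re-identification.
    """
    shared = []
    for record in records:
        token = secrets.token_hex(8)        # random, unlinkable token
        key_table[token] = record["name"]   # mapping never leaves the controller
        shared.append({"id": token, "comment": record["comment"]})
    return shared

key_table = {}  # held by the controller only
recipient_view = pseudonymise(
    [{"name": "Alice", "comment": "I object to valuation step 3"}],
    key_table,
)
# recipient_view contains tokens and comments only; whether it counts as
# personal data for the recipient turns on what else the recipient holds.
```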
Dixon acknowledged that the SRB ruling restated Recital 26 rather than resolving the practical ambiguity technologists face. It confirmed, once again, that identifiability depends on “means reasonably likely to be used”, a test that incorporates context, capability, cost and auxiliary data. But it did not specify what suffices in borderline cases.
What is needed, she argued, is a complete factual case referred to the Court of Justice, one that allows the CJEU to draw an operational line rather than deferring key questions back to national courts. Without that, SRB provides a useful steer but not the standard practitioners want.
She also emphasised a point sometimes lost in technical debates: anonymisation will never have a single threshold. The bar shifts with the data type, the actors involved, the incentives, the reidentification risk, and the surrounding ecosystem. A categorical answer is unlikely, and perhaps impossible.
Still, she noted, this period of uncertainty is also an opportunity for technical experts to help define the boundary through evidence, testing and realistic threat modelling.
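One way technologists can supply that evidence is empirical uniqueness testing under explicit adversary assumptions. The sketch below (hypothetical fields and threat model, not a legal test) measures how many records are unique on the attributes a given adversary is assumed to hold:

```python
from collections import Counter

# Illustrative only: an empirical uniqueness test under explicit adversary
# assumptions. Field names and the threat model are hypothetical; this is
# not the legal test, just the kind of evidence that can inform it.

def uniqueness_rate(records, quasi_identifiers):
    """Fraction of records unique on the attributes the adversary is assumed to hold."""
    keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
    counts = Counter(keys)
    return sum(1 for k in keys if counts[k] == 1) / len(records)

records = [
    {"zip": "D04", "yob": 1980, "sex": "F"},
    {"zip": "D04", "yob": 1980, "sex": "M"},
    {"zip": "T12", "yob": 1975, "sex": "M"},
    {"zip": "T12", "yob": 1975, "sex": "F"},
]

# A well-resourced adversary holding zip, year of birth and sex: everyone is unique.
print(uniqueness_rate(records, ["zip", "yob", "sex"]))  # 1.0
# A weaker adversary who knows only zip codes: no record is unique.
print(uniqueness_rate(records, ["zip"]))                # 0.0
```

The same dataset can look safe against one adversary and fully identifiable against another, which is exactly why the bar Dixon describes resists a single threshold.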
AI Act Enforcement Will Be Harder Than GDPR
A question from the audience turned the discussion to the upcoming AI Act and whether its decentralised enforcement structure risks repeating the GDPR’s failures.
Dixon did not deny that the challenges will be significant.
Where GDPR already strains the capacity of national authorities, the AI Act introduces multiple national competent authorities within each country, some of which may encounter only one qualifying case per year.

Expertise will be thinly distributed. Coordination will be harder. And unlike the GDPR, the AI Act relies on market surveillance bodies, whose enforcement activity is typically infrequent.
Ireland, for example, will have a central coordination authority that may not itself be an AI regulator, yet it will represent the country at the European level. Managing information flows, technical knowledge and consistent decision-making across such a system will be far from straightforward.
The larger concern Dixon raised was structural: The AI Act rests on GDPR as its legal foundation. If GDPR enforcement is unstable, inconsistent or procedurally overburdened, the AI framework will inherit those weaknesses.
A recalibration of GDPR enforcement, she argued, is urgently needed, not only to improve data protection, but to stabilise the regulatory architecture that future EU laws depend on.
Individual Rights vs. Systemic Problems
One of the most interesting points in the discussion was whether data protection law should evolve beyond its individual-rights focus. Dixon said it should. Many of today’s risks arise not from isolated violations but from sector-wide practices and shared infrastructures. These are issues that individual complaints cannot meaningfully capture or resolve.
She pointed to two recurring examples from her tenure: international data transfers and adtech. Both issues affect millions of people simultaneously, yet regulators are forced to approach them through narrow, individual complaints about a single company.
The structural problems (opaque tracking chains, systemic profiling, shared dependencies across platforms) are inherently collective, but the law only gives regulators tools designed for one data subject at a time.
The result, as she described it, is a mismatch between the scale of modern data practices and the procedural pathways available to regulate them. Entire sectors can rely on the same high-risk processing operations, yet enforcement must unfold case-by-case, complaint-by-complaint, even when the underlying issue is identical.
For Dixon, this gap is no longer sustainable: meaningful protection requires a framework that can address ecosystem-level behaviour, not just individual grievances.

The Right to Explanation and Complex AI
The session closed with a question on how the GDPR’s rights, particularly explanation and safeguards against automated decisions, apply to complex machine-learning systems.
Dixon noted that these questions mirror earlier debates around blockchain: when technology doesn’t fit neatly within the GDPR’s conceptual schema, the law shows its seams. She did not offer a simple answer, in part because none exists yet, but highlighted the need to interpret requirements through their purpose rather than their literal mapping onto each technical architecture.
The AI Act will force this conversation to mature.
The Bottom Line
The conversation highlighted a regulatory landscape that is both evolving and uneven. GDPR enforcement remains hampered by structural inefficiencies. Anonymisation law still lacks a firm operational standard. The AI Act risks inheriting the same challenges unless foundational issues are addressed. And regulators need new tools to confront ecosystem-level risks that individual complaints cannot capture.
Yet Dixon’s comments also pointed to a path forward: clearer case law, procedural reform, better coordination across regulators, and a more realistic alignment between legal concepts and technical realities.
For organisations building in this space, the message is consistent with the broader themes of the Summit: Regulatory uncertainty is not a reason to delay innovation, but it is a reason to build with evidence, context, and defensible reasoning, because the law is moving toward questions that demand it.
Key Takeaways from the Q&A with Helen Dixon
The one-stop-shop GDPR model struggles in practice. Coordination across dozens of authorities can slow even routine cases, leaving cross-border enforcement fragmented and inefficient.
Anonymisation law still lacks a clear boundary. The SRB ruling reiterates principles but does not give technologists a clear test for when data is truly anonymous in practice.
The AI Act faces the same structural weaknesses. Decentralised authorities, uneven expertise, and unclear coordination channels will make consistent enforcement difficult from day one.
Individual-rights pathways are no longer enough. Sector-wide systems like adtech, international transfers, and platform infrastructures create collective risks that cannot be addressed one complaint at a time.
Regulators need stronger procedural foundations. Without clearer pathways, better coordination, and more consistent guidance, both GDPR enforcement and AI governance will continue to strain under real-world complexity.