What Does ‘Anonymous’ Really Mean? Inside the ICO’s Guidance with Paul Comerford and Rory O’Keeffe
An in-depth look at how the ICO interprets anonymity, why context matters for identifiability, and what organisations must do to assess and justify anonymisation in practice.
5 minutes
Nov 19, 2025

At the Eyes-Off Data Summit 2025, Paul Comerford (ICO) sat down with lawyer and podcast host Rory O’Keeffe (RMOK Legal) to unpack the practical meaning of anonymity under GDPR. Their session tackled a question that cuts to the heart of nearly every data protection debate today: what does it actually mean for data to be “anonymous”?
The session moved beyond surface-level definitions and into the underlying difficulty: anonymity sits at the intersection of legal thresholds, statistical methods, and real-world threat models. The result was a clear-eyed examination of what organisations can legitimately call “anonymous” today, and where the boundaries begin to blur.
Redrawing the Line Between Anonymous and Pseudonymous
Under the UK and EU GDPR, data qualifies as anonymous information when it does not relate to an identified or identifiable natural person. The assessment hinges on the “means reasonably likely to be used” to identify a person, as articulated in Recital 26. That test considers cost, computational requirements, and the state of available technologies when determining whether a person could realistically be reidentified.
By contrast, pseudonymised data remains personal data, even when identifiers are replaced or separated. Its classification hinges on the fact that the additional information needed to reidentify individuals still exists somewhere, even if securely stored. In practice, the boundary between the two concepts is a gradient rather than a hard line. The ICO captures this with its spectrum of identifiability—a visual that many practitioners now rely on to navigate risk, context, and exposure.
O’Keeffe noted that most companies still misunderstand the distinction. For many, anonymisation feels like a promise of safety, while pseudonymisation feels like a technicality.
The SRB Ruling and Its Impact
The Court of Justice of the European Union's September 2025 ruling in the SRB case (EDPS v SRB, Case C-413/23 P) has been the subject of widespread debate. The case centred on whether pseudonymised data shared with a third party should still be treated as personal data when that party does not have access to the additional information needed to reidentify individuals.
While the judgment is not binding in the UK, its reasoning aligns closely with the ICO’s existing approach: identifiability must be assessed from the perspective of the recipient.

If a receiving party does not possess, and is not reasonably likely to obtain, the additional information needed to reidentify individuals, the data may be treated as anonymous for that specific context. This reinforces the idea that anonymity is not purely a technical test, but one rooted in capabilities, relationships, and real-world constraints.
In that sense, the SRB ruling does not overturn existing ICO thinking. It reinforces the idea that anonymity cannot be separated from context, actors, and capabilities. The ruling’s true influence may emerge only when European bodies revisit their anonymisation guidelines in the coming years.
Techniques for Anonymity and Why None Are Perfect
When asked to identify techniques that can achieve “true anonymity”, Comerford resisted the temptation to simplify. No technique is risk-free, he reminded the audience; if risk is eliminated entirely, the usefulness of the data tends to evaporate with it.
Techniques generally fall into two broad families. Generalisation reduces precision through methods such as aggregation or coarsening, while randomisation introduces uncertainty using approaches such as differential privacy or synthetic data generation. Differential privacy remains a leading standard for randomisation when correctly implemented.
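To make the two families concrete, here is a minimal Python sketch (illustrative only, with made-up data, not a vetted implementation): generalisation coarsens an exact age into a band, while randomisation releases a count through the Laplace mechanism, the textbook building block of differential privacy.

```python
import numpy as np

# Generalisation: reduce precision so individual records blend into groups.
def coarsen_age(age: int, band: int = 10) -> str:
    """Replace an exact age with a coarse band, e.g. 34 -> '30-39'."""
    low = (age // band) * band
    return f"{low}-{low + band - 1}"

# Randomisation: the Laplace mechanism. Noise is scaled to
# sensitivity / epsilon, so smaller epsilon means more noise:
# stronger privacy, less utility.
def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with calibrated Laplace noise."""
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

ages = [23, 34, 35, 41, 67]
print([coarsen_age(a) for a in ages])    # ['20-29', '30-39', '30-39', '40-49', '60-69']
print(dp_count(len(ages), epsilon=1.0))  # e.g. 4.3 -- varies on every run
```

Note how epsilon directly encodes the trade-off Comerford described: pushing risk towards zero with a very small epsilon drowns the count in noise, and the usefulness of the data evaporates with it.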
However, techniques alone are not enough. Implementation quality is essential. Effective anonymisation depends on using tested, well-maintained libraries, selecting appropriate parameters for the context, and understanding the limitations of each approach.
Much of the ICO’s guidance highlights the importance of relying on proven codebases and established standards, including NIST’s differential privacy framework, rather than attempting bespoke implementations. The truth is that real-world implementations fail far more often due to poor configuration than flawed mathematics.
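As one illustration of that point, the sketch below uses IBM's open-source diffprivlib, one example of a maintained differential privacy library (the data here is made up). The contrast is entirely in the configuration: supplying bounds from domain knowledge is a sound parameter choice, while omitting them forces the library to infer bounds from the data itself, and diffprivlib emits a PrivacyLeakWarning for precisely that kind of quiet misconfiguration.

```python
import numpy as np
from diffprivlib import tools as dp  # IBM's open-source DP library

ages = np.array([23, 34, 35, 41, 67])

# Correct configuration: bounds come from domain knowledge, not from the
# data itself (computing them from the data would leak information).
noisy_mean = dp.mean(ages, epsilon=1.0, bounds=(18, 100))

# Misconfiguration: with no bounds, the library must infer them from the
# data and emits a PrivacyLeakWarning -- the same mathematics, undermined
# by a configuration failure rather than a flawed algorithm.
leaky_mean = dp.mean(ages, epsilon=1.0)

print(noisy_mean, leaky_mean)
```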
In practice, organisations must also consider the broader ecosystem. O’Keeffe’s advice to clients is simple: scrutinise vendors. Ask for whitepapers, third-party audits, and explanations of the model choices and parameters. Trust cannot come from a product description alone. Organisations need evidence that vendors understand generalisation, randomisation, and reidentification risk, and can articulate why their particular approach works for the intended use case.
Proving Anonymity: What Regulators Expect
When organisations need to demonstrate that their data is truly anonymous, whether in an audit, an enforcement action, or a breach investigation, regulators look first to process and documentation. A well-reasoned approach can be as important as the technical outcome.
This includes clear records of decisions, risk assessments, threat modelling, parameter choices, and testing. The motivated intruder test remains a practical way to assess residual risk, simulating what a reasonably capable and motivated attacker could achieve with accessible resources. The nature of this test varies depending on whether the data is published openly, shared within a controlled environment, or restricted to vetted partners.
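What that simulation looks like in practice depends on the release context, but a toy sketch of the core linkage step (hypothetical data, pandas for the join) shows why unique quasi-identifier combinations are where residual risk concentrates:

```python
import pandas as pd

# "Anonymised" release: direct identifiers removed, quasi-identifiers kept.
released = pd.DataFrame({
    "age_band": ["30-39", "30-39", "60-69"],
    "postcode_prefix": ["DU1", "DU2", "DU1"],
    "diagnosis": ["asthma", "diabetes", "asthma"],
})

# Auxiliary data a motivated intruder could plausibly obtain,
# e.g. from a public register or social media.
auxiliary = pd.DataFrame({
    "name": ["A. Murphy", "B. Byrne"],
    "age_band": ["60-69", "30-39"],
    "postcode_prefix": ["DU1", "DU2"],
})

# The attack itself is a simple join on the quasi-identifiers.
matches = released.merge(auxiliary, on=["age_band", "postcode_prefix"])

# Any quasi-identifier combination that matches exactly one released row
# is a candidate reidentification and should be logged as residual risk.
unique = matches.groupby(["age_band", "postcode_prefix"]).filter(lambda g: len(g) == 1)
print(unique)
```

A real motivated intruder test would also vary the auxiliary sources by context: open publication assumes a far richer auxiliary dataset than a controlled environment or a vetted research partner.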
Preparation matters. GDPR’s 72-hour notification requirement leaves little time to reconstruct rationales after the fact. Many of the fines imposed on smaller companies stem not from malicious behaviour but from poor preparation and an inability to substantiate the decisions they made.

Anonymisation as an Evolving Field
Anonymisation is not a static topic. The ICO continues to monitor advances across multiple domains. AI raises new questions about identifiability as model inversion, memorisation, and extraction attacks challenge traditional assumptions about what counts as “input data.” Adtech remains another complex area, where tracking mechanisms and real-time bidding ecosystems create additional layers of identifiability risk.
New guidance on anonymisation for research is already in development, reflecting the ICO’s view that different sectors require tailored interpretations. As technology evolves, so does the regulatory understanding of what it means to safeguard identity.
The Bottom Line
The Q&A between Comerford and O’Keeffe at the Eyes-Off Data Summit 2025 in Dublin offered a blend of legal clarity, technical nuance, and regulatory pragmatism. Anonymous data is not a static category but a contextual determination shaped by capabilities, motivations, available technologies, and evidence.
Organisations do not need perfect risk elimination, but they do need defensible reasoning, rigorous documentation, and a clear understanding of how their techniques work in practice.
For companies building or adopting privacy-enhancing technologies (PETs), the session underscored a simple truth: real privacy assurance depends as much on the process surrounding anonymisation as on the techniques themselves.
The ICO’s guidance remains one of the clearest frameworks available—but its effectiveness ultimately rests on how well organisations stress test, document, and justify the choices they make.

Key Takeaways from the Q&A with Paul Comerford
Anonymity depends on context. Identifiability must be assessed based on the real capabilities of whoever receives the data, not just the technique applied.
Pseudonymisation is not a privacy shield. It usually remains personal data, unless the recipient genuinely lacks the means to reverse it.
Techniques only work when implemented well. Methods like differential privacy or synthetic data reduce risk, but their strength depends on configuration, parameters, and proven codebases.
Documentation is key. Transparent reasoning, supporting records, and testing are important for showing regulators how you've approached and managed risk.
Tags: anonymisation, data governance, eyes-off data summit, regulation