Day 2 at EODS 2025: Rethinking Trust, Technology, and Responsibility

Day 2 of the Eyes-Off Data Summit 2025 moved from theory to implementation, exploring how privacy technologies are governed, deployed, and scaled responsibly across AI, data, and policy.

11 minutes

Oct 15, 2025

If Day 1 at the Eyes-Off Data Summit 2025 traced the rise of PETs from research to production, Day 2 explored what happens next: how these technologies are governed, implemented, and scaled responsibly. The discussions moved between policy and practice, between technical innovation and organisational culture. What emerged was a picture of a field in motion: privacy as an ecosystem, not a checkbox.

The day was split into two tracks. Strategic sessions examined how policy and governance are adapting to the PETs era, while the technical track went hands-on with tools shaping the next generation of privacy-preserving data. Here's a summary of the discussions, workshops, and debates that shaped the second day in Dublin.

Governance, Trust and PETs

A strong theme of the morning was trust as infrastructure. Compliance is no longer enough; organisations must demonstrate that privacy principles are embedded in their systems.

There was broad recognition that regulation and practice are still out of step. PETs can provide mathematical guarantees, but regulators often rely on open-ended language such as “appropriate safeguards”. Without a common baseline, adoption risks being inconsistent.

Several participants described efforts to close that gap and to make privacy measurable, so that policy and engineering can speak the same language.

The conversation also touched on speed. AI technologies evolve on quarterly cycles, while governance often lags behind. Organisations are beginning to build for interoperability — systems that can show compliance across multiple regimes without needing to be rebuilt every time rules change.

From Regulation to Reality: Building Privacy Into Systems

One of the clearest insights was that many challenges are less about law or code than about communication. Scientists explain privacy in terms of parameters and probabilities. Policymakers talk in values and rights. Industry leaders want workable models they can put into production. Too often, these groups sit in the same room but fail to understand one another.

PETs are beginning to act as a common bridge. Differential privacy, clean rooms, and federated learning allow organisations to demonstrate privacy in concrete terms. That in turn gives regulators something to test and companies something to operationalise.

Real-world examples are starting to surface. In finance, PETs are being used to analyse fraud across institutions without exposing customer records. In adtech, they are enabling recommendation systems trained across multiple datasets while protecting the individuals behind them. The common thread is that privacy is being designed into systems at the outset, rather than bolted on later.

Differential Privacy in Practice

Differential privacy (DP) dominated both tracks, reflecting its shift from theory to widespread deployment. The discussions showed how DP is no longer confined to academic papers — it’s shaping national statistics, corporate analytics, and cross-industry collaborations.

A key milestone highlighted was the Differential Privacy Deployment Registry, launched by NIST in collaboration with other community partners and designed to document live deployments and share lessons learned. By cataloguing parameters, data types, and accuracy trade-offs, the registry promises to create a shared knowledge base for practitioners, reducing duplication and promoting accountability.

Technical workshops demonstrated how DP can be tuned for specific use cases. The discussion extended beyond the epsilon parameter, focusing on practical trade-offs between accuracy and protection. Participants explored how to automate those choices using adaptive frameworks that can learn optimal noise levels over time.
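To make that trade-off concrete, here is a toy sketch (not drawn from any particular workshop) of the standard Laplace mechanism applied to a simple count query. Smaller epsilon values mean more noise and stronger protection; the dataset and seed are invented for illustration.

```python
import numpy as np

def dp_count(values, epsilon, rng):
    """Differentially private count via the Laplace mechanism.

    A count query has L1 sensitivity 1 (adding or removing one record
    changes the result by at most 1), so noise is drawn from
    Laplace(scale = 1 / epsilon).
    """
    true_count = len(values)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

rng = np.random.default_rng(seed=42)
records = list(range(10_000))  # stand-in for 10,000 sensitive records

for epsilon in (0.1, 1.0, 10.0):
    noisy = dp_count(records, epsilon, rng)
    print(f"epsilon={epsilon:>4}: noisy count ~ {noisy:,.1f} (true = {len(records):,})")
```

Real deployments go further than a single query: they track a cumulative privacy budget across many releases, which is exactly the kind of bookkeeping the adaptive frameworks discussed in the workshops aim to automate.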

Still, the challenge of accessibility remains. Implementing DP requires interdisciplinary literacy: an understanding of statistics, cryptography, and policy. Attendees agreed that training and shared tooling will be key to closing that expertise gap, helping smaller organisations adopt DP without needing a full research team.

As one speaker noted, “Differential privacy is the most mature of the PETs, but only if we make it usable.”

Synthetic Data and the New Foundations of Research

Synthetic data drew attention as one of the most versatile PETs, enabling realistic data generation without exposing sensitive information. What made this year’s discussion distinctive was its shift from concept to calibration: not whether synthetic data works, but how to measure its reliability.

Researchers presented comparative studies showing how different generative models, from GANs to diffusion-based architectures, perform when balancing fidelity, fairness, and privacy. The question of “utility drift” came up often: even when synthetic data mimics statistical distributions accurately, it can fail on edge cases, leading to bias amplification or false confidence in downstream AI models.

Evaluation frameworks are emerging to test synthetic data across three dimensions: privacy leakage, statistical similarity, and task-specific utility. Participants also noted that governance around synthetic data remains underdeveloped. Organisations sometimes assume that once data is synthetic, regulation no longer applies. The reality is subtler: privacy risks depend on the generation method and model transparency.
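As a minimal sketch of what such an evaluation harness might look like, assuming tabular real and synthetic data held as NumPy arrays: the three checks below (a nearest-neighbour distance as a crude leakage proxy, per-column mean differences for statistical similarity, and a least-squares utility gap) are illustrative stand-ins, not the specific frameworks presented at the summit.

```python
import numpy as np

def privacy_leakage_proxy(real, synthetic):
    """Crude leakage proxy: fraction of synthetic rows that sit
    'too close' to some real row (possible memorised records)."""
    dists = np.min(
        np.linalg.norm(synthetic[:, None, :] - real[None, :, :], axis=-1),
        axis=1,
    )
    threshold = np.percentile(np.linalg.norm(real[1:] - real[:-1], axis=-1), 5)
    return float(np.mean(dists < threshold))

def statistical_similarity(real, synthetic):
    """Mean absolute difference of per-column means (lower is better)."""
    return float(np.mean(np.abs(real.mean(axis=0) - synthetic.mean(axis=0))))

def task_utility_gap(real, synthetic, target_col=-1):
    """Does a least-squares fit trained on synthetic data predict the real
    target column about as well as one trained on real data?"""
    def fit_and_score(train, test):
        X_tr, y_tr = train[:, :target_col], train[:, target_col]
        X_te, y_te = test[:, :target_col], test[:, target_col]
        coef, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
        resid = y_te - X_te @ coef
        return 1.0 - resid.var() / y_te.var()  # R^2 on real data
    return fit_and_score(synthetic, real) - fit_and_score(real, real)

rng = np.random.default_rng(0)
real = rng.normal(size=(500, 4))
synthetic = real + rng.normal(scale=0.3, size=real.shape)  # toy "generator"

print("privacy leakage proxy :", privacy_leakage_proxy(real, synthetic))
print("statistical similarity:", statistical_similarity(real, synthetic))
print("task utility gap      :", task_utility_gap(real, synthetic))
```

The toy generator here simply perturbs real rows, so the leakage proxy comes out high: a reminder that good statistical similarity can coexist with poor privacy, which is precisely why multi-dimensional evaluation matters.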

A compelling use case was shared from humanitarian operations, where differentially private synthetic data enabled scenario modelling on refugee movement without exposing individuals’ records. Such examples illustrated that synthetic data is not merely a compliance workaround but a research catalyst, a way to work responsibly when access to real data would be unethical or unsafe.

Responsible AI and the Question of Accountability

Responsible AI sessions tackled the tension between compliance frameworks and moral accountability, and how to ensure that governance does not stop at checkbox exercises. Many argued that “responsible AI” risks becoming a hollow phrase if it cannot be measured.

Discussions centred on re-embedding human judgment in AI development. Ethical checkpoints must be integrated into product cycles, not appended as post-hoc reviews. Participants acknowledged that PETs can provide the technical backbone for these processes by ensuring auditability and verifiable privacy guarantees, but cultural adoption remains the harder challenge.

Another session explored the interplay between ethics, risk, and explainability. Rather than separating them into silos, the group suggested integrating them through PET-driven pipelines where each model decision can be inspected without exposing underlying data. This vision of “transparent privacy”, where accountability and protection reinforce each other, resonated across the day.

The link between AI and privacy was made clear. Models trained on sensitive datasets are not only judged on their predictions, but also on what they remember. PETs are emerging as the way to prevent memorisation from becoming leakage.

The Memory Problem in AI

Technical discussions highlighted the problem of machine learning systems’ inability to truly “forget” data. Even as regulation moves toward giving individuals the right to be erased, most AI models remain incapable of meaningful “unlearning.” Once data enters a training set, it becomes part of the model’s internal landscape, influencing parameters in ways that can’t easily be undone.

Several participants described this as the next great privacy frontier. Traditional compliance frameworks assume that data can be deleted on demand; in machine learning, deletion is rarely that simple. Retraining from scratch is costly, and partial retraining often leaves residual traces of the original data.

Examples shared included membership inference and model inversion attacks, which allow adversaries to extract fragments of training data (names, phrases, and even full records) from supposedly sealed systems.
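The simplest variant of such an attack is a loss-threshold membership inference test. The sketch below is a toy illustration with synthetic loss values, not any demonstration shown at the summit; it only assumes that a model tends to assign lower loss to data it was trained on.

```python
import numpy as np

def loss_threshold_membership_test(losses, threshold):
    """Guess that examples the model fits unusually well (low loss)
    were part of the training set."""
    return losses < threshold  # True = "probably a training member"

rng = np.random.default_rng(1)

# Toy stand-ins for per-example losses: members vs. unseen data.
train_losses = rng.exponential(scale=0.2, size=1000)   # members
test_losses = rng.exponential(scale=1.0, size=1000)    # non-members

all_losses = np.concatenate([train_losses, test_losses])
threshold = np.median(all_losses)
guessed_member = loss_threshold_membership_test(all_losses, threshold)
truth = np.concatenate([np.ones(1000, dtype=bool), np.zeros(1000, dtype=bool)])

accuracy = np.mean(guessed_member == truth)
print(f"attack accuracy vs. 50% random guessing: {accuracy:.1%}")
```

The wider the gap between a model's behaviour on training data and on unseen data, the better this attack performs, which is why memorisation is a privacy risk and not just an accuracy problem.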

Large language models are particularly vulnerable, as fine-tuning or extended prompting can surface memorised text from their datasets. In one live demonstration, a researcher showed how benign-looking prompts could coax a model into revealing private identifiers embedded deep within its weights.

The concept of machine unlearning is emerging as a response. It aims to design algorithms that can selectively forget specific data points or individuals without requiring full retraining. While promising, current methods are limited by scalability and accuracy. Removing one user’s data might distort predictions for others, especially in small or imbalanced datasets.

PETs can help contain this risk. Differential privacy can reduce memorisation during training; confidential computing can enforce secure runtime environments; synthetic data can decouple models from identifiable individuals altogether. But no single solution resolves the core tension: once knowledge is learned, forgetting it without breaking the system remains technically difficult.
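One way differential privacy reduces memorisation during training, sketched here in plain NumPy rather than any specific DP training library, is to clip each example's gradient contribution and add calibrated Gaussian noise before the update; the parameter values below are illustrative only.

```python
import numpy as np

def dp_gradient_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """DP-SGD-style aggregation: clip each example's gradient to a fixed
    norm, sum, add Gaussian noise scaled to that norm, then average."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    clipped = np.stack(clipped)
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=clipped.shape[1])
    return (clipped.sum(axis=0) + noise) / len(per_example_grads)

rng = np.random.default_rng(7)
grads = rng.normal(size=(32, 10))          # 32 per-example gradients, 10 params
update = dp_gradient_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
print("noisy averaged gradient:", np.round(update, 3))
```

Clipping bounds how much any single individual can influence the model, and the noise hides whatever influence remains, at the cost of slower or less precise learning.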

Confidential Computing and Decentralised Collaboration

Talks and demos showed how secure enclaves are being used to protect sensitive computations in finance, healthcare, and AI deployment pipelines. The discussions went beyond hardware and into architectural thinking: what happens when data protection is guaranteed by code, not contracts.

The shift toward decentralised collaboration was a significant theme. Instead of centralising datasets for analysis, organisations are moving toward secure, distributed models where algorithms travel to the data. These architectures reduce exposure risk while maintaining analytical power, a model increasingly referred to as “compute-in-place.”
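A minimal sketch of the compute-in-place pattern, with hypothetical site objects standing in for secure enclaves or remote data holders: the analysis function travels to each site, and only aggregate results ever leave.

```python
from statistics import mean

class DataSite:
    """Stand-in for a remote data holder (hospital, bank, enclave).
    Raw records never leave this object; only aggregates do."""
    def __init__(self, name, records):
        self.name = name
        self._records = records  # kept local to the site

    def run(self, analysis):
        """Execute a caller-supplied analysis locally, return only its output."""
        return analysis(self._records)

def local_summary(records):
    """The 'travelling' algorithm: returns an aggregate, never raw rows."""
    return {"count": len(records), "mean": mean(records)}

sites = [
    DataSite("site_a", [12.0, 15.5, 11.2, 14.8]),
    DataSite("site_b", [22.1, 19.4, 20.7]),
]

# The coordinator sees only per-site aggregates, then combines them.
summaries = [site.run(local_summary) for site in sites]
total = sum(s["count"] for s in summaries)
pooled_mean = sum(s["mean"] * s["count"] for s in summaries) / total
print(f"pooled mean over {total} records: {pooled_mean:.2f}")
```

In production the same idea is enforced by hardware attestation or federated protocols rather than Python object boundaries, but the architectural shift is identical: code moves, data stays put.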

Participants also contrasted confidential computing with blockchain-based approaches, debating their respective strengths. The emerging view was that hybrid systems, combining verifiable logging with protected computation, may define the next generation of secure data exchange.

This mirrors a larger transition across industries: from reactive compliance to proactive assurance. As one speaker pointed out, “We’re not trying to hide data anymore, we’re trying to use it safely.”

Compliance Engineering

Early in the day, a technical session on AI compliance engineering offered a pragmatic bridge between ethics and implementation: instead of relying on annual audits, build automated assurance directly into data pipelines.

Speakers demonstrated prototype systems where every model executed logs its provenance, data source, and PET configuration, creating a transparent “paper trail” for both regulators and internal teams. This continuous validation model ensures that compliance scales with deployment speed. The message resonated strongly with engineers.
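As a hedged sketch of the idea (not the prototype shown on stage), a decorator can record provenance, data source, and PET configuration for every model execution, producing a machine-readable trail; the function and data-source names below are hypothetical.

```python
import json
import time
import functools

AUDIT_LOG = []  # in practice: an append-only store reviewed by auditors

def audited(data_source, pet_config):
    """Wrap a model run so every execution logs its provenance automatically."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            record = {
                "model_fn": fn.__name__,
                "data_source": data_source,
                "pet_config": pet_config,
                "timestamp": time.time(),
            }
            result = fn(*args, **kwargs)
            record["status"] = "ok"
            AUDIT_LOG.append(record)
            return result
        return wrapper
    return decorator

# Hypothetical model run: the names and config are illustrative only.
@audited(data_source="claims_db_v3", pet_config={"mechanism": "laplace", "epsilon": 1.0})
def run_fraud_model(batch):
    return {"flagged": sum(1 for row in batch if row["amount"] > 10_000)}

run_fraud_model([{"amount": 4_000}, {"amount": 25_000}])
print(json.dumps(AUDIT_LOG, indent=2))
```

Because the trail is generated at execution time rather than reconstructed for an annual audit, compliance evidence scales with deployment speed instead of lagging behind it.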

Generative AI and the Expanding Threat Surface

The security sessions brought a sharper edge to the AI conversation. Participants discussed how generative models are changing the privacy threat landscape, introducing new risks like prompt injection, malicious agents, and automated data exfiltration.

One case study illustrated how attackers can use language models to chain together privacy leaks, crafting adversarial prompts that trick systems into exposing sensitive data or metadata. Data minimisation, federated training, and enclave-based filtering can all reduce exposure.
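Data minimisation, the first of those mitigations, can be as simple as stripping obvious identifiers before text ever reaches a model or a log. The toy sketch below uses a few hypothetical regex patterns; production filters layer many detectors (named-entity recognition, checksums, allow-lists) rather than relying on regexes alone.

```python
import re

# Deliberately simple, illustrative patterns.
REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def minimise(text):
    """Replace likely identifiers with typed placeholders before the text
    is passed to a model or stored."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

prompt = "Refund request from jane.doe@example.com, call +353 1 234 5678."
print(minimise(prompt))  # -> Refund request from [EMAIL], call [PHONE].
```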

Communication and the Human Layer of Privacy

Beyond code and policy, Day 2 returned repeatedly to the question of language. Technical communities, regulators, and product teams often speak past each other. Misalignment over terms like “risk,” “fairness,” or “identifiability” creates unnecessary friction. Several speakers called for a shared lexicon; not new laws, but new clarity.

Training emerged as an underestimated pillar of PET adoption. Many organisations deploy advanced privacy technologies without ensuring that teams understand their implications. Awareness must reach every layer of decision-making, from boardrooms to engineers writing queries.

Another undercurrent was the importance of communication design. When privacy interfaces are opaque, users disengage. PETs can only build trust if their guarantees are both real and understood. Attendees pointed to user-centric transparency, such as dashboards that show how data contributes to aggregate outcomes without revealing individuals, as a small but powerful trust-building tool.

Ultimately, technology will fail if culture doesn’t keep pace. Several participants reflected that the next wave of privacy progress will be sociotechnical: embedding ethical reasoning, communication, and transparency into the very structure of product development.

Bottom Line: Privacy as the Architecture 

If Day 1 established that privacy is infrastructure, Day 2 showed what that means in practice. The sessions collectively outlined a new stage of maturity for PETs, where governance, verification, and collaboration become as critical as algorithms themselves.

The conversations converged on a simple truth: privacy is not slowing AI down; it is shaping how AI will scale responsibly. From differential privacy to confidential computing, from synthetic data to decentralised analytics, PETs are redefining how trust is engineered.

For those who weren’t in Dublin, the message is clear: the frontier of PETs is no longer about invention, but integration. Privacy has become the connective tissue between compliance, innovation, and human agency. The work ahead is as much cultural as technical, and it has already begun.

Tags: eyes-off data summit, ai, anonymisation, confidential computing, data governance, data privacy, pets, pets adoption, regulation, synthetic data