What Happens When AI Privacy Fails and How Do We Fix It?
Oct 24, 2025
This week, we explore how AI privacy fails, from legal grey zones to leaky models, and how differential privacy is shaping the next generation of safeguards.
One Article
Using third-party AI often exposes your most sensitive data without meaningful safeguards. This article lays out the risks, from prompt injection to insider access, and explains why security leaders such as JPMorgan's CISO are calling for a shift from policy-based trust to cryptographically enforced control.
One Book
This new book, edited by Ferdinando Fioretto and Pascal Van Hentenryck, offers a comprehensive look at how differential privacy is applied across modern AI systems. It covers everything from core mechanisms and private learning methods to real-world applications, along with insights on fairness, policy, and deployment challenges.
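To make the starting point concrete, here is a minimal sketch of the Laplace mechanism, the canonical building block behind the core mechanisms such a book surveys. The toy dataset, the count query, and the epsilon value are our illustrative choices, not examples taken from the book.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with epsilon-DP by adding Laplace(sensitivity/epsilon) noise."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon  # smaller epsilon -> more noise, stronger privacy
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a count query over a toy dataset.
# A count has sensitivity 1: adding or removing one person changes it by at most 1.
ages = np.array([23, 35, 41, 29, 52, 61, 38])
true_count = np.sum(ages > 30)
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true: {true_count}, private release: {private_count:.2f}")
```

The noise scale grows with sensitivity and shrinks with epsilon, which is the whole trade-off the book's later chapters build on: how much accuracy you give up for a given privacy budget.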
One Use Case
Google’s new VaultGemma is the largest open-source language model trained with differential privacy, and it comes with formal privacy guarantees and no detectable memorisation. Backed by new scaling laws, it shows how private LLMs can be trained efficiently without sacrificing utility.
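Training of this kind typically relies on DP-SGD: clip each example's gradient to a fixed norm, then add Gaussian noise calibrated to that norm before the update. Below is a minimal NumPy sketch of one such step on a toy linear model; the hyperparameters (clip_norm, noise_multiplier, lr) are placeholders for illustration, not VaultGemma's actual settings.

```python
import numpy as np

# One DP-SGD step on a toy linear-regression model, hand-rolled in NumPy.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))                    # one batch of 64 examples
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=64)
w = np.zeros(10)

clip_norm, noise_multiplier, lr = 1.0, 1.1, 0.1  # illustrative values

# 1. Per-example gradients of squared error: g_i = 2 * (x_i . w - y_i) * x_i
residuals = X @ w - y
per_example_grads = 2 * residuals[:, None] * X   # shape (64, 10)

# 2. Clip each example's gradient to L2 norm <= clip_norm
norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)

# 3. Sum, add Gaussian noise scaled to the clip norm, and average into the update
noisy_sum = clipped.sum(axis=0) + rng.normal(
    scale=noise_multiplier * clip_norm, size=w.shape
)
w -= lr * noisy_sum / len(X)
```

Because each example's influence is bounded by the clip norm and masked by the noise, no single training record can leave a detectable trace in the weights, which is what the "no detectable memorisation" claim rests on.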


