FAQs
Differential Privacy Basics
How Can Existing Models Be Made More Private After Deployment?

Making an existing model more private after deployment is challenging, especially if it was trained without privacy considerations. One approach is to retrain the model with a differentially private algorithm such as DP-SGD, which clips each example's gradient and adds calibrated noise during training so that no single record can dominate the learned parameters. Another strategy is model distillation, where a 'teacher' model trained without privacy constraints is used to train a 'student' model under differential privacy; the student learns generalised representations without directly accessing the sensitive data. However, if a model has already leaked private information, mitigating this after deployment is complex and often involves more than just technical solutions.
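
To make the retraining option concrete, here is a minimal sketch of the core DP-SGD update (per-example gradient clipping followed by Gaussian noise) in plain PyTorch. The model, data, and the CLIP_NORM and NOISE_MULTIPLIER values are illustrative placeholders, not part of any particular library's API; in practice you would use a maintained implementation such as Opacus and calibrate the noise to a target (epsilon, delta) budget with a privacy accountant.

```python
import torch
from torch import nn

# Illustrative toy setup: a small linear model on random data.
model = nn.Linear(10, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

CLIP_NORM = 1.0         # per-example gradient clipping bound C (assumed value)
NOISE_MULTIPLIER = 1.0  # sigma; would be calibrated to the (epsilon, delta) target

def dp_sgd_step(batch_x, batch_y):
    """One DP-SGD step: clip each example's gradient, sum, add noise, average."""
    summed_grads = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(batch_x, batch_y):
        model.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        # Scale this example's gradient so its total L2 norm is <= CLIP_NORM.
        total_norm = torch.sqrt(sum(g.norm() ** 2 for g in grads))
        scale = torch.clamp(CLIP_NORM / (total_norm + 1e-6), max=1.0)
        for acc, g in zip(summed_grads, grads):
            acc += g * scale
    # Add Gaussian noise scaled to the clipping bound, then average over the batch.
    batch_size = len(batch_x)
    for p, acc in zip(model.parameters(), summed_grads):
        noise = torch.normal(0.0, NOISE_MULTIPLIER * CLIP_NORM, size=p.shape)
        p.grad = (acc + noise) / batch_size
    optimizer.step()

# Example usage with random data:
x = torch.randn(32, 10)
y = torch.randn(32, 1)
dp_sgd_step(x, y)
```

The per-example loop above makes the mechanism explicit but is slow; library implementations vectorise the per-sample gradient computation and track the cumulative privacy loss across steps for you.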

