Differential Privacy Basics

How Can Existing Models Be Made More Private After Deployment?

Making an existing model more private after deployment is challenging, especially if it was trained without privacy considerations. One approach is to retrain the model with a differentially private optimiser such as DP-SGD, which clips each example's gradient and adds calibrated noise during training so that no single individual's data can dominate the learned parameters. Another strategy is model distillation, where a 'teacher' model (trained without privacy constraints) supervises a 'student' model under differential privacy; the student learns generalised representations without ever directly accessing the sensitive data. However, if a model has already leaked private information, mitigating that after deployment is complex and often requires more than purely technical solutions.
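
To make the DP-SGD idea concrete, here is a minimal single-step sketch in PyTorch. Everything in it (the model, data, and hyperparameters) is an illustrative placeholder, not code from any particular library or pipeline:

```python
import torch
import torch.nn as nn

# Toy setup: a linear classifier on random data (placeholders only).
torch.manual_seed(0)
model = nn.Linear(10, 2)
loss_fn = nn.CrossEntropyLoss()
X, y = torch.randn(32, 10), torch.randint(0, 2, (32,))

clip_norm = 1.0         # per-example gradient norm bound C
noise_multiplier = 1.0  # noise std is noise_multiplier * C
lr = 0.1

# One DP-SGD step: compute each example's gradient separately, clip its
# L2 norm to at most C, and accumulate the clipped gradients.
grad_sums = [torch.zeros_like(p) for p in model.parameters()]
for xi, yi in zip(X, y):
    model.zero_grad()
    loss = loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0))
    loss.backward()
    total_norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in model.parameters()))
    scale = min(1.0, clip_norm / (float(total_norm) + 1e-6))
    for g_sum, p in zip(grad_sums, model.parameters()):
        g_sum.add_(p.grad, alpha=scale)

# Noisy averaged update: the Gaussian noise masks any one example's
# contribution to the parameter change.
with torch.no_grad():
    for g_sum, p in zip(grad_sums, model.parameters()):
        noise = torch.normal(0.0, noise_multiplier * clip_norm, size=p.shape)
        p.add_((g_sum + noise) / len(X), alpha=-lr)
```

In practice you would not loop over examples like this: libraries such as Opacus (PyTorch) and TensorFlow Privacy compute per-example gradients efficiently and also track the cumulative privacy budget (epsilon, delta) across training steps.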
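
The teacher-student idea can likewise be made concrete with a PATE-style noisy vote, in which an ensemble of teachers trained on the sensitive data labels each example for the student only through a noisy aggregate. The sketch below is a toy illustration; the vote counts, class count, and epsilon are all made-up values:

```python
import numpy as np

rng = np.random.default_rng(0)

num_classes = 3
teacher_votes = np.array([0, 0, 1, 0, 2])  # each teacher's predicted class

# Histogram of teacher votes for one unlabelled example.
vote_counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)

# If one teacher's training data changes, its vote can flip, shifting two
# counts by one each (L1 sensitivity 2). Laplace noise of scale 2/epsilon
# therefore makes the released histogram epsilon-DP for this one query.
epsilon = 1.0
noisy_counts = vote_counts + rng.laplace(0.0, 2.0 / epsilon, size=num_classes)

# The student trains on the argmax label, which is post-processing and
# consumes no additional privacy budget.
student_label = int(np.argmax(noisy_counts))
print(student_label)
```

Note that each labelled example spends part of the overall privacy budget, so the total cost composes across all the queries the student makes.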
