Making existing models more private after deployment is challenging, especially if they were trained without privacy considerations. One approach is to retrain the model with a differentially private algorithm such as DP-SGD, which clips each example's gradient and adds calibrated noise during training so that no individual record dominates the updates. Another strategy is model distillation, where a 'teacher' model trained without privacy constraints is used to train a 'student' model under differential privacy; the student learns generalised representations from the teacher's outputs rather than from the sensitive data directly. However, if a model has already leaked private information, mitigating that after deployment is complex and often involves more than just technical fixes.
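To make the DP-SGD idea concrete, here is a minimal sketch of a single training step in plain PyTorch: per-example gradients are clipped to a norm bound and Gaussian noise scaled by a noise multiplier is added before the parameter update. The function name `dp_sgd_step`, the hyperparameter values, and the assumption of a standard PyTorch model and loss are illustrative only, and a real deployment would use a vetted library and a proper privacy accountant.

```python
import torch

def dp_sgd_step(model, loss_fn, batch_x, batch_y, lr=0.1,
                max_grad_norm=1.0, noise_multiplier=1.0):
    """One illustrative DP-SGD step: clip per-example gradients, add Gaussian noise."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed_grads = [torch.zeros_like(p) for p in params]

    # Compute and clip each example's gradient individually.
    for x, y in zip(batch_x, batch_y):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        clip_factor = (max_grad_norm / (total_norm + 1e-6)).clamp(max=1.0)
        for acc, g in zip(summed_grads, grads):
            acc += g * clip_factor

    # Add noise calibrated to the clipping bound, then average and take a gradient step.
    batch_size = len(batch_x)
    with torch.no_grad():
        for p, acc in zip(params, summed_grads):
            noise = torch.normal(0.0, noise_multiplier * max_grad_norm, size=p.shape)
            p -= lr * (acc + noise) / batch_size
```

The same clip-and-noise pattern underlies the distillation route as well: the noise is applied to whatever the student learns from (gradients or aggregated teacher votes), so the student never depends too strongly on any single sensitive record.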