Protecting Trained Models in Privacy-Preserving Federated Learning
Summary
This NIST Cybersecurity Insights post discusses protecting trained models in privacy-preserving federated learning, as part of a series produced in collaboration with the UK government’s Responsible Technology Adoption Unit (RTA). Earlier posts in the series cover input privacy techniques for both horizontally and vertically partitioned data; this post turns to protecting the trained model itself, a necessary step toward building complete privacy-preserving federated learning systems.
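For readers new to the topic, federated learning over horizontally partitioned data is commonly aggregated with federated averaging: each client trains locally on its own data and a server combines the resulting weights as a dataset-size-weighted mean. The sketch below is illustrative only (it is not code from the NIST post), and all client values and names are hypothetical:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg: dataset-size-weighted mean of locally trained client weights."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical clients, each contributing locally trained weights.
clients = [np.array([0.2, 0.5]), np.array([0.1, 0.4]), np.array([0.3, 0.6])]
sizes = [100, 200, 50]
print(federated_average(clients, sizes))
```

Input privacy techniques such as secure aggregation aim to let the server compute exactly this weighted mean without ever seeing any individual client’s update.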
IFF Assessment
The post’s focus on privacy-preserving techniques for federated learning helps defenders build AI systems that are both secure and privacy-respecting.
Defender Context
Defenders need to understand the nuances of privacy-preserving federated learning to properly protect sensitive data used in AI model training. Watch for evolving standards and best practices in this area; current trends include broader adoption of federated learning techniques and growing regulatory scrutiny of data privacy.
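Protecting a trained model typically means bounding what any single training record can reveal, for example via differential privacy. The following is a minimal sketch of the clip-and-noise step at the heart of DP-SGD-style training, assuming numpy; the function name and all parameter values are illustrative assumptions, not drawn from the NIST post:

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip an update to a bounded L2 norm, then add Gaussian noise
    calibrated to that bound (the Gaussian mechanism used in
    DP-SGD-style training)."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

update = np.array([0.8, -1.3, 0.4])  # hypothetical gradient/model update
print(privatize_update(update))
```

Clipping bounds each record’s influence on the update; the added noise then masks that bounded contribution, which is what limits attacks such as membership inference against the released model.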