Apple's New On-Device AI Training Method: A Privacy-Focused Evolution or a Familiar Controversy?
Apple has announced plans to train its AI systems on-device using Differential Privacy techniques, similar in spirit to the methods it previously proposed for CSAM detection but now applied in a far less controversial context. Differential Privacy works by adding statistical noise to data so that individual contributions cannot be traced back to specific users, preserving privacy while still letting Apple learn aggregate patterns for training its AI models.

Starting with iOS 18.5, Apple will use this approach to improve features such as Genmoji, Image Playground, and Writing Tools by analyzing patterns in user-generated prompts without the underlying data ever leaving the device. The system is opt-in: users can control their participation through the Analytics & Improvements settings under Settings > Privacy & Security.
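To make the privacy mechanism concrete, here is a minimal sketch of one common local differential privacy technique, randomized response, in Python. This is an illustrative example only, not Apple's actual implementation (Apple has published its own, more sophisticated local DP algorithms): each device randomly flips its answer before reporting, so no single report reveals the truth, yet the aggregate frequency can still be estimated after debiasing.

```python
import math
import random

def randomized_response(true_bit: int, epsilon: float = 1.0) -> int:
    """Report a user's 0/1 value with plausible deniability.

    With probability p = e^eps / (e^eps + 1) the true bit is sent;
    otherwise it is flipped. A smaller epsilon means more noise
    and therefore stronger privacy for each individual report.
    """
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return true_bit if random.random() < p else 1 - true_bit

def estimate_frequency(reports: list, epsilon: float = 1.0) -> float:
    """Debias the noisy reports to estimate the true fraction of 1s.

    Since observed = p * f + (1 - p) * (1 - f), we can solve for f
    because the flip probability p is publicly known.
    """
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

# Simulate 100,000 users, 30% of whom truly have the trait;
# the server only ever sees the randomized reports.
random.seed(0)
true_rate = 0.30
reports = [randomized_response(1 if random.random() < true_rate else 0)
           for _ in range(100_000)]
estimate = estimate_frequency(reports)
```

Despite every individual report being noisy, the recovered `estimate` lands close to the true 30% rate, which is the core idea behind learning popular prompt patterns without learning any one user's prompts.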