Apple's New On-Device AI Training Method: A Privacy-Focused Evolution or a Familiar Controversy?

Apple has announced plans to implement on-device training for its AI systems using Differential Privacy techniques, similar in spirit to the approach it previously proposed for CSAM detection but now applied in a far less controversial context. Differential Privacy works by adding statistical noise to user data so that nothing can be traced back to an individual, preserving privacy while still allowing Apple to learn the aggregate patterns needed to train its AI models.

Starting with iOS 18.5, Apple will use the technique to improve features such as Genmoji, Image Playground, and Writing Tools by analyzing patterns in user-generated prompts without the underlying data ever leaving the device. The system is opt-in, and users can control their participation through Analytics & Improvements under Settings > Privacy & Security.
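To illustrate the underlying idea (not Apple's actual implementation), here is a minimal Swift sketch of local differential privacy using randomized response: each device perturbs a single yes/no signal before it is reported, so no individual answer is trustworthy on its own, yet the aggregate across many devices remains statistically useful. The function names, the boolean "signal", and the epsilon value are illustrative assumptions.

```swift
import Foundation

/// A minimal sketch of local differential privacy via randomized response.
/// `signal` is a hypothetical on-device observation (e.g. "did a generated
/// prompt pattern appear on this device?"); this is not Apple's API.
func privatizedReport(signal: Bool, epsilon: Double) -> Bool {
    // Report the true bit with probability e^ε / (e^ε + 1),
    // otherwise report its flipped value. This satisfies ε-local DP.
    let p = exp(epsilon) / (exp(epsilon) + 1.0)
    return Double.random(in: 0..<1) < p ? signal : !signal
}

/// The collector never sees raw answers, but it can still estimate the true
/// fraction of "yes" signals across many devices by inverting the noise.
func estimateTrueRate(noisyReports: [Bool], epsilon: Double) -> Double {
    let p = exp(epsilon) / (exp(epsilon) + 1.0)
    let observed = Double(noisyReports.filter { $0 }.count) / Double(noisyReports.count)
    // observed = p * true + (1 - p) * (1 - true)  =>  solve for the true rate
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)
}

// Example: 10,000 simulated devices, 30% of which truly have the signal.
let epsilon = 1.0
let reports = (0..<10_000).map { _ in
    privatizedReport(signal: Double.random(in: 0..<1) < 0.3, epsilon: epsilon)
}
print(estimateTrueRate(noisyReports: reports, epsilon: epsilon)) // ≈ 0.3
```

The key property, which also holds for the more sophisticated mechanisms Apple describes, is that any single report is deniable: it could plausibly be noise, so nothing meaningful about one user can be reconstructed from it.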