Apple's New On-Device AI Training Method: A Privacy-Focused Evolution or a Familiar Controversy?

Apple has announced plans to implement on-device training for its AI systems using Differential Privacy, a technique similar in spirit to the approach it previously proposed for CSAM detection, but applied here in a far less controversial context. Differential Privacy introduces statistical noise into user data so that results cannot be traced back to individual users, while still allowing Apple to learn the aggregate trends it needs to train its AI models.

Starting with iOS 18.5, the technique will be used to improve features such as Genmoji, Image Playground, and Writing Tools by analyzing patterns in user-generated prompts without the underlying data ever leaving the device. The system is opt-in, and users can control their participation under Analytics & Improvements in Settings > Privacy & Security.
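To make the idea concrete, here is a minimal sketch of local differential privacy using randomized response. This is not Apple's actual implementation (Apple has not published the specifics of this system); the function names, the epsilon value, and the "prompt pattern" signal are illustrative assumptions only. The point is that each device randomly flips its answer before reporting, so no single report is trustworthy on its own, yet the true rate can still be estimated across many devices.

```swift
import Foundation

// Sketch of local differential privacy via randomized response.
// Assumption: each device reports a single boolean signal, e.g.
// "did this user's prompt match a given pattern?"

/// With probability p = e^ε / (e^ε + 1) the true value is sent;
/// otherwise it is flipped, giving each user plausible deniability.
func randomizedResponse(trueValue: Bool, epsilon: Double) -> Bool {
    let p = exp(epsilon) / (exp(epsilon) + 1.0)
    return Double.random(in: 0..<1) < p ? trueValue : !trueValue
}

/// Aggregating many noisy reports lets the collector estimate the
/// population-level rate without learning any individual's answer.
func estimateTrueRate(noisyReports: [Bool], epsilon: Double) -> Double {
    let p = exp(epsilon) / (exp(epsilon) + 1.0)
    let observed = Double(noisyReports.filter { $0 }.count) / Double(noisyReports.count)
    // Invert the randomization: observed = p * t + (1 - p) * (1 - t)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)
}

// Simulate 10,000 devices where 30% actually used the pattern.
let epsilon = 2.0
let reports = (0..<10_000).map { _ in
    randomizedResponse(trueValue: Double.random(in: 0..<1) < 0.3, epsilon: epsilon)
}
print("Estimated rate:", estimateTrueRate(noisyReports: reports, epsilon: epsilon))
```

Running the simulation prints an estimate close to 0.3 even though roughly one report in eight is deliberately wrong at ε = 2, which is the trade-off differential privacy makes between individual privacy and aggregate accuracy.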