The Risks of Including Personal Details in AI Chats
Security researchers have demonstrated a technique for covertly instructing AI chatbots to harvest personal data from a conversation and upload it to an attacker-controlled server, raising fresh privacy and security concerns around AI chats. The researchers tested the method on two large language models: LeChat, from French AI company Mistral, and the Chinese chatbot ChatGLM. In the attack, a user is handed a seemingly helpful prompt that secretly contains malicious instructions, obfuscated as gibberish that only the AI can interpret. Experts warn that as more people adopt AI assistants and grant them greater authority over their activities, such attacks are likely to become more widespread.
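One reason these obfuscated payloads are detectable in principle is statistical: instruction text that reads as gibberish to a human tends to look very different from natural prose at the character level. The sketch below is a toy heuristic (not the researchers' actual method, and the threshold and sample strings are illustrative assumptions) that flags prompt segments whose character-level Shannon entropy is unusually high for natural language.

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Bits per character of the text's empirical character distribution."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_obfuscated(segment: str, entropy_threshold: float = 4.5) -> bool:
    """Toy heuristic: English prose typically measures around 4 bits/char
    or less at the character level; dense symbol-mix gibberish runs higher.
    The 4.5 threshold is an illustrative assumption, not a tuned value."""
    if len(segment) < 20:  # too short for a stable entropy estimate
        return False
    return shannon_entropy(segment) > entropy_threshold

plain = "Please summarize my meeting notes from yesterday."
gibberish = "Zq9#xT!mK2@vLp8$Wc3&Yb7*Nd4^Hf6%Js1(Rg5)Ue0-Ao"

print(looks_obfuscated(plain))      # False
print(looks_obfuscated(gibberish))  # True
```

A real defense would need far more than an entropy cutoff (attackers can pad payloads with normal-looking text), but the example shows why "gibberish only understandable by the AI" is a meaningful red flag worth screening for before a prompt reaches the model.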