Microsoft Uncovers AI Chatbot Flaw Exposing Private Conversations
Your chats with AI assistants like ChatGPT or Google Gemini may not be fully private. Microsoft has uncovered a serious security flaw that exposes the topics of your conversations. Researchers call it “Whisper Leak.” It doesn’t break encryption; instead, it exploits information that encryption cannot hide. The flaw affects almost every large language model (LLM) tested.
How the Whisper Leak Works
AI chats are protected by Transport Layer Security (TLS), the same system used in online banking. TLS hides what you say, but it doesn’t hide how your data moves. Microsoft researchers found that attackers could analyze metadata, such as packet sizes and timing, to guess what you’re talking about. As a result, even encrypted chats can unintentionally reveal sensitive topics.
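The core idea can be illustrated with a small sketch. Encryption preserves the approximate length of each record it protects, so when an LLM streams its reply token by token, a passive observer still sees the size rhythm of the response. Everything below is illustrative (the overhead constant and the sample replies are invented, not taken from Microsoft’s research):

```python
# Sketch: encryption hides content but not size. We model a streamed
# LLM reply as a sequence of text chunks; "encrypting" each chunk adds
# a fixed overhead but preserves its length, so an on-path observer
# still sees a distinctive size pattern per reply.

TLS_OVERHEAD = 29  # hypothetical per-record overhead in bytes

def observed_sizes(chunks):
    """Ciphertext record sizes a passive network observer would see."""
    return [len(c.encode()) + TLS_OVERHEAD for c in chunks]

reply_a = ["Money", " laundering", " typically", " involves", " three", " stages"]
reply_b = ["Hi", "!", " How", " can", " I", " help", "?"]

# The two replies produce visibly different size sequences even though
# their contents are encrypted.
print(observed_sizes(reply_a))
print(observed_sizes(reply_b))
```

The point of the sketch is that the size sequence is a fingerprint of the plaintext even when the plaintext itself is unreadable.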
Testing the Flaw
In a study posted on the arXiv preprint server, Microsoft tested 28 AI models. The researchers created questions about a sensitive topic, such as money laundering, and mixed them with random questions. They then trained another AI to recognize the topic from the data’s rhythm alone. The program identified sensitive topics with up to 98% accuracy.
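The attack setup can be sketched in a few lines: a classifier that never sees plaintext, only sequences of packet sizes, learns to separate sensitive traces from random ones. The synthetic traces and the simple 1-nearest-neighbour rule below are stand-ins for Microsoft’s actual models and data; the numbers are invented for illustration:

```python
# Sketch: classify conversation topic from packet-size traces alone.
# Assumption (for illustration only): sensitive answers tend to be
# longer and use larger chunks than small-talk answers.
import random

random.seed(0)

def make_trace(sensitive):
    """Generate a synthetic sequence of packet sizes for one reply."""
    n = random.randint(40, 60) if sensitive else random.randint(10, 25)
    base = 120 if sensitive else 60
    return [base + random.randint(-20, 20) for _ in range(n)]

def features(trace):
    # Only metadata: how many packets, and their average size.
    return (len(trace), sum(trace) / len(trace))

def nearest_label(trace, train):
    fx = features(trace)
    best = min(train, key=lambda item: sum(
        (a - b) ** 2 for a, b in zip(features(item[0]), fx)))
    return best[1]

train = [(make_trace(s), s) for s in [True, False] * 50]
test = [(make_trace(s), s) for s in [True, False] * 25]
accuracy = sum(nearest_label(t, train) == label for t, label in test) / len(test)
print(f"accuracy: {accuracy:.2f}")
```

On these artificially well-separated traces the classifier is near-perfect; the striking finding in the paper is that real traffic is separable enough for up to 98% accuracy.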
The Call for Stronger Protections
Microsoft tested three defense methods, but none completely stopped the leak. The issue isn’t with TLS encryption itself but with how data packets reveal hidden clues. Experts say AI providers must fix metadata exposure to protect users. In conclusion, this discovery is a wake-up call. As AI becomes part of daily life, ensuring private, secure conversations is more important than ever.
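One commonly discussed class of defenses is padding: rounding every encrypted record up to a fixed bucket size so different replies produce identical size sequences. A minimal sketch follows (the bucket size is an illustrative choice, not a recommendation from the paper); note that padding masks sizes but not timing, which is one reason no single defense fully closed the leak:

```python
# Sketch: pad every record up to a multiple of BUCKET bytes so an
# observer sees uniform sizes regardless of the underlying reply.
BUCKET = 256  # illustrative bucket size in bytes

def padded_size(n):
    """Size on the wire after rounding n up to the next bucket."""
    return ((n + BUCKET - 1) // BUCKET) * BUCKET

sizes_a = [37, 180, 44, 251]   # hypothetical record sizes, topic A
sizes_b = [12, 200, 90, 130]   # hypothetical record sizes, topic B

print([padded_size(n) for n in sizes_a])  # every record padded to 256
print([padded_size(n) for n in sizes_b])  # now indistinguishable from above
```

The trade-off is bandwidth: every reply costs extra bytes, and packet counts and inter-packet timing can still leak information.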