EchoLeak: What It Is, Why It Matters, and Why You Can Still Trust AI

by | Jun 16, 2025 | News

The excitement around AI is plain to see, but it is tempered by an understandable caution.  Many readers will be using AI to work smarter—from writing emails to analysing reports.

But the caution is appropriate.  Just like any powerful tool, AI needs to be used safely. That’s where the recent “EchoLeak” story comes in.

What Is EchoLeak?

EchoLeak is the name given to a newly discovered security flaw in Microsoft 365 Copilot, an AI assistant used by many businesses. What made it unusual—and concerning—is that it was a “zero-click” vulnerability. That means hackers didn’t need you to click on anything. Instead, they could hide a malicious message in something like an email, and the AI could accidentally leak sensitive information without you ever knowing 

The good news? Microsoft quickly fixed the issue, and there’s no evidence that anyone was harmed by it 

Why It Matters

EchoLeak is a wake-up call. It shows that as AI becomes more integrated into our daily work, we need to think about AI security just like we do with passwords, firewalls, and antivirus software. It’s not just about what AI can do—it’s about how we protect it from being misused.

So, Is AI Still Safe?

Yes—when used responsibly. Here’s why you can still trust AI:

  • The flaw was found and fixed quickly. That’s a sign the system works. Researchers spotted the issue, reported it, and Microsoft patched it before it could cause harm.
  • AI doesn’t act on its own. It follows instructions. EchoLeak worked by tricking the AI with hidden commands. With better safeguards, these tricks can be blocked.
  • Security is evolving. Just like we’ve learned to protect our phones and laptops, we’re now learning how to protect AI tools too.

What Can You Do?

If you’re a business leader or team member using AI tools like Microsoft Copilot, here are a few simple steps to stay safe:

  • Keep your software updated. Most security fixes come through updates—don’t ignore them.
  • Be cautious with unknown content. Even if you don’t click, AI might read it. Treat suspicious emails or documents with care.
  • Ask your IT team about AI security. Make sure your organization is aware of how AI is being used and protected.

Final Thought

AI is here to stay—and that’s a good thing. It can help us work faster, smarter, and more creatively. EchoLeak reminds us that with great power comes great responsibility. But with the right precautions, AI can be not just powerful, but safe too.

Key Highlights:

  • What is EchoLeak? EchoLeak is a critical vulnerability discovered by Aim Security. It allows attackers to exfiltrate sensitive corporate data from Microsoft 365 Copilot without any user interaction—no clicks, downloads, or warnings.
  • How it works: A single maliciously crafted email containing specific markdown syntax can silently trigger Copilot to:
    • Parse the email in the background.
    • Follow hidden prompts.
    • Access internal files (emails, Teams chats, OneDrive).
    • Send confidential data to an attacker’s server.
  • Technical Exploit: The attack exploits:
    • Copilot’s ability to process both trusted internal and untrusted external data.
    • An open redirect vulnerability in Microsoft’s Content Security Policy (CSP).
    • A flaw classified as an LLM Scope Violation, where AI is tricked into accessing data beyond its intended scope.
  • Security Implications: Experts warn this is a new class of AI threat:
    • Traditional defences like DLP (Data Loss Prevention) may not be effective.
    • AI’s contextual understanding can be weaponised.
    • Enterprises must adopt real-time behavioural monitoring and agent-specific threat modelling.
  • Microsoft’s Response: Microsoft has patched the vulnerability and confirmed that no customers were affected and no real-world attacks occurred.
  • Broader Impact: The flaw highlights systemic risks in AI systems, especially those using Retrieval-Augmented Generation (RAG). It underscores the need for a new AI security paradigm, especially in sensitive sectors like finance, healthcare, and defence.