med-mastodon.com is one of the many independent Mastodon servers you can use to participate in the fediverse.
Medical community on Mastodon

Administered by:

Server stats:

365
active users

#echoleak

0 posts0 participants0 posts today
LMG Security<p>No Click. No Warning. Just a Data Leak.</p><p>Think your AI assistant is secure? Think again. The new EchoLeak exploit shows how Microsoft 365 Copilot, and tools like it, can silently expose your sensitive data without a single user interaction. No clicks. No downloads. Just a well-crafted email.</p><p>In this eye-opening blog, we break down how EchoLeak works, why prompt injection is a growing AI threat, and the 5 actions you need to take right now to protect your organization. </p><p>Read now: <a href="https://www.lmgsecurity.com/no-click-nightmare-how-echoleak-redefines-ai-data-security-threats/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">lmgsecurity.com/no-click-night</span><span class="invisible">mare-how-echoleak-redefines-ai-data-security-threats/</span></a></p><p><a href="https://infosec.exchange/tags/AIDataSecurity" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AIDataSecurity</span></a> <a href="https://infosec.exchange/tags/Cyberaware" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Cyberaware</span></a> <a href="https://infosec.exchange/tags/Cyber" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Cyber</span></a> <a href="https://infosec.exchange/tags/SMB" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SMB</span></a> <a href="https://infosec.exchange/tags/Copilot" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Copilot</span></a> <a href="https://infosec.exchange/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://infosec.exchange/tags/GenAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GenAI</span></a> <a href="https://infosec.exchange/tags/EchoLeak" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>EchoLeak</span></a> <a href="https://infosec.exchange/tags/PromptInjection" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PromptInjection</span></a> <a href="https://infosec.exchange/tags/MicrosoftCopilot" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MicrosoftCopilot</span></a> <a href="https://infosec.exchange/tags/Cybersecurity" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Cybersecurity</span></a> <a href="https://infosec.exchange/tags/CISO" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CISO</span></a> <a href="https://infosec.exchange/tags/ITsecurity" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ITsecurity</span></a> <a href="https://infosec.exchange/tags/InfoSec" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>InfoSec</span></a> <a href="https://infosec.exchange/tags/AISecurityRisks" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AISecurityRisks</span></a></p>
LMG Security<p>Can Your AI Be Hacked by Email Alone?</p><p>No clicks. No downloads. Just one well-crafted email, and your Microsoft 365 Copilot could start leaking sensitive data.</p><p>In this week’s episode of Cyberside Chats, <span class="h-card" translate="no"><a href="https://infosec.exchange/@sherridavidoff" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>sherridavidoff</span></a></span> and <span class="h-card" translate="no"><a href="https://infosec.exchange/@MDurrin" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>MDurrin</span></a></span> discuss EchoLeak, a zero-click exploit that turns your AI into an unintentional insider threat. They also reveal a real-world case from LMG Security’s pen testing team where prompt injection let attackers extract hidden system prompts and override chatbot behavior in a live environment.</p><p>We’ll also share:</p><p>• How EchoLeak exposes a new class of AI vulnerabilities<br>• Prompt injection attacks that fooled real corporate systems<br>• Security strategies every organization should adopt now<br>• Why AI inputs need to be treated like code</p><p>🎧 Listen to the podcast: <a href="https://www.chatcyberside.com/e/unmasking-echoleak-the-hidden-ai-threat/?token=90468a6c6732e5e2477f8eaaba565624" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">chatcyberside.com/e/unmasking-</span><span class="invisible">echoleak-the-hidden-ai-threat/?token=90468a6c6732e5e2477f8eaaba565624</span></a> <br>🎥 Watch the video: <a href="https://youtu.be/sFP25yH0sf4" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">youtu.be/sFP25yH0sf4</span><span class="invisible"></span></a></p><p><a href="https://infosec.exchange/tags/EchoLeak" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>EchoLeak</span></a> <a href="https://infosec.exchange/tags/Cybersecurity" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Cybersecurity</span></a> <a href="https://infosec.exchange/tags/AIsecurity" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AIsecurity</span></a> <a href="https://infosec.exchange/tags/Microsoft365" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Microsoft365</span></a> <a href="https://infosec.exchange/tags/Copilot" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Copilot</span></a> <a href="https://infosec.exchange/tags/PromptInjection" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PromptInjection</span></a> <a href="https://infosec.exchange/tags/CISO" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CISO</span></a> <a href="https://infosec.exchange/tags/InsiderThreats" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>InsiderThreats</span></a> <a href="https://infosec.exchange/tags/GenAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GenAI</span></a> <a href="https://infosec.exchange/tags/RiskManagement" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>RiskManagement</span></a> <a href="https://infosec.exchange/tags/CybersideChats" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CybersideChats</span></a></p>
Jukka Niiranen<p>LLM based agents cannot be secured if you 1) give them access to private data, 2) let them read untrusted content, and 3) allow them to communicate with the external world.</p><p>In his new blog post, Simon Willison writes a sharp &amp; easy to understand summary about this "lethal trifecta for AI agents": <a href="https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">simonwillison.net/2025/Jun/16/</span><span class="invisible">the-lethal-trifecta/</span></a></p><p>M365 <a href="https://mstdn.social/tags/Copilot" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Copilot</span></a> hacks like <a href="https://mstdn.social/tags/EchoLeak" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>EchoLeak</span></a> are nasty. But as Simon points out, once you combine different AI tools and <a href="https://mstdn.social/tags/MCP" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MCP</span></a> to build your own agents, securing agents gets even harder.</p>
Guido Stevens<p>Copilot happily executes attack instructions it finds in emails you receive. This bubble is going to burst.</p><p><a href="https://kolektiva.social/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a> <a href="https://kolektiva.social/tags/echoleak" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>echoleak</span></a> <a href="https://kolektiva.social/tags/copilot" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>copilot</span></a></p><p><a href="https://pivot-to-ai.com/2025/06/15/echoleak-send-an-email-extract-secret-info-from-microsoft-office-365-copilot-ai/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">pivot-to-ai.com/2025/06/15/ech</span><span class="invisible">oleak-send-an-email-extract-secret-info-from-microsoft-office-365-copilot-ai/</span></a></p>
Miguel Afonso Caetano<p>"Aim Labs reported CVE-2025-32711 against Microsoft 365 Copilot back in January, and the fix is now rolled out.</p><p>This is an extended variant of the prompt injection exfiltration attacks we've seen in a dozen different products already: an attacker gets malicious instructions into an LLM system which cause it to access private data and then embed that in the URL of a Markdown link, hence stealing that data (to the attacker's own logging server) when that link is clicked.</p><p>The lethal trifecta strikes again! Any time a system combines access to private data with exposure to malicious tokens and an exfiltration vector you're going to see the same exact security issue.</p><p>In this case the first step is an "XPIA Bypass" - XPIA is the acronym Microsoft use for prompt injection (cross/indirect prompt injection attack). Copilot apparently has classifiers for these, but unsurprisingly these can easily be defeated:"</p><p><a href="https://simonwillison.net/2025/Jun/11/echoleak/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">simonwillison.net/2025/Jun/11/</span><span class="invisible">echoleak/</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/CyberSecurity" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CyberSecurity</span></a> <a href="https://tldr.nettime.org/tags/EchoLeak" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>EchoLeak</span></a> <a href="https://tldr.nettime.org/tags/Microsoft" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Microsoft</span></a> <a href="https://tldr.nettime.org/tags/Microsof365Copilot" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Microsof365Copilot</span></a> <a href="https://tldr.nettime.org/tags/ZeroClickVulnerability" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ZeroClickVulnerability</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/PromptInjection" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PromptInjection</span></a> <a href="https://tldr.nettime.org/tags/Markdown" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Markdown</span></a> <a href="https://tldr.nettime.org/tags/Copilot" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Copilot</span></a></p>
Hackread.com<p>A zero-click flaw in <a href="https://mstdn.social/tags/Microsoft365Copilot" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Microsoft365Copilot</span></a>, dubbed <a href="https://mstdn.social/tags/EchoLeak" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>EchoLeak</span></a>, lets attackers steal company data through a single email, no user action needed. AI assistants now pose real risks.</p><p>Read: <a href="https://hackread.com/zero-click-ai-flaw-microsoft-365-copilot-expose-data/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">hackread.com/zero-click-ai-fla</span><span class="invisible">w-microsoft-365-copilot-expose-data/</span></a></p><p><a href="https://mstdn.social/tags/CyberSecurity" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CyberSecurity</span></a> <a href="https://mstdn.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://mstdn.social/tags/ZeroClick" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ZeroClick</span></a> <a href="https://mstdn.social/tags/Vulnerability" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Vulnerability</span></a> <a href="https://mstdn.social/tags/CoPilot" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CoPilot</span></a></p>
Marcel SIneM(S)US<p>Kritische Sicherheitslücke in <a href="https://social.tchncs.de/tags/Microsoft365" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Microsoft365</span></a> <a href="https://social.tchncs.de/tags/Copilot" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Copilot</span></a> zeigt Risiko von KI-Agenten | heise online <a href="https://www.heise.de/news/Kritische-Sicherheitsluecke-in-Microsoft-365-Copilot-zeigt-Risiko-von-KI-Agenten-10441034.html" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">heise.de/news/Kritische-Sicher</span><span class="invisible">heitsluecke-in-Microsoft-365-Copilot-zeigt-Risiko-von-KI-Agenten-10441034.html</span></a> <a href="https://social.tchncs.de/tags/EchoLeak" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>EchoLeak</span></a> <a href="https://social.tchncs.de/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ArtificialIntelligence</span></a></p>