By Ziko, reporter for The LeadWay Tech.
OpenAI has successfully fixed a critical security vulnerability in its ChatGPT Deep Research agent that could have allowed malicious actors to steal sensitive data from users’ Gmail accounts. The flaw was discovered by the cybersecurity firm Radware and has been patched before any known exploitation.
The “ShadowLeak” Vulnerability Explained
The vulnerability, which Radware researchers named “ShadowLeak,” was a zero-click, service-side attack. This means it could have been exploited without any action from the user, such as clicking a malicious link. The attack worked by leveraging an indirect prompt injection technique, where hidden commands were embedded in an email’s HTML code using tricks like white-on-white text or tiny fonts.
SEE ALSO The New Brand Growth Formula: A Data-Driven Culture Fueled By AI
Here’s how it could have unfolded:
- An attacker would send an innocent-looking email containing the hidden instructions.
- A user who had connected their Gmail account to ChatGPT’s Deep Research agent would ask the chatbot to perform a task, such as “summarize today’s emails.”
- The agent, in processing the email, would read the hidden commands and, without user confirmation, be instructed to extract personal data from the inbox (e.g., names, addresses) and send it to an attacker-controlled server.
- Crucially, because the data exfiltration happened on OpenAI’s cloud infrastructure and not on the user’s device, the attack would have been invisible to local or corporate security defenses.
Radware demonstrated that while the vulnerability was not easy to exploit, it was a credible threat that could have been used to compromise both personal and corporate accounts.
SEE ALSO Market Expansion: South Africa’s Float Raises $2.6 Million to Boost Card-Linked Instalment Platform
OpenAI’s Swift Response
Upon being notified of the vulnerability through its bug bounty program in June, OpenAI moved quickly to address the issue. The company confirmed the vulnerability and released a patch in early September, and has stated that it found no evidence of real-world abuse.
OpenAI’s spokesperson confirmed its commitment to user safety and stated that it encourages adversarial testing from security researchers as it helps strengthen the platform against future threats like prompt injections. The incident serves as a significant example of how AI agents, while powerful, also introduce new and complex security challenges that require continuous vigilance from both developers and users.

