
ChatGPT bug fixed without fanfare: OpenAI quietly addresses flaw that could have exposed Gmail data

OpenAI has patched a security flaw in ChatGPT's Deep Research tool that could have exposed confidential Gmail data. Details inside.


OpenAI has confirmed that it has fixed a security flaw in ChatGPT's Deep Research feature. The vulnerability, uncovered by cybersecurity firm Radware, could have allowed cybercriminals to access users' email accounts, according to Radware's director of threat intelligence, Pascal Geenens.

The Deep Research feature, available exclusively to paying ChatGPT subscribers, lets the AI pull information from a user's email with their permission. The flaw, however, allowed hackers to extract sensitive information from both personal and corporate email accounts without the victim ever interacting with a malicious email.

In a statement, a company spokesperson emphasised that user safety remains a top priority at OpenAI. The company said it encourages adversarial testing by researchers because it helps strengthen the platform against future threats.

Ashish Singh, the Chief Copy Editor at the platform, has been working with tech jargon since 2020. Prior to joining OpenAI, Singh worked for Times Internet and Jagran English. When not policing commas, he can be found fueling his gadget habit with coffee, strategising his next virtual race, or plotting a road trip to test the latest in-car tech.

Radware's discovery of the flaw in ChatGPT's Deep Research agent underscores the potential security risks that agentic AI systems pose.

OpenAI's quick response to the issue is commendable, and there is currently no evidence of real-world exploitation of the vulnerability. The company's commitment to user safety and its encouragement of adversarial testing demonstrate a proactive approach to maintaining a secure platform.
