Close Menu
  • Home
  • AI
  • Art & Style
  • Economy
  • Entertainment
  • International
  • Market
  • Opinion
  • Politics
  • Sports
  • Trump
  • US
  • World
What's Hot

Greenland: Denmark is ‘deeply disturbed’ by President Trump’s appointment of special envoy who wants island to become part of US

December 22, 2025

Scottish Premiership: Celtic, Rangers, Hearts, Aberdeen, Motherwell, Dundee United, St Mirren, Kilmarnock live on Sky Sports | Soccer News

December 22, 2025

Paramount WBD stock tender offer: pros and cons

December 22, 2025
Facebook X (Twitter) Instagram
WhistleBuzz – Smart News on AI, Business, Politics & Global Trends
Facebook X (Twitter) Instagram
  • Home
  • AI
  • Art & Style
  • Economy
  • Entertainment
  • International
  • Market
  • Opinion
  • Politics
  • Sports
  • Trump
  • US
  • World
WhistleBuzz – Smart News on AI, Business, Politics & Global Trends
Home » OpenAI says AI browsers can always be vulnerable to prompt injection attacks
AI

OpenAI says AI browsers can always be vulnerable to prompt injection attacks

Editor-In-ChiefBy Editor-In-ChiefDecember 22, 2025No Comments5 Mins Read
Share Facebook Twitter Pinterest LinkedIn Tumblr Telegram Email Copy Link
Follow Us
Google News Flipboard
Share
Facebook Twitter LinkedIn Pinterest Email


OpenAI is working to harden its Atlas AI browser against cyberattacks and acknowledges that prompt injection is a type of attack that manipulates an AI agent to follow malicious instructions hidden in web pages or emails. This is a risk that isn’t going away anytime soon, raising questions about how securely AI agents can operate on the open web.

“As with fraud and social engineering on the web, instant attacks are unlikely to be fully ‘solved’,” OpenAI said in a blog post on Monday, detailing how the company is hardening Atlas’ defenses to counter the constant attacks. The company acknowledged that ChatGPT Atlas’ “Agent Mode” “expands the surface of security threats.”

OpenAI announced its ChatGPT Atlas browser in October, and security researchers have rushed to release a demo showing that you can change the behavior of the underlying browser by writing a few words in a Google Doc. On the same day, Brave published a blog post explaining how indirect prompt injection is an organizational challenge for AI-powered browsers, including Perplexity’s Comet.

OpenAI isn’t the only company to realize that prompt-based injection isn’t going away. Earlier this month, the UK’s National Cyber ​​Security Center warned that prompt injection attacks on generative AI applications “may not be completely mitigated”, leaving websites at risk of falling victim to a data breach. UK government agencies have advised cyber experts to reduce the risk and impact of immediate injections, rather than thinking they can “stop” an attack.

Regarding OpenAI, the company said, “We believe rapid injection is a long-term AI security challenge, and we need to continually strengthen our defenses against it.”

What is the company’s answer to this Sisyphean-like challenge? The company says its proactive, rapid response cycle is showing early promise in helping discover new attack strategies internally before they can be exploited “in the wild.”

This is not entirely different from what competitors like Anthropic and Google claim. This means defenses must be layered and continually stress-tested to combat the persistent risk of prompt-based attacks. For example, recent efforts at Google have focused on architectural and policy-level controls for agent systems.

But what OpenAI does differently is its “LLM-based automated attacker.” The attacker is essentially a bot trained by OpenAI using reinforcement learning to play the role of a hacker looking for a way to secretly send malicious instructions to an AI agent.

Bots can test attacks in a simulation before actually using them, and the simulator shows how the target AI will think and act if it recognizes the attack. The bot can then study that response, fine-tune its attack, and try again and again. In theory, OpenAI’s bots should be able to discover flaws faster than real-world attackers, since insights into the target AI’s internal reasoning are inaccessible to outsiders.

This is a common tactic in AI safety testing. Build an agent to find edge cases and quickly test it in simulation.

“With our (reinforcement learning) training, an attacker can coax an agent into executing a lengthy, sophisticated, and harmful workflow that unfolds over dozens (or even hundreds) of steps,” OpenAI wrote. “We also observed new attack strategies that did not appear in human red teaming operations or external reports.”

Screenshot showing a prompt injection attack on OpenAI browser.
Image credit: OpenAI

In a demo (partially pictured above), OpenAI showed how an automated attacker could sneak a malicious email into a user’s inbox. Later, when the AI ​​agent scanned the inbox, it followed the instructions hidden in the email and sent a resignation message instead of creating an out-of-office reply. However, the company says that after a security update, “Agent Mode” was able to successfully detect the prompt injection attempt and flag the user.

The company says prompt injections are difficult to defend against in a fool-proof manner, but it relies on extensive testing and faster patch cycles to harden systems before they appear in an actual attack.

An OpenAI spokesperson declined to say whether Atlas’ security updates led to a measurable reduction in successful injections, but said the company has been working with third parties to harden Atlas against rapid injections since before its launch.

Rami McCarthy, principal security researcher at cybersecurity firm Wiz, said reinforcement learning is one way to continually adapt to an attacker’s behavior, but it’s only part of the picture.

“A useful way to infer risk in an AI system is to multiply autonomy with access,” McCarthy told TechCrunch.

“Agent browsers tend to be at the difficult end of the spectrum, which is a combination of moderate autonomy and very high access,” McCarthy said. “Many of the current recommendations reflect that trade-off: Restricting login access primarily reduces risk, but requiring review of confirmation requests constrains autonomy.”

These are two of OpenAI’s recommendations to help users reduce their own risks, and a spokesperson said Atlas is also trained to obtain confirmation from users before sending messages or making payments. OpenAI also suggests that users give the agent specific instructions, rather than giving the agent access to their inbox and telling them to “perform the required action.”

According to OpenAI, “wide tolerance makes it easier for hidden or malicious content to impact agents, even when safety measures are in place.”

OpenAI says protecting Atlas users from prompt injections is a top priority, but McCarthy is skeptical about the return on investment for the risk-prone browser.

“For most everyday use cases, agent browsers still don’t provide enough value to justify their current risk profile,” McCarthy told TechCrunch. “Even though that access is what makes them powerful, given their access to sensitive data such as email and payment information, the risks are high. That balance will evolve, but the trade-offs are still very real today.”



Source link

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Editor-In-Chief
  • Website

Related Posts

Alphabet to acquire Intersect Power to avoid energy grid bottlenecks

December 22, 2025

ChatGPT launches year-end reviews like Spotify Wrapped

December 22, 2025

Splat’s app uses AI to turn photos into coloring pages for kids

December 22, 2025
Add A Comment

Comments are closed.

News

Canada appoints new ambassador to US as important trade talks loom | Canada Donald Trump News

By Editor-In-ChiefDecember 22, 2025

The terms of the North American Free Trade Agreement will be renegotiated in 2026 in…

President Trump pauses further offshore wind projects citing national security concerns | Renewable Energy News

December 22, 2025

UPS stumbles over holiday season amid changing trade rules | Trade war

December 22, 2025
Top Trending

OpenAI says AI browsers can always be vulnerable to prompt injection attacks

By Editor-In-ChiefDecember 22, 2025

OpenAI is working to harden its Atlas AI browser against cyberattacks and…

Alphabet to acquire Intersect Power to avoid energy grid bottlenecks

By Editor-In-ChiefDecember 22, 2025

Google’s parent company Alphabet has agreed to acquire data center and clean…

ChatGPT launches year-end reviews like Spotify Wrapped

By Editor-In-ChiefDecember 22, 2025

ChatGPT is releasing its own version of Spotify Wrapped. To wit, the…

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Welcome to WhistleBuzz.com (“we,” “our,” or “us”). Your privacy is important to us. This Privacy Policy explains how we collect, use, disclose, and safeguard your information when you visit our website https://whistlebuzz.com/ (the “Site”). Please read this policy carefully to understand our views and practices regarding your personal data and how we will treat it.

Facebook X (Twitter) Instagram Pinterest YouTube

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Facebook X (Twitter) Instagram Pinterest
  • Home
  • Advertise With Us
  • Contact US
  • DMCA Policy
  • Privacy Policy
  • Terms & Conditions
  • About US
© 2025 whistlebuzz. Designed by whistlebuzz.

Type above and press Enter to search. Press Esc to cancel.