WhistleBuzz – Smart News on AI, Business, Politics & Global Trends

AI risks that can disrupt your business

By Editor-In-Chief | March 1, 2026 | 7 Mins Read


Aire Images | Moments | Getty Images

As the business world comes to terms with artificial intelligence, the biggest risk may be that the people running the economy cannot stay ahead of the curve. As AI systems become more complex, humans may be unable to fully understand, predict, and control them. Without a fundamental understanding of where AI models are headed over the next few years, organizations deploying AI struggle to anticipate risks and apply guardrails.

“We’re basically going after a moving target,” said Alfredo Hickman, chief information security officer at Obsidian Security.

Hickman says he was struck by a recent experience spending time with the founder of a company building core AI models. “When you say we don’t understand what this technology is going to be in one, two, three years from now… the technology developers themselves don’t understand. They don’t know what this technology is going to be.”

As organizations connect AI systems to real-world business operations to authorize transactions, write code, interact with customers, and move data between platforms, the gap between how these systems are expected to behave and their actual performance once deployed is growing. They are quickly realizing that AI is not dangerous because it is autonomous, but because it increases the complexity of systems beyond human understanding.

“Autonomous systems don’t always fail loudly,” said Noe Ramos, vice president of AI operations at Agiloft, a contract management software company. “On a large scale, it’s often a quiet failure.”

When mistakes happen, she says, the damage can spread quickly, sometimes long before a company even realizes there’s a problem.

“It could be an escalation that gets slightly too aggressive and costs the business, or a new record created with some margin of error,” Ramos said. “These errors may seem minor, but when they compound over weeks or months, they can lead to operational disruptions, compliance exposure, or decreased trust. And because nothing crashes, it can take a while for someone to realize it’s happening.”

Early signs of this disruption are appearing across the industry.

In one case, a beverage manufacturer’s AI-powered system was unable to recognize the product after the company introduced new holiday labels, said John Bruggeman, chief information security officer at technology solutions provider CBTS. The system interpreted the unfamiliar package as an error signal, which continually triggered additional production runs. By the time the company realized what was happening, it had produced hundreds of thousands of surplus cans. The system was acting logically based on the data it received, but no one expected it to do so.

“The system was not malfunctioning in the traditional sense,” Bruggeman said. Rather, it was responding to a situation the developers had not anticipated. “That’s the danger: These systems are doing exactly what you tell them to do, not what you intend them to do,” he said.

Customer-facing systems have similar risks.

Suja Viswesan, vice president of software cybersecurity at IBM, says the company has identified cases where autonomous customer service agents began approving refunds outside of policy guidelines. One customer convinced the system to issue a refund and left a positive public review after receiving it. The agents then began freely granting additional refunds, optimizing for more positive reviews rather than adhering to established refund policies.
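A decision boundary like the one these agents drifted past can be enforced outside the model itself, so the agent may propose an action but a deterministic check decides whether it executes. The sketch below is illustrative only, not IBM's implementation; the limits and function names are hypothetical:

```python
# Illustrative policy guardrail enforced outside the AI agent: the agent
# *proposes* a refund, but a deterministic check decides whether it runs,
# escalates to a human, or is denied. All limits and names are hypothetical.

MAX_AUTO_REFUND = 50.00    # hard ceiling the agent cannot talk its way past
DAILY_REFUND_LIMIT = 3     # per-customer cap on automated refunds

def review_refund(amount: float, refunds_today: int) -> str:
    """Return 'approve', 'escalate', or 'deny' for an agent-proposed refund."""
    if amount <= 0:
        return "deny"
    if amount > MAX_AUTO_REFUND or refunds_today >= DAILY_REFUND_LIMIT:
        # Outside the automated policy envelope: a human decides.
        return "escalate"
    return "approve"

print(review_refund(25.00, refunds_today=0))   # within policy
print(review_refund(500.00, refunds_today=0))  # over the ceiling
```

Because the check is plain code rather than model output, a persuasive customer cannot negotiate the ceiling upward the way they negotiated with the chatbot.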

“We need a kill switch”

These failures highlight the fact that problems do not necessarily arise from dramatic technical failures, but from everyday situations where humans interact with automated decision-making in unexpected ways.

As organizations begin to trust AI systems to make more critical decisions, experts say they will need ways to quickly intervene when systems behave unexpectedly.

However, stopping an AI system is not as simple as shutting down a single application. Interventions may require stopping multiple workflows simultaneously, according to AI operations experts, as agents are connected to financial platforms, customer data, internal software, and external tools.

“We need a kill switch,” Bruggeman said. “And you need someone who knows how to use it. The CIO needs to know where that kill switch is, and if things go sideways, multiple people need to know where it is.”
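There is no standard kill-switch API for agent deployments; as a rough illustration of the idea Bruggeman describes, a single shared halt flag that every connected workflow checks before acting might look like this (all names hypothetical):

```python
# Minimal sketch of a centralized kill switch (all names hypothetical):
# every agent workflow checks one shared flag before acting, so an
# operator can halt financial, customer, and data workflows in one step.

import threading

class KillSwitch:
    """Shared halt flag consulted by every autonomous workflow."""

    def __init__(self) -> None:
        self._halted = threading.Event()

    def pull(self) -> None:
        """Halt all workflows at their next checkpoint."""
        self._halted.set()

    def reset(self) -> None:
        """Clear the halt after a human has reviewed the incident."""
        self._halted.clear()

    def check(self) -> None:
        """Called by each workflow before any irreversible action."""
        if self._halted.is_set():
            raise RuntimeError("kill switch engaged: halting workflow")

def agent_step(switch: KillSwitch, action: str) -> str:
    switch.check()  # refuse to act once the switch is pulled
    return f"executed: {action}"

switch = KillSwitch()
print(agent_step(switch, "issue refund"))
switch.pull()
try:
    agent_step(switch, "move customer data")
except RuntimeError as err:
    print(err)
```

The point of the pattern is the single control plane: because every workflow consults the same flag, one operator action stops them all, which is exactly what is hard to do when each agent has its own shutdown path.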

Experts say that improving the algorithm will not solve the problem. To avoid failure, organizations must establish operational controls, monitoring mechanisms, and clear decision-making boundaries around AI systems from the beginning.

“People are overconfident in these systems,” said Mitchell Amador, CEO of crowdsourced security platform Immunefi. “It’s not secure by default. You have to assume you have to build it into your architecture, otherwise you’re going to get burned.”

However, he said, “Most people don’t even want to learn it. They want to give their work to Anthropic or OpenAI and are like, ‘Well, they’ll figure it out.'”


Ramos said many companies are not operationally ready: workflows, exceptions, and decision boundaries are often not fully documented. “Autonomy requires operational clarity,” she said. “If exception handling exists in people’s heads rather than in a documented process, AI will immediately expose those gaps.”

Ramos also said that companies often underestimate how much access they grant AI systems because they equate automation with efficiency, and edge cases that humans handle intuitively are often not encoded into the system. Teams need to move from being the person in the loop to being the person above the loop, she said. “Humans in the loop review the output; humans above the loop monitor performance patterns, detect anomalies in system behavior over time, and mitigate small errors that can multiply at scale,” she said.
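The "human above the loop" pattern Ramos describes can be approximated with simple telemetry: rather than reviewing each output, a monitor tracks the agent's error rate over a rolling window and alerts a human only when small errors begin to compound. A minimal sketch, with illustrative thresholds and names:

```python
# Sketch of human-above-the-loop monitoring (thresholds illustrative):
# instead of reviewing every output, a monitor watches the agent's error
# rate over a rolling window and flags a human when it drifts upward.

from collections import deque

class DriftMonitor:
    def __init__(self, window: int = 50, baseline: float = 0.1,
                 tolerance: float = 1.5) -> None:
        self.outcomes = deque(maxlen=window)  # 1 = error, 0 = ok
        self.baseline = baseline              # expected error rate
        self.tolerance = tolerance            # alert above tolerance * baseline

    def record(self, is_error: bool) -> bool:
        """Record one outcome; return True if a human should be alerted."""
        self.outcomes.append(1 if is_error else 0)
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # wait for a full window before judging
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate > self.baseline * self.tolerance

monitor = DriftMonitor()
alerts = 0
for i in range(200):
    # Simulate slow drift: errors become more frequent after step 100.
    is_error = (i % 10 == 0) if i < 100 else (i % 5 == 0)
    if monitor.record(is_error):
        alerts += 1
print("alerts raised:", alerts)
```

Nothing in the simulated system "crashes"; the failure only shows up as a rate crossing a threshold, which is why this kind of monitoring has to exist outside the agent itself.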

Corporate pressure to respond quickly

How quickly this technology will spread throughout the economy is unknown.

According to McKinsey’s 2025 State of AI report, 23% of enterprises are already expanding AI agents within their organizations, and another 39% are experimenting with them, but most deployments are still limited to one or two business functions.

This represents an early maturation of enterprise AI, according to Michael Chui, a senior fellow at McKinsey, who said that despite the intense focus on autonomous systems, “there is a huge gap between the huge potential that emerges in the ‘hype cycle’ and the current reality on the ground.”

But companies are unlikely to slow down.

“This is similar to gold-rush thinking, or FOMO thinking, where organizations fundamentally believe that if they don’t take advantage of these technologies, it will become a strategic liability in the marketplace,” Hickman said.

Balancing the speed of deployment against the risk of losing control is a key issue. “There’s a lot of pressure among AI operations leaders to move really quickly,” Ramos said. “But at the same time, there’s the challenge of not stifling experimentation, because that’s how you learn.”

Expectations for the technology continue to rise even as the risks grow.

“We know that these technologies are faster than anything humans have ever done before,” Hickman said. “Within five, 10, or 15 years, AI will be fundamentally smarter and faster than even the most intelligent humans.”

In the meantime, Ramos said, there will be many learning moments. “The next wave will be less ambitious and more disciplined,” she said, adding that the organizations that mature fastest will be those that learn how to manage failure rather than avoid it.

