OpenAI on Monday released new data showing that business usage of its AI tools has skyrocketed over the past year, reporting that ChatGPT message volume has increased eightfold since November 2024 and that employees are saving up to an hour each day. The findings come a week after CEO Sam Altman sent a “Code Red” memo to the company about Google’s competitive threats.
The timing underscores OpenAI’s efforts to reposition itself as a leader in enterprise AI even as it faces increasing pressure. Nearly 36% of U.S. businesses are ChatGPT Enterprise customers, compared to 14.3% for Anthropic, according to the Ramp AI Index, but the majority of OpenAI’s revenue still comes from consumer subscriptions, a foundation threatened by Google’s Gemini. OpenAI also has to compete with rival AI company Anthropic, whose revenue comes primarily from B2B sales, as well as a growing number of open-weight model providers courting enterprise customers.
The AI giant is committing $1.4 trillion to infrastructure efforts over the next few years, making enterprise growth integral to its business model.
“Consumers are really important from an economic growth perspective,” Ronnie Chatterjee, chief economist at OpenAI, said at a press conference. “But when you look at historically transformative technologies like the steam engine, the biggest economic benefits actually come when companies adopt and scale these technologies.”
New research from OpenAI suggests not only growing adoption in large enterprises, but also deepening integration into workflows. It’s not just that employees are sending more messages: organizations using OpenAI’s API (developer interface) are consuming 320 times more “inference tokens” than a year ago, suggesting that companies are using AI to solve more complex problems. Alternatively, they may simply be experimenting with new technologies and burning through tokens without necessarily gaining long-term value.
Rising inference-token consumption, which correlates with increased energy usage, can be costly for businesses and may not be sustainable in the long term. TechCrunch asked OpenAI about enterprise budget allocation for AI and the sustainability of this growth rate.

Beyond raw usage metrics, OpenAI says companies are also changing how they deploy its tools. The report found that the use of custom GPTs, which companies use to codify internal knowledge into assistants and automate workflows, has soared 19x this year and now accounts for 20% of corporate messages. OpenAI pointed to digital banking customer BBVA, which said it regularly uses more than 4,000 custom GPTs.
“It shows how much people can really take advantage of this powerful technology and start customizing it into something that works for them,” Brad Lightcap, OpenAI’s chief operating officer, said in a briefing.
According to OpenAI, these integrations have resulted in significant time savings: participants reported saving 40 to 60 minutes per day using OpenAI’s enterprise products. However, those figures may not account for time spent learning the system, prompting the AI, or revising its output.
The report found that corporate employees are also increasingly leveraging AI tools to augment their capabilities. Three-quarters of those surveyed said that AI has enabled them to do things they couldn’t do before, such as technical tasks. OpenAI reported a 36% increase in coding-related messages outside of engineering, IT, and research teams.
While OpenAI has reaffirmed the idea that its technology is democratizing access to skills, it’s important to note that more vibe coding can lead to more security vulnerabilities and other flaws. When asked about this, Lightcap pointed to Aardvark, OpenAI’s recently released agentic security researcher, as a potential way to detect bugs, vulnerabilities, and exploits. The tool is currently in private beta.

OpenAI’s report also found that even the most active ChatGPT Enterprise users aren’t using the most advanced tools available, including data analysis, reasoning, and search. Lightcap said in the briefing that he believes this is because fully implementing AI systems requires a shift in mindset and deeper integration with an enterprise’s data and processes. Adoption of advanced features will take time, he said, as companies restructure their workflows and come to understand what is possible.
Lightcap and Chatterjee also highlighted the report’s finding that “frontier” workers are using more tools and saving time more frequently than “laggard” workers, creating a “widening gap in AI adoption.”
“There are still companies that think of these systems as a piece of software that I can buy and give to my team, and that’s kind of the end of it,” Lightcap said. “And some companies are really starting to embrace this, almost as an operating system. It’s basically replatforming a lot of what they do.”
OpenAI executives are certainly feeling the pressure of the company’s $1.4 trillion infrastructure effort, but they frame the adoption gap as an opportunity for laggards to catch up. For workers who are training AI systems to replicate their own work, “catching up” can feel more like a countdown.
