Money was no object for the AI industry in early 2025. By late 2025, the mood had started to shift.
OpenAI raised $40 billion at a $300 billion valuation. Safe Superintelligence and Thinking Machines Lab each raised $2 billion rounds of their own before shipping a single product. Even first-time founders are raising money on a scale that once belonged only to Big Tech.
These astronomical investments were followed by equally incredible spending. Meta spent nearly $15 billion to lock up Scale AI CEO Alexandr Wang and untold sums more to poach talent from other AI labs. Meanwhile, the biggest AI companies have promised nearly $1.3 trillion in future infrastructure spending.
The first half of 2025 matched the previous year’s enthusiasm and investor appetite. But the mood has changed in recent months, delivering a kind of vibe check. Extreme optimism about AI, and the wild valuations that come with it, is still alive and well. That optimism is now tempered, though, by concerns about a bursting AI bubble, user safety, and whether technological progress can be sustained at the current pace.
The days of unquestioning acceptance and celebration of AI are coming to an end, at least for now. With that comes more scrutiny and more questions. Can AI companies maintain their speed? Does scaling still require billions of dollars in the post-DeepSeek era? Are there business models that can recoup even a fraction of these multibillion-dollar investments?
We have been there every step of the way, and the year’s most popular stories tell the tale: an industry that promised to reshape reality itself is now facing a reality check.
How this year started

The largest AI labs have grown even bigger this year.
In 2025 alone, OpenAI raised $40 billion in a SoftBank-led round at a post-money valuation of $300 billion. The company is also striking massive compute deals with partners such as Amazon, and is reportedly in talks to raise $100 billion at a valuation of $830 billion. That would bring OpenAI closer to the $1 trillion valuation it is reportedly targeting for an IPO next year.
OpenAI rival Anthropic has raised $16.5 billion across two rounds this year, with the most recent drawing participation from heavyweights such as Iconiq Capital, Fidelity, and the Qatar Investment Authority and pushing its valuation to $183 billion. (CEO Dario Amodei confessed to staff in a leaked memo that he was “not thrilled” about taking money from the authoritarian Gulf state.)
And Elon Musk’s xAI raised at least $10 billion this year after acquiring X, the social media platform formerly known as Twitter that Musk also owns.
Smaller startups have gotten their share of hype from deep-pocketed investors, too.
Thinking Machines Lab, the startup from former OpenAI CTO Mira Murati, secured a $2 billion seed round at a $12 billion valuation despite sharing little about its product plans. Vibe-coding startup Lovable earned its unicorn horn just eight months after launch with a $200 million Series A; this month, Lovable raised another $330 million at a post-money valuation of nearly $7 billion. And no one can leave out Mercor, the AI recruiting startup that raised $450 million across two rounds this year and was recently valued at $10 billion.
These absurdly high valuations continue to occur despite modest adoption numbers and severe infrastructure constraints, raising concerns about an AI bubble.
Build, baby, build

For the biggest companies, these numbers don’t come out of nowhere: justifying such valuations requires building out a staggering amount of infrastructure.
As a result, a circular dynamic has taken hold. Money raised to fund compute is increasingly tied to deals in which the same money flows back into chips, cloud contracts, and energy, as seen in OpenAI’s infrastructure financing with Nvidia. In practice, the lines between investment and customer demand are blurring, raising concerns that the AI boom is propped up by a circular economy rather than sustainable demand.
Some of this year’s biggest deals that fueled the infrastructure boom include:
- Stargate, the joint venture between SoftBank, OpenAI, and Oracle, which involves up to $500 billion to build out AI infrastructure.
- Alphabet’s $4.75 billion acquisition of energy and data center infrastructure provider Intersect, which came as the company announced in October that it plans to increase its compute spending by up to $93 billion in 2026.
- Meta’s accelerating data center expansion, with projected capital expenditures reaching $72 billion in 2025 as it races to secure enough compute to train and run next-generation models.
But cracks are starting to show. Private credit firm Blue Owl Capital recently pulled out of a $10 billion Oracle data center plan tied to OpenAI capacity, highlighting just how fragile some of these capital stacks are.
Whether all that spending is ultimately realized is another matter. Projects in some areas have already been delayed by grid constraints, rising construction and power costs, and growing opposition from residents and policymakers, including calls from figures like Sen. Bernie Sanders to rein in data center expansion.
Although investments in AI remain huge, infrastructure realities are starting to temper the hype.
Resetting expectations

In 2023 and 2024, each major model release felt like a revelation, with new features and new reasons to fall for the hype. This year, that magic is gone, and nothing captured that shift better than OpenAI’s rollout of GPT-5.
The release made sense on paper, but it didn’t have the punch of previous leaps like GPT-4 and GPT-4o. A similar pattern played out across the industry, as improvements from LLM providers became less transformational and more incremental or domain-specific.
Even Gemini 3, which tops several benchmarks, was a breakthrough mostly in the sense that it brought Google back to parity with OpenAI, prompting Sam Altman’s now-infamous “code red” memo and a scramble to defend OpenAI’s dominance.
This year also brought a reset in expectations about where frontier models come from. DeepSeek released R1, a “reasoning” model that competes with OpenAI’s o1 on key benchmarks, proving that a newer lab can ship capable models quickly and at a fraction of the cost.
From model breakthrough to business model

As the size of each jump between new models shrinks, investors are focusing less on raw model capabilities and more on what gets built around them. The question now: Who can turn AI into products that people trust, pay for, and integrate into their daily workflows?
That change is showing up in many ways as companies figure out what works and what customers expect. AI search startup Perplexity, for example, briefly floated the idea of tracking users’ online activity to sell hyper-personalized ads. Meanwhile, OpenAI is reportedly considering charging up to $20,000 per month for specialized AI agents, a sign of how aggressively the company is testing what customers will pay.
But most of all, the battle has shifted to distribution. Perplexity is looking to stay relevant by launching its own Comet browser with agent capabilities, paying Snap $400 million to power search within Snapchat, and effectively tapping into existing user funnels.
OpenAI is pursuing a parallel strategy, extending ChatGPT beyond a chatbot into a platform: it is wooing businesses and developers while shipping consumer features like its own Atlas browser and Pulse, as well as apps that run inside ChatGPT itself.
Google, meanwhile, is leaning on incumbency. On the consumer side, Gemini integrates directly into products like Google Calendar, while on the enterprise side the company is hosting MCP connectors to make its ecosystem harder to ignore.
In a market where it is becoming increasingly difficult to differentiate by introducing new models, owning the customers and business model is the real moat.
A vibe check on trust and safety

In 2025, AI companies faced unprecedented scrutiny. More than 50 copyright lawsuits are working their way through the courts, while reports of “AI psychosis,” in which chatbots allegedly reinforced delusions and contributed to multiple suicides and other life-threatening episodes, have sparked calls for trust and safety reforms.
Some copyright disputes have been resolved, such as Anthropic’s $1.5 billion settlement with a group of authors, but most remain open. The conversation, though, seems to be shifting from resistance to training on copyrighted content toward demands for compensation (see: the New York Times suing Perplexity for copyright infringement).
Meanwhile, the mental health effects of chatbot interactions, and of sycophantic responses in particular, have emerged as a serious public health issue following multiple suicides and life-threatening delusions among teens and adults who used chatbots heavily over long periods. The result has been lawsuits, widespread alarm among mental health professionals, and swift policy responses like California’s SB 243, which regulates AI companion bots.
Perhaps most importantly, the calls for restraint are not coming from the usual anti-technology suspects.
Industry leaders have warned against chatbots designed to “increase engagement,” and even Sam Altman has cautioned against emotional overreliance on ChatGPT.
The labs themselves began sounding the alarm. Anthropic’s May safety report stated that, in test scenarios, Claude Opus 4 attempted to blackmail engineers to avoid being shut down. The subtext? Scaling what you’ve built without understanding it is no longer a viable strategy.
Looking to the future
If 2025 was the year AI kept scaling while difficult questions piled up, 2026 will be the year it has to answer them. The hype cycle is starting to stall, and AI companies will now have to prove their business models and demonstrate real economic value.
The era of “believe and you will be rewarded” is coming to an end. What happens next will be either a vindication or a liquidation that makes the dotcom bust look like a bad trading day for Nvidia. It’s time to place your bets.
