Amazon Web Services’ annual technology conference, AWS re:Invent, kicked off this week with a flood of product news, keynotes, and the requisite customer success stories.
The dominant theme is enterprise AI. This year is all about upgrades that let customers further customize their AI agents, including one that AWS claims can learn from users and work independently for days.
AWS re:Invent 2025, which runs through December 5, kicked off with a keynote address from AWS CEO Matt Garman. He embraced the idea that AI agents could unlock the “true value” of AI.
“AI assistants are starting to be replaced by AI agents that can perform and automate tasks on behalf of users,” he said in his Dec. 2 keynote. “This is where our customers are starting to see significant business returns from their AI investments.”
Day two of the conference, December 3, took a deep dive into AI agent messaging and customer stories. Swami Sivasubramanian, VP of Agentic AI at AWS, delivered one of the keynotes. To call him bullish would be an understatement.
“We are living in a time of great change,” Sivasubramanian said during his speech. “For the first time in history, you can describe what you want to accomplish in natural language, and agents generate a plan. They write the code, call the necessary tools, and execute the complete solution. Agents give you the freedom to build without limits, accelerating the speed from idea to big impact.”
AI agent news promises to be a constant presence throughout AWS re:Invent 2025, but there were other announcements as well. Here’s a roundup of the ones that caught our attention. TechCrunch will update this article with the latest news through the end of AWS re:Invent, so check back.
Build your own LLM
AWS announced more tools for enterprise customers to create their own models. Specifically, AWS said it is adding new features to both Amazon Bedrock and Amazon SageMaker AI to make it easier to build custom LLMs.
For example, AWS is introducing serverless model customization in SageMaker, which lets developers start building models without having to think about compute resources or infrastructure. Serverless model customization can be accessed through a self-guided path or with the help of an AI agent.
AWS also announced Reinforcement Fine Tuning in Bedrock, which allows developers to choose a preconfigured workflow or reward system and let Bedrock automatically run the customization process from start to finish.
Andy Jassy shares some numbers
Amazon CEO Andy Jassy took to social media platform X to elaborate on AWS chief Matt Garman’s keynote. The message: Trainium2, the current generation of AWS’s AI chip that competes with Nvidia’s, is already bringing in big money.
His comments were tied to the announcement of the next-generation Trainium3 chip and were meant to signal a promising revenue future for the product.
Database savings realized
Among the dozens of announcements, one item is already drawing cheers: a discount.
Specifically, AWS announced the launch of a database savings plan that lets customers cut their database costs by up to 35% in exchange for committing to a certain amount of hourly usage ($ per hour) over a one-year period. The company says the savings are automatically applied each hour to eligible usage across supported database services, and any usage beyond the commitment is billed at on-demand rates.
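The mechanics the company describes can be illustrated with a quick back-of-the-envelope calculation. The function name, rates, and usage figures below are made up for illustration; the actual discount varies by service and commitment level, and AWS only says "up to 35%."

```python
# Hypothetical sketch of commitment-based savings plan billing, based on the
# mechanics described above: usage up to the hourly commitment is discounted,
# and anything beyond it is billed at on-demand rates. All figures invented.

def hourly_bill(on_demand_usage: float, commitment: float, discount: float = 0.35) -> float:
    """Bill one hour of database usage (all amounts in dollars)."""
    covered = min(on_demand_usage, commitment)      # portion covered by the plan
    overage = max(on_demand_usage - commitment, 0.0)  # billed at on-demand rates
    return covered * (1 - discount) + overage

# Example: a $10/hr commitment with $12 of on-demand-equivalent usage this hour.
# $10 is discounted 35% to $6.50, plus $2 of overage at on-demand rates.
print(hourly_bill(12.0, 10.0))  # 8.5
```

Usage below the commitment simply gets the full discount, e.g. `hourly_bill(5.0, 10.0)` comes to $3.25.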
Corey Quinn, Chief Cloud Economist at Duckbill, sums it up well in a blog post titled “Six years of complaining finally paid off.”
Amazon bets nothing beats free
Can yet another AI coding tool win the hearts of startup founders? Amazon is hoping its Kiro service can, by offering a year’s worth of credits for free. The company plans to give Kiro Pro+ credits to eligible startups that apply by the end of this month. However, only early-stage startups in certain countries are eligible.
AI training chip and Nvidia compatibility
AWS has introduced a new version of its AI training chip called Trainium3 and the AI system called UltraServer that runs it. Bottom line: This upgraded chip features some impressive specs, including promising up to 4x performance improvements for both AI training and inference, while reducing energy usage by 40%.
AWS also offered a teaser: the company is already developing Trainium4, which will be able to work alongside Nvidia chips.
Extending AgentCore functionality
AWS announced new features for AgentCore, its platform for building AI agents. One notable addition is AgentCore policies, which let developers more easily set boundaries for AI agents.
AWS also announced that agents built on the platform can now record and remember information about users, and that it will help customers evaluate agents through 13 pre-built evaluation systems.
Non-stop AI agent worker bees
AWS announced three new AI agents called “frontier agents” (there’s that term again). One of them is called the Kiro autonomous agent, which is designed to write code and learn how teams work, allowing it to operate almost independently for hours or days.
Another of the new agents handles security processes such as code reviews, and the third performs DevOps tasks such as heading off incidents when new code ships. Preview versions of the agents are available now.
New Nova models and services
AWS is rolling out four new AI models in its Nova family. Three of them generate text, and one generates both text and images.
The company also announced a new service called Nova Forge, which gives AWS cloud customers access to pre-trained, mid-training, or post-trained models, plus the ability to train and refine them on their own data. AWS’s big selling point here is flexibility and customization.
Lyft’s claims about AI agents
The ride-hailing company was one of many AWS customers on hand at the event to share success stories and evidence of how the products have affected their businesses. Lyft uses Anthropic’s Claude models via Amazon Bedrock to build AI agents that handle driver and rider questions and issues.
The company says the AI agent has reduced average resolution time by 87%. Lyft also said it has seen a 70% increase in the use of its AI agents by drivers this year.
AI Factory for Private Data Centers
Amazon also announced AI Factory, which allows large enterprises and governments to run AWS AI systems in their data centers.
The system was designed in partnership with Nvidia and includes technology from both companies. Customers can outfit it with Nvidia GPUs or opt for Amazon’s latest in-house AI chip, Trainium3. The system is Amazon’s answer to data sovereignty concerns: governments and many companies need to keep control of their data, without sharing it, even when using AI.
