WhistleBuzz – Smart News on AI, Business, Politics & Global Trends
Tensormesh raises $4.5 million to squeeze more inference out of AI server load

By Editor-In-Chief · October 25, 2025

As the push for AI infrastructure reaches incredible scale, the pressure to squeeze as much inference out of GPUs as possible is greater than ever. And for researchers with expertise in a particular technology, now is a great time to raise funding.

That’s part of the driving force behind Tensormesh, which emerged from stealth this week with $4.5 million in seed funding. The investment was led by Laude Ventures, with additional angel funding provided by database pioneer Michael Franklin.

Tensormesh is using the funding to build a commercial version of LMCache, the open source utility launched and maintained by Tensormesh co-founder Yihua Cheng. When deployed successfully, LMCache can reduce inference costs by up to 10x, an ability that has made it a staple of open source deployments and drawn integrations from powerhouses like Google and Nvidia. Now Tensormesh plans to turn that academic reputation into a viable business.

The core of the product is the key-value cache (or KV cache), a memory system that lets a model process long, complex inputs more efficiently by storing the intermediate key and value data it computes along the way instead of recomputing it. In traditional architectures, the KV cache is discarded at the end of each query, which Tensormesh co-founder and CEO Junchen Jiang argues is a major source of inefficiency.
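
The idea can be sketched in a few lines of Python. This is a toy illustration of the caching concept only, not LMCache's or Tensormesh's actual code: computed key/value pairs are stored under their input prefix, so a repeated prefix is served from memory rather than recomputed.

```python
# Toy sketch of a KV cache (hypothetical, not LMCache's real API):
# map an input token sequence to its precomputed (key, value) pairs
# so a repeated input skips the expensive computation.

def compute_kv(tokens):
    # Stand-in for the expensive attention computation.
    return [(t, t * 2) for t in tokens]  # fake (key, value) pairs

class KVCache:
    def __init__(self):
        self._store = {}

    def get_or_compute(self, tokens):
        key = tuple(tokens)
        if key not in self._store:
            # Cache miss: do the expensive work once and remember it.
            self._store[key] = compute_kv(tokens)
        return self._store[key]

cache = KVCache()
first = cache.get_or_compute([1, 2, 3])   # computed
second = cache.get_or_compute([1, 2, 3])  # served from the cache
assert first is second                    # same object, no recomputation
```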

“It’s like a very smart analyst reading all the data, but forgetting what he learned after each question,” Jiang says.

Instead of discarding that cache, Tensormesh’s system preserves it and reuses it when the model performs a similar process on another query. Because GPU memory is at a premium, this means distributing the data across several different storage tiers, but the payoff is significantly more inference from the same server load.
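
A minimal sketch of that multi-tier idea, with assumed names throughout (this is not Tensormesh's design): instead of dropping entries when fast "GPU" memory fills up, the least-recently-used entry is demoted to a larger, slower tier, and promoted back when it is needed again.

```python
# Hypothetical sketch of tiered KV storage: a small fast tier backed
# by a larger slow tier, rather than discarding evicted entries.
from collections import OrderedDict

class TieredKVStore:
    def __init__(self, gpu_capacity):
        self.gpu = OrderedDict()  # small, fast tier (stand-in for GPU memory)
        self.cpu = {}             # large, slow tier (stand-in for CPU RAM/disk)
        self.gpu_capacity = gpu_capacity

    def put(self, prefix, kv):
        if len(self.gpu) >= self.gpu_capacity:
            # Demote the least-recently-used entry instead of deleting it.
            old_prefix, old_kv = self.gpu.popitem(last=False)
            self.cpu[old_prefix] = old_kv
        self.gpu[prefix] = kv

    def get(self, prefix):
        if prefix in self.gpu:
            self.gpu.move_to_end(prefix)  # mark as recently used
            return self.gpu[prefix]
        if prefix in self.cpu:
            kv = self.cpu.pop(prefix)     # promote back to the fast tier
            self.put(prefix, kv)
            return kv
        return None

store = TieredKVStore(gpu_capacity=2)
store.put("a", "kv_a")
store.put("b", "kv_b")
store.put("c", "kv_c")           # "a" is demoted to the slow tier
assert store.get("a") == "kv_a"  # found in the slow tier, promoted back
```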

This change is especially powerful for chat interfaces, where the model must continually reread a chat log that grows as the conversation progresses. Agentic systems face a similar problem, with a growing log of actions and goals.
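
The reason chat is such a good fit can be shown with a small helper (an illustrative sketch, not part of any real API): each new turn extends the previous prompt, so the entire earlier conversation is a shared prefix whose cached KV data can be reused, leaving only the new tokens to compute.

```python
# Hypothetical sketch: measure how much of a new prompt's KV data can
# be reused from a cached earlier prompt (the shared token prefix).
def reusable_prefix_len(cached_tokens, new_tokens):
    n = 0
    for a, b in zip(cached_tokens, new_tokens):
        if a != b:
            break
        n += 1
    return n

turn1 = ["system", "user: hi", "assistant: hello"]
turn2 = turn1 + ["user: tell me more"]  # the chat log only grows

reused = reusable_prefix_len(turn1, turn2)
assert reused == 3  # all of turn 1 is reused; only 1 new token computed
```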

In theory, AI companies could make these changes on their own, but the technical complexity makes that a difficult task. Having studied the process and wrestled with its details firsthand, the Tensormesh team is betting there will be strong demand for a ready-to-use product.

“Keeping the KV cache on a secondary storage system and reusing it efficiently without slowing down the overall system is a very challenging problem,” Jiang says. “We’ve seen people hire 20 engineers and spend three to four months building a system like that. Or they can use our product to build it very efficiently.”

