WhistleBuzz – Smart News on AI, Business, Politics & Global Trends
AI

Tensormesh raises $4.5 million to squeeze more inference out of AI server load

By Editor-In-Chief | October 25, 2025 | 3 min read


As the push for AI infrastructure reaches incredible scale, the pressure to squeeze as much inference out of GPUs as possible is greater than ever. And for researchers with expertise in a particular technology, now is a great time to raise funding.

That’s part of the driving force behind Tensormesh, which emerged from stealth this week with $4.5 million in seed funding. The investment was led by Laude Ventures, with additional angel funding provided by database pioneer Michael Franklin.

Tensormesh is using the funding to build a commercial version of its open-source LMCache utility, launched and maintained by Tensormesh co-founder Yihua Cheng. In the right deployments, LMCache can reduce inference costs by up to 10x, a capability that has made it a staple of open-source deployments and drawn integrations from powerhouses like Google and Nvidia. Now Tensormesh plans to turn that academic reputation into a viable business.

The core of the product is the key-value cache (or KV cache), a memory structure that lets a model process long inputs more efficiently by storing the key and value tensors it has already computed for earlier tokens. In traditional architectures, the KV cache is discarded at the end of each query, which Tensormesh co-founder and CEO Junchen Jiang argues is a major source of inefficiency.

“It’s like a very smart analyst reading all the data, but forgetting what he learned after each question,” Jiang says.
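The mechanism behind that metaphor can be sketched in a few lines. The toy single-head attention decode loop below is purely illustrative (the names, shapes, and random weights are assumptions, not Tensormesh's implementation): each new token's key and value vectors are computed once and cached, so later steps attend over all earlier tokens without recomputing them.

```python
import numpy as np

# Toy single-head attention decode loop illustrating a KV cache.
# All names and shapes here are illustrative assumptions.
d = 8                              # head dimension
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

kv_cache = {"K": [], "V": []}      # grows by one entry per token

def decode_step(x):
    """Attend the new token x over all cached keys/values."""
    q = x @ Wq
    kv_cache["K"].append(x @ Wk)   # computed once, reused every later step
    kv_cache["V"].append(x @ Wv)
    K = np.stack(kv_cache["K"])    # (t, d): all keys seen so far
    V = np.stack(kv_cache["V"])
    scores = K @ q / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()       # softmax over past positions
    return weights @ V

for _ in range(5):                 # five decode steps
    out = decode_step(rng.normal(size=d))
```

Discarding `kv_cache` after the query, as traditional serving does, forces all of these key/value vectors to be recomputed the next time the same prefix appears.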

Instead of discarding that cache, Tensormesh’s system preserves it and reuses it when the model performs a similar process on a later query. Because GPU memory is at a premium, this means spreading the cached data across multiple storage tiers, but the payoff is significantly more inference throughput for the same server load.
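A minimal sketch of that idea, assuming a simple two-tier store with LRU eviction from the fast tier into a slower one (every name here — `PrefixKVStore`, the string "blobs" standing in for serialized tensors — is hypothetical, not Tensormesh's or LMCache's actual API):

```python
from collections import OrderedDict

class PrefixKVStore:
    """Hypothetical two-tier KV-cache store keyed by token prefix."""

    def __init__(self, fast_capacity=2):
        self.capacity = fast_capacity
        self.fast = OrderedDict()  # e.g. GPU memory: small, LRU-ordered
        self.slow = {}             # e.g. CPU RAM / disk: large, slower

    def put(self, prefix_tokens, kv_blob):
        key = tuple(prefix_tokens)
        self.fast[key] = kv_blob
        self.fast.move_to_end(key)
        while len(self.fast) > self.capacity:
            # Evict the coldest entry to the slow tier instead of dropping it.
            old_key, old_blob = self.fast.popitem(last=False)
            self.slow[old_key] = old_blob

    def get(self, prefix_tokens):
        key = tuple(prefix_tokens)
        if key in self.fast:               # fast-tier hit: no recomputation
            self.fast.move_to_end(key)
            return self.fast[key]
        if key in self.slow:               # slow-tier hit: promote; still far
            blob = self.slow.pop(key)      # cheaper than re-running prefill
            self.put(prefix_tokens, blob)
            return blob
        return None                        # miss: caller recomputes the prefix

# Hypothetical usage: strings stand in for serialized K/V tensors.
store = PrefixKVStore(fast_capacity=2)
store.put([1, 2, 3], "kv-A")
store.put([1, 2, 3, 4], "kv-B")
store.put([9], "kv-C")             # evicts [1, 2, 3] to the slow tier
hit = store.get([1, 2, 3])         # promoted back without recomputation
```

A production system must also handle partial-prefix matching, eviction policy tuning, and moving blobs without stalling the GPU, which is exactly the complexity Tensormesh is selling.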

The change is especially powerful for chat interfaces, where the model must continually re-read a chat log that grows as the conversation progresses. Agentic systems face a similar problem, with an ever-growing log of actions and goals.

In theory, AI companies could build these changes themselves, but the technical complexity makes it a daunting task. Having studied the process closely and wrestled with its details firsthand, the Tensormesh team is betting there will be strong demand for a ready-to-use product.

“Keeping the KV cache on a secondary storage system and reusing it efficiently without slowing down the overall system is a very challenging problem,” Jiang says. “We’ve seen people hire 20 engineers and spend three to four months building a system like that. Or they can use our product to build it very efficiently.”



