Tensormesh raises $4.5 million to squeeze more inference out of AI server load

By whistle_949 · October 25, 2025

As the push to build AI infrastructure reaches incredible scale, the pressure to squeeze as much inference as possible out of GPUs is greater than ever. And for researchers with expertise in a relevant technology, now is a good time to raise funding.

That’s part of the driving force behind Tensormesh, which emerged from stealth this week with $4.5 million in seed funding. The investment was led by Laude Ventures, with additional angel funding provided by database pioneer Michael Franklin.

Tensormesh is using the funding to build a commercial version of LMCache, the open source utility launched and maintained by Tensormesh co-founder Yihua Cheng. Deployed well, LMCache can cut inference costs by as much as 10x, an ability that has made it a staple of open source deployments and drawn integrations from powerhouses like Google and Nvidia. Now Tensormesh plans to turn that academic credibility into a viable business.

At the core of the product is the key-value cache (or KV cache), a memory structure that lets a model process long inputs more efficiently by storing the key and value tensors computed during attention instead of recomputing them for every new token. In traditional architectures, the KV cache is discarded at the end of each query, which Tensormesh co-founder and CEO Junchen Jiang argues is a major source of inefficiency.

“It’s like a very smart analyst who reads all the data but forgets what they learned after each question,” Jiang says.
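
The inefficiency Jiang describes can be pictured with a toy serving loop. The sketch below is illustrative only and is not Tensormesh or LMCache code; the KVCache class and the prefill, decode, and serve functions are made-up stand-ins. What it shows: every request pays the full prefill cost to rebuild its cache, and that cache is thrown away once the answer comes back, even when the next request shares almost its entire prompt.

```python
# Illustrative sketch only (not Tensormesh/LMCache code): a per-request KV cache
# is built during decoding and then discarded, losing all the prefill work the
# next time a nearly identical prompt arrives.
from dataclasses import dataclass, field

@dataclass
class KVCache:
    # One entry per processed token; a stand-in for the real per-layer K/V tensors.
    entries: list = field(default_factory=list)

def prefill(prompt_tokens: list[int]) -> KVCache:
    """Expensive step: keys/values are computed for every prompt token."""
    cache = KVCache()
    cache.entries.extend(prompt_tokens)
    return cache

def decode(cache: KVCache, max_new_tokens: int = 4) -> list[int]:
    """Cheap per-token step that appends to the same cache."""
    out = []
    for step in range(max_new_tokens):
        token = hash((len(cache.entries), step)) % 50_000  # dummy "model"
        cache.entries.append(token)
        out.append(token)
    return out

def serve(prompt_tokens: list[int]) -> list[int]:
    cache = prefill(prompt_tokens)   # recomputed from scratch every request
    answer = decode(cache)
    del cache                        # the "analyst forgets" after each question
    return answer

if __name__ == "__main__":
    history = list(range(1_000))          # long shared conversation prefix
    serve(history)                        # full prefill
    serve(history + [1_001, 1_002])       # full prefill again, almost all redundant
```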

Instead of discarding that cache, Tensormesh’s system holds onto it and reuses it when the model performs a similar process on another query. Because GPU memory is at a premium, this means spreading the cached data across several storage tiers, but the payoff is significantly more inference from the same server load.
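
A minimal sketch of that reuse idea follows, under the assumption of a prefix-hash lookup spread over a few storage tiers. The TieredKVStore class, the tier names, and serve_with_reuse are hypothetical illustrations, not Tensormesh’s actual implementation or API.

```python
# Hypothetical sketch of cross-query KV-cache reuse: caches are keyed by a hash
# of the token prefix and spilled across storage tiers, so a later query that
# shares a prefix only pays prefill for the unseen suffix.
import hashlib

TIERS = ["gpu", "cpu_ram", "local_ssd"]   # fastest to slowest; assumed layout

class TieredKVStore:
    def __init__(self, gpu_slots: int = 2, cpu_slots: int = 8):
        self.capacity = {"gpu": gpu_slots, "cpu_ram": cpu_slots, "local_ssd": 10**6}
        self.store = {tier: {} for tier in TIERS}

    @staticmethod
    def key(tokens: list[int]) -> str:
        return hashlib.sha256(str(tokens).encode("utf-8")).hexdigest()

    def put(self, tokens: list[int], kv_blob: bytes) -> None:
        k = self.key(tokens)
        for tier in TIERS:                # place in the fastest tier with room
            if len(self.store[tier]) < self.capacity[tier]:
                self.store[tier][k] = kv_blob
                return

    def get(self, tokens: list[int]) -> bytes | None:
        k = self.key(tokens)
        for tier in TIERS:                # promote-on-hit omitted for brevity
            if k in self.store[tier]:
                return self.store[tier][k]
        return None

def serve_with_reuse(store: TieredKVStore, tokens: list[int]) -> None:
    # Find the longest cached prefix, then only prefill the remaining suffix.
    for cut in range(len(tokens), 0, -1):
        if store.get(tokens[:cut]) is not None:
            print(f"reused {cut} cached tokens, prefilled only {len(tokens) - cut}")
            break
    else:
        print(f"cache miss, prefilled all {len(tokens)} tokens")
    store.put(tokens, kv_blob=b"kv-tensors-go-here")  # save for the next similar query
```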

This change is especially powerful for chat interfaces, where the model must continually re-read a conversation log that grows with every turn. Agentic systems face a similar problem, with an ever-growing log of actions and goals.
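
Continuing the same hypothetical sketch (this snippet reuses TieredKVStore and serve_with_reuse from the block above), a growing chat conversation shows why this matters: each new turn shares the entire earlier transcript as a prefix, so only the newest message needs fresh prefill work.

```python
# Each chat turn appends 200 dummy tokens; after the first turn, only the newest
# message misses the cache.
store = TieredKVStore()
conversation: list[int] = []
for turn in range(3):
    conversation += [9_000 + turn] * 200
    serve_with_reuse(store, list(conversation))
# turn 0: cache miss; turns 1 and 2: everything but the newest 200 tokens is reused.
```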

In theory, AI companies could make these changes themselves, but the technical complexity makes it a daunting task. Having studied the process and wrestled with its details firsthand, the Tensormesh team is betting there will be strong demand for a ready-to-use product.

“Keeping the KV cache on a secondary storage system and reusing it efficiently without slowing down the overall system is a very challenging problem,” Jiang says. “We’ve seen people hire 20 engineers and spend three to four months building a system like that. Or they can use our product to build it very efficiently.”
