As the push for AI infrastructure reaches incredible scale, the pressure to squeeze as much inference out of GPUs as possible is greater than ever. And for researchers with expertise in a particular technology, now is a great time to raise funding.
That’s part of the driving force behind Tensormesh, which emerged from stealth this week with $4.5 million in seed funding. The investment was led by Laude Ventures, with additional angel funding provided by database pioneer Michael Franklin.
Tensormesh is using its funding to build a commercial version of its open source LMCache utility, launched and maintained by Tensormesh co-founder Yihua Cheng. Deployed well, LMCache can reduce inference costs by up to 10x. That ability has made LMCache a staple in open source deployments, drawing integrations from powerhouses like Google and Nvidia. Now Tensormesh aims to turn its academic reputation into a viable business.
The core of the product is the key-value cache (or KV cache), a memory structure that makes processing long, complex inputs more efficient by storing the intermediate key and value tensors computed for each token, so they don't have to be recalculated. In traditional architectures, the KV cache is discarded at the end of each query, which Tensormesh co-founder and CEO Junchen Jiang argues is a major source of inefficiency.
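To make the idea concrete, here is a toy, single-head sketch of why the KV cache matters (illustrative only, not Tensormesh's implementation): each new token appends its key and value tensors to a cache, so earlier tokens never have to be re-encoded when the model attends over the full context.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                       # hypothetical embedding size
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def attend_with_cache(x_new, cache):
    """Process one new token, appending its K/V tensors to the cache."""
    k, v = x_new @ Wk, x_new @ Wv
    cache["K"].append(k)
    cache["V"].append(v)
    K, V = np.stack(cache["K"]), np.stack(cache["V"])
    q = x_new @ Wq
    scores = K @ q / np.sqrt(d)           # attend over ALL cached tokens
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V                    # attention output for the new token

cache = {"K": [], "V": []}
for _ in range(5):                        # five tokens of "conversation"
    out = attend_with_cache(rng.standard_normal(d), cache)

# The cache now holds one K and one V vector per processed token;
# discarding it would force recomputing all of them on the next query.
assert len(cache["K"]) == 5
```

Throwing this cache away between queries, as traditional serving stacks do, means redoing all of that per-token work from scratch.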
“It’s like a very smart analyst reading all the data, but forgetting what he learned after each question,” Jiang says.
Instead of discarding that cache, Tensormesh's system preserves it and lets the model reuse it when a later query covers similar ground. Since GPU memory is at a premium, this means spreading the cached data across multiple storage tiers, but the payoff is significantly more inference throughput for the same server load.
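The tiering idea can be sketched as follows (the class and names here are illustrative assumptions, not LMCache's actual API): hot cache entries live in a small, fast "GPU" tier and are evicted to a larger, slower "CPU" tier when space runs out, so a repeated prefix can be reloaded instead of recomputed.

```python
from collections import OrderedDict

class TieredKVCache:
    """Toy two-tier KV cache: scarce fast tier spills into a larger slow tier."""

    def __init__(self, gpu_capacity=2):
        self.gpu = OrderedDict()   # fast, scarce tier (kept in LRU order)
        self.cpu = {}              # slower, larger tier
        self.gpu_capacity = gpu_capacity

    def put(self, prefix_hash, kv_tensors):
        self.gpu[prefix_hash] = kv_tensors
        self.gpu.move_to_end(prefix_hash)
        while len(self.gpu) > self.gpu_capacity:
            key, val = self.gpu.popitem(last=False)
            self.cpu[key] = val    # spill to the next tier instead of discarding

    def get(self, prefix_hash):
        if prefix_hash in self.gpu:
            self.gpu.move_to_end(prefix_hash)
            return self.gpu[prefix_hash]
        if prefix_hash in self.cpu:          # promote back to the fast tier on reuse
            self.put(prefix_hash, self.cpu.pop(prefix_hash))
            return self.gpu[prefix_hash]
        return None                          # cache miss: caller must recompute

cache = TieredKVCache(gpu_capacity=2)
cache.put("prefix-a", "kv-a")
cache.put("prefix-b", "kv-b")
cache.put("prefix-c", "kv-c")            # evicts prefix-a to the CPU tier
assert cache.get("prefix-a") == "kv-a"   # reused from the slower tier, not recomputed
```

The hard engineering problem Jiang describes below is doing this promotion and spilling fast enough that the slower tiers never stall the GPUs.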
This approach is especially powerful for chat interfaces, where the model must continually reference a chat log that grows as the conversation progresses. Agentic systems face a similar problem, with an ever-growing log of actions and goals.
In theory, AI companies could build these capabilities themselves, but the technical complexity makes it a daunting task. Having studied the problem closely and seen how intricate the details are, the Tensormesh team is betting there will be strong demand for a ready-to-use product.
“Keeping the KV cache on a secondary storage system and reusing it efficiently without slowing down the overall system is a very challenging problem,” Jiang says. “We’ve seen people hire 20 engineers and spend three to four months building a system like that. Or they can use our product to build it very efficiently.”
