Twenty years ago, Duke University professor David R. Smith created a real-life invisibility cloak using man-made composite materials called “metamaterials.” Although the cloak didn’t actually have Harry Potter-like powers and was limited to hiding objects from microwave-wavelength light, the underlying advances in materials science eventually spilled over into other areas of electromagnetics research.
Now, Austin-based Neurophos, a photonics startup spun out of Duke University and Metacept (an incubator run by Smith), is taking that research a step further to solve what may be the biggest problem facing AI labs and hyperscalers: how to scale compute while consuming less power.
The startup has devised a “metasurface modulator” with optical properties that allow it to act as a tensor-core processor to perform matrix-vector multiplication. This operation is at the heart of much AI work (particularly inference), which is currently performed by specialized GPUs and TPUs that use traditional silicon gates and transistors. By putting thousands of these modulators on a chip, Neurophos claims that its “light processing units” will be significantly faster than the silicon GPUs currently used en masse in AI data centers, and far more efficient at inference (running trained models), which can be a fairly expensive task.
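To make the core operation concrete, here is a toy sketch (not Neurophos code) of the matrix-vector multiplication that dominates neural-network inference: each layer multiplies a weight matrix by an activation vector, and chips like GPUs, TPUs, and Neurophos’ proposed optical processors are largely judged by how fast and how efficiently they can do this.

```python
def matvec(weights, x):
    """Multiply a weight matrix (list of rows) by a vector --
    the operation repeated billions of times during inference."""
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

# One hypothetical layer: 2 outputs computed from 3 inputs.
W = [[0.5, -1.0, 2.0],
     [1.5,  0.0, 0.25]]
x = [2.0, 1.0, 4.0]

print(matvec(W, x))  # [8.0, 4.0]
```

An optical tensor core performs the same multiply-accumulate pattern, but encodes the values in light rather than in transistor switching.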
Neurophos just raised $110 million in a Series A round led by Gates Frontier with participation from Microsoft’s M12, Carbon Direct, Aramco Ventures, Bosch Ventures, Tectonic Ventures, Space Capital and others to fund development of its chip.
Photonic chips are not new. In theory, they offer higher performance than traditional silicon because light generates less heat than electricity, travels faster, and is far less sensitive to changes in temperature and electromagnetic fields.
However, optical components tend to be much larger than silicon components and can be difficult to mass produce. Converters are also needed to move data between the digital and analog domains, and these can be bulky and power-hungry.
Neurophos claims that the metasurface it has developed is approximately 10,000 times smaller than conventional phototransistors and can solve all of these problems at once. That small size, the startup says, allows for thousands of units on a chip, making it much more efficient than traditional silicon because the chip can perform more calculations at once.
“Scaling down the phototransistor allows us to perform more calculations in the optical domain before translating it to the electronics domain,” Dr. Patrick Bowen, CEO and co-founder of Neurophos, told TechCrunch. “If you want to go fast, you first have to solve energy efficiency problems, because if you try to make a chip 100 times faster, it will use 100 times more power. So if you solve energy efficiency problems, you get the privilege of going fast.”
The result, Neurophos claims, is an optical processing unit that significantly outperforms Nvidia’s B200 AI GPU. The company says its chip operates at 56 GHz, delivers 235 peta-operations per second (POPS), and consumes 675 watts of power, while the B200 delivers 9 POPS at 1,000 watts.
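A quick back-of-envelope calculation, using only the figures quoted above, shows what that claim implies for energy efficiency (operations delivered per watt consumed):

```python
# Claimed figures: throughput in POPS (peta-operations per second)
# and power draw in watts.
neurophos_pops, neurophos_watts = 235, 675
b200_pops, b200_watts = 9, 1000

# Peta-operations per joule = POPS / watts.
neurophos_eff = neurophos_pops / neurophos_watts  # ~0.348
b200_eff = b200_pops / b200_watts                 # 0.009

print(round(neurophos_eff / b200_eff))  # ~39x more operations per joule
```

If the claimed numbers hold, that is roughly a 26x throughput advantage at about two-thirds of the power, or about 39 times more operations per joule.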
Bowen said Neurophos already has several customers signed up (though he declined to name them) and that companies including Microsoft are “looking at the company’s products very closely.”
Still, the startup is entering a crowded market dominated by Nvidia, the world’s most valuable publicly traded company, whose products have powered more or less the entire AI boom. Other companies are working on photonics, but some, like Lightmatter, are focused on interconnects rather than compute. And Neurophos expects mass production to still be several years away, with its first chips hitting the market by mid-2028.
But Bowen is confident that the advances in performance and efficiency offered by his company’s metasurface modulators will be enough of a moat.
“What other companies are doing, Nvidia included, is more evolutionary than revolutionary in terms of the fundamental physics of silicon, and it’s tied to TSMC’s advances. If you look at the improvements in TSMC nodes, each is about 15% more energy efficient on average, and each takes several years to arrive,” he said.
“Even if you chart Nvidia’s architectural improvements over the years, by the time we are announced in 2028, we will still have a significant advantage over the rest of the market because we start at 50 times faster than Blackwell in both energy efficiency and real-world speed.”
And to address the mass-production problems that optical chips have traditionally faced, Neurophos says its chips can be manufactured with standard silicon foundry materials, tools, and processes.
The new funding will be used to develop the company’s first integrated photonic computing system, including a data center-ready OPU module, a complete software stack, and early access developer hardware. The company is also opening an engineering location in San Francisco and expanding its headquarters in Austin, Texas.
“Modern AI inference requires tremendous amounts of power and computing,” said Dr. Mark Tremblay, corporate vice president and technical fellow for core AI infrastructure at Microsoft, in a statement. “We need breakthrough advances in computing comparable to the leaps we’ve seen in AI models themselves, and that’s what Neurophos’ technology and talent-dense team are developing.”
