A new AI lab called Flapping Airplanes launched on Wednesday with $180 million in seed funding from Google Ventures, Sequoia, and Index. The founding team is impressive, and the goal of finding less data-intensive ways to train large models is particularly interesting.
Based on what I’ve seen so far, I’d rate them a level 2 on the money-making scale.
But there’s something even more exciting about the Flapping Airplanes project that I didn’t realize until I read this post by Sequoia partner David Cahn.
As Cahn explains, Flapping Airplanes is one of the first labs to move beyond scaling, the relentless buildup of data and compute that has defined much of the industry to date.
The scaling paradigm advocates committing as much of society's resources as the economy can muster toward scaling up today's LLMs in the hope that this leads to AGI. The research paradigm argues that we are two to three research breakthroughs away from "AGI"-level intelligence, and that, as a result, we need to devote resources to long-term research, especially projects that may take five to 10 years to realize.
(…)
A compute-first approach prioritizes cluster scale above all else, heavily favoring short-term wins (on the order of 1-2 years) over long-term bets (on the order of 5-10 years). A research-first approach should spread out bets in time and be proactive about making many bets that have a low absolute probability of success but collectively expand the search space of what is possible.
Maybe the compute-first camp is right, and there's no point focusing on anything other than frenzied server expansion. But it's nice to see someone head in a different direction when so many companies are already crowding onto the same path.
