The Angle Issue #281
Lean startup rebooted
Gil Dibner
For over a decade, the startup world has been dominated by two competing approaches: Eric Ries’s Lean Startup (published in 2011) and Reid Hoffman’s Blitzscaling (popularized as a Stanford course in 2015 and published as a book in 2018). Since it appeared on the scene, Blitzscaling has gradually but convincingly come to dominate the discussion. Our copies of The Lean Startup have mostly found themselves deep in a drawer, slowly gathering dust. In the AI era especially, Blitzscaling, the pursuit of massive growth despite uncertainty, has taken center stage, fueled by billions of dollars of consensus capital chasing a handful of popular ideas.
A debate is currently raging about whether AI is a bubble and, if so, when it will burst. There’s a related debate about the wisdom of VCs plowing ever larger sums of money into companies at ever higher valuations (Roelof Botha of Sequoia recently referred to this as “return-free risk”). For most founders, however, these debates are largely academic. A far more useful conversation is whether we should pull the Lean Startup methodology out of the drawer and how we can update it for the era we find ourselves in.
OG lean. To recap, the original Lean Startup methodology is a continuous loop of Build-Measure-Learn, stressing the use of a Minimum Viable Product (MVP) to conduct controlled experiments and gather validated learnings before committing vast resources. This approach champions agility, rapid iteration, and the willingness to pivot based on empirical data rather than relying on gut instinct. Its central implication was revolutionary at the time: huge value, particularly in SaaS, could be built with minimal and tightly controlled capital spend.
Despite the billions of dollars of capital flowing into a handful of consensus AI ideas, the concept of Lean Startup has never been more relevant for the vast majority of founders. It's high time we pulled that book out of the drawer, dusted it off, and read it again. But this time, I would suggest three essential additions:
The Ultra Lean Startup. The primary bottleneck in the original Lean methodology was human capacity and speed. That constraint is now being shattered by AI, which supercharges every aspect of the Build-Measure-Learn loop: generative coding radically accelerates MVP creation, generative documentation and integration helpers eliminate long-tail drudgery, and AI-powered SDRs and marketing tools automate the “Measure” and “Learn” feedback cycles. A five-person team today, equipped with the right generative tools, can achieve the output, iteration speed, and learning cycle of a team of 30 just a few years ago. We see it in our portfolio every day. Speed, cost efficiency, and capital-to-output ratio have all reached new extremes.
The Brilliantly Thin Startup. The term “thin layer” has become pejorative in the AI discourse, often used to dismiss startups as not building durable value on top of a deep AI stack built by others. This misses the point entirely. AI enables a new type of company that is, in fact, a strategically thin layer between a powerful AI-native inference stack below and massive, high-friction customer demand above. The fact that this layer is thin doesn’t automatically make it less valuable or less durable. Some of these companies are sophisticated software-enabled services. Others are highly sticky applications that integrate into complex, durable human workflows, where AI handles the heavy lifting and the software provides critical human oversight, data routing, and compliance layers. Their defensibility comes not from the proprietary nature of the third-party model below, but from the network effects and lock-in of the workflow they orchestrate above. These startups are not thin in a negative sense. They are, instead, brilliantly thin, and very relevant for a capital-efficient approach to engineering and scale.
Lean GTM. In the AI era, customers can search for, identify, and integrate new applications faster than ever. Traditional GTM friction (finding the right person, getting them to click a demo, enduring procurement) is shrinking. We are moving toward a world where frameworks like MCP (Model Context Protocol) allow rapid human integration and, increasingly, agentic discovery and direct agentic implementation. For the right products, this streamlined GTM reality allows a leaner startup than ever and dramatically shrinks the surface area a founding team needs to cover. If you build it well enough, and if the validated learning confirms you’re solving a deep enough pain, customers might just come on their own. Organic adoption powered by the frictionless discovery of AI tools is the ultimate Lean GTM.
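To make the agentic-discovery point concrete, here is a minimal sketch of what exposing a product capability over MCP can look like. It assumes the official MCP Python SDK (the mcp package) and its FastMCP helper; the server name and the check_compliance tool are hypothetical stand-ins for whatever a product actually does.

```python
# Minimal MCP server sketch (hypothetical example, assuming the official
# MCP Python SDK's FastMCP helper). An agent that speaks MCP can discover
# this tool and call it directly, with no demo call or procurement cycle.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("acme-compliance")  # hypothetical product name


@mcp.tool()
def check_compliance(document_text: str) -> str:
    """Run a (stubbed) compliance check over a document and return a summary."""
    # Real product logic would live here; this stub only illustrates the
    # integration surface an agent sees: a named tool with a typed signature.
    return f"Checked {len(document_text)} characters: no issues found (stub)."


if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

The snippet itself is beside the point; the shape of the integration is what matters. A tool name and a typed signature are often all an agent needs to find and start using the product.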
Let’s get lean again. Some founders will find themselves in the eye of the storm of VC interest and willingness to deploy enormous amounts of capital. These founders will be able to raise large amounts of venture dollars and can spend aggressively to build their businesses. For some of them, this may even make sense given the race conditions in some areas. But the majority of founders building truly innovative, pre-consensus companies are going to need a different playbook. Fortunately, it’s one we have had lying around for a while, and it’s more relevant than ever. Wait there just a minute, and I’ll give you a copy. I am sure it’s in this drawer somewhere…
FROM THE BLOG
The AI-Native Enterprise Playbook
Ten real-time observations on a rapidly evolving playing field
No More Painting by Numbers
It’s the end of the “SaaS playbook.”
The Age of Artisanal Software May Finally be Over
Every wave of technological innovation has been catalyzed by the cost of something expensive trending to zero. Now that’s happening to software.
Founders as Experiment Designers
David on why founders should run everything as an experiment.
WORTH READING
ENTERPRISE/TECH NEWS
Rare Earth awakening. Ben Thompson’s interview with Gracelin Baskaran is a masterclass on the geopolitics and economics of critical minerals, and a sobering look at how deeply China dominates the sector. Baskaran traces how decades of U.S. neglect, capped by the 1996 closure of the Bureau of Mines, ceded control of mining and especially processing to China, which can afford to operate unprofitably to secure strategic choke points. For investors and founders, the takeaway is that energy and minerals have become the new semiconductors: cheap power, long-term capital, and aligned industrial policy will define which regions control the next generation of technologies, from EVs to AI infrastructure.
Deel’s new deal. Deel just raised a $300M Series E at a $17.3B valuation, posting $1.2B in run-rate revenue, 70% YoY growth, and 16% EBITDA margins. It has also been profitable for three years. In six years, Deel has gone from zero to billion-dollar ARR scale faster than any software company in history, blending fintech rails with SaaS-like margins and discipline. For investors, it’s a signal that the next generation of infrastructure winners won’t be pure software or pure fintech; they’ll be hybrid operators turning payments, compliance, and employment into compounding software businesses that can IPO on both growth and cash flow.
HOW TO STARTUP
Karpathy calls the top. Andrej Karpathy argues that we’re entering not the year but the decade of agents. It will be a long, iterative climb toward AI systems that can actually act, remember, and learn rather than just chat. He’s strikingly pragmatic: current models are “cognitively lacking,” reinforcement learning is “terrible,” and true progress will come from building the missing layers of continual learning, cultural scaffolding, and multi-agent cooperation. For founders and VCs, it’s a sober reminder that AI is still deep in its “march of nines” phase. We’re entering a decade of hard engineering where useful, reliable agentic systems will be built layer by layer, not conjured by hype.
DeepSeek OCR. DeepSeek’s new “OCR” paper is far more significant than its modest title suggests, as it points to a possible paradigm shift in how multimodal models represent and store information. By achieving 10x better compression with visual tokens than text tokens, DeepSeek effectively inverts the old inefficiency of vision inputs, turning them from space-hungry add-ons into a superior storage format for dense linguistic information. If this approach generalizes, it could unlock massive context expansion. That would let models operate over entire corporate data sets, codebases, or knowledge repositories in memory, rather than through retrieval systems, blurring the boundary between LLMs and persistent cognitive engines. For founders and investors, it’s a signal that breakthroughs in token efficiency and attention sparsity may define the next frontier of AI infrastructure.
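A rough back-of-envelope calculation makes the stakes of that compression ratio concrete; the window size, words-per-token ratio, and the clean 10x figure below are illustrative assumptions, not numbers taken from the paper.

```python
# Back-of-envelope: what a ~10x visual-token compression ratio could imply for
# context capacity. All numbers are illustrative assumptions.
context_window_tokens = 128_000   # assumed model context window (in visual tokens)
compression_ratio = 10            # assumed text-token-equivalents per visual token
effective_text_tokens = context_window_tokens * compression_ratio
approx_words = int(effective_text_tokens * 0.75)   # rough ~0.75 words per text token

print(f"~{effective_text_tokens:,} text-token-equivalents "
      f"(~{approx_words:,} words) could fit in a {context_window_tokens:,}-token window")
```

Under those assumptions, a window that holds roughly one long book’s worth of raw text could hold closer to ten, which is the intuition behind the “entire corpus in memory” claim.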
Is RL the future? The new ScaleRL paper, reviewed by Nathan Lambert here, marks the first serious attempt to formalize scaling laws for reinforcement learning in large language models, essentially bringing the same predictive discipline that governed pretraining to the chaotic world of post-training. The authors show how to extrapolate the end performance of RL runs from partial curves, fit constants for peak accuracy, slope, and compute, and then forecast final results. Beyond the math, the work validates a maturing RL ecosystem: stable libraries, efficient systems like Pipeline RL and in-flight updates, and algorithmic advances such as TIS and CISPO that are now table stakes for scaled runs. This research suggests that the next wave of performance gains won’t come from bigger models but from smarter, more predictable RL at scale, and the open-source community is finally catching up to the frontier labs’ secret sauce.
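To illustrate the core idea (not the paper’s exact formulation), here is a hedged sketch of fitting a saturating performance-versus-compute curve to the early portion of an RL run and forecasting where it lands with more compute. The functional form, parameter names, and data points are assumptions for demonstration only.

```python
# Hedged sketch: extrapolating final RL performance from a partial training curve.
# The saturating functional form and the data below are illustrative assumptions,
# not the formulation or results of the ScaleRL paper.
import numpy as np
from scipy.optimize import curve_fit


def saturating_curve(compute, ceiling, midpoint, slope):
    """Performance rises toward `ceiling` as compute grows; `midpoint` and
    `slope` control where and how sharply the curve bends."""
    return ceiling / (1.0 + (midpoint / compute) ** slope)


# Observed (compute, performance) pairs from the first part of a run -- fabricated here.
compute = np.array([1, 2, 4, 8, 16, 32], dtype=float)          # e.g. thousands of GPU-hours
performance = np.array([0.21, 0.30, 0.41, 0.50, 0.57, 0.62])   # e.g. eval pass rate

params, _ = curve_fit(saturating_curve, compute, performance, p0=[0.8, 10.0, 1.0])
ceiling, midpoint, slope = params

# Forecast the run's performance at 10x the compute observed so far.
forecast = saturating_curve(320.0, ceiling, midpoint, slope)
print(f"fitted ceiling ~{ceiling:.2f}, forecast at 320 compute units ~{forecast:.2f}")
```

The discipline this buys is the same one pretraining scaling laws provided: you can decide whether a run deserves 10x more compute before spending it.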
HOW TO VENTURE
Momentum != Moat. In response to a16z’s Bryan Kim declaring that “momentum is the only moat,” Kyle Harrison argues that momentum investing is poisoning the startup ecosystem, confusing early ARR spikes with genuine defensibility. While rapid traction is valuable in a fast-moving AI market, the focus on going from $0 to $2M in ten days reflects the logic of billion-dollar funds that need billion-dollar outcomes, not durable businesses. The result is a feedback loop in which too much capital chases hype, short-term growth hides weak economics, and a generation of strong but “middle-class” companies (profitable, sustainable, and worth hundreds of millions, not tens of billions) is being written out of venture’s future.
PORTFOLIO NEWS
Aquant was recognized by Fast Company as one of the 2025 Next Big Things in Tech for the second year in a row.
CruxOCM Co-Founder and CEO Vicki was featured in Silicon Foundry’s ecosystem spotlight.
PORTFOLIO JOBS
Groundcover
Infra Engineer (Tel Aviv)