
Drivers, Surfers, and Sailors: Three paradigms to make or meet the AI moment

The Angle Issue #299

The VC tech landscape has bifurcated into a world of haves and have-nots. A few massively capitalized VC funds account for well over half of all VC dollars raised, and a few highly legible consensus companies account for the majority of venture dollars invested.

While true, this reality masks another, more interesting one. The tech landscape today is split into three distinct types of companies:

  1. Inflection Drivers – Companies that are directly powering the AI inflection point.

  2. Inflection Surfers – Companies that are riding the massive AI wave and depend on those dynamics for growth.

  3. Post-Inflection Sailors – Companies that are building for a post-inflection point future and are set up to ride that wave to long-term value creation.

The Inflection-Point Drivers

Inflection point drivers fall into two broad categories:

  • Mega-labs: The best examples are OG labs like Nvidia, OpenAI, and Anthropic, as well as newer players like Physical Intelligence, AMI Labs, World Labs, Decart, and others. They are leveraging massive investments in innovation to directly power the wave itself. The jury is still out, but it is highly likely that the majority of the value here has already been captured. (Even Meta recently announced they were giving up on many of their AI initiatives.)

  • Efficient Innovators: There are a handful of opportunities where capital-efficient, innovation-driven strategies can lead to substantial growth by providing key pieces of the tech stack that enable or accelerate the AI inflection point. These are companies building foundational technologies for the AI inflection point that are unlikely to be subsumed by the mega-labs in the short term and have an opportunity to grow into substantial stand-alone businesses for structural reasons. Good examples on the AI infrastructure side include Exa, Langchain, Falkor, and Hornet.

The Inflection-Point Surfers

Inflection surfers appear to account for the majority of VC dollars flowing into newly formed companies right now. These are typically capital-intensive startups that are pursuing growth-at-all-costs strategies. Some of these companies are AI application companies and some are neo-clouds. Sometimes capital is thrown into compute/inference, sometimes it’s thrown at Forward Deployed Engineers (FDEs), and sometimes both - but it all boils down to the same thing: trying to ride the massive AI adoption wave by trading margins for growth. The dominant logic here is race conditions, not necessarily innovation. The revenue growth these companies are posting is truly impressive. In case after case, as the truth leaks out, the picture becomes somewhat less flattering. High churn, low/negative margins, services disguised as product, thin layers on top of third-party models, and—too often—ARR definitions that are so loose that they fade into meaninglessness.

By definition, inflection points don’t last forever. The rate of change right now is near-infinite, but near-infinite rates of change don’t persist indefinitely. We are already seeing signs of a gradual rationalization of AI spend and deployment. The remainder of our lives will be lived in a post-inflection-point world—and the companies we are building need to thrive in that new, different, but somewhat more stable world. This is especially true for companies that are just getting started now. Founding a company today and betting that the massive wave of AI dislocation will persist long enough for you to simply surf it to a huge outcome by throwing capital at your growth problems seems like a risky bet.

The Post-Inflection Sailors

Inflection sailors are companies built for the post-AI world. They assume that AI exists and will continuously improve, and they are already wrestling with the challenges of building a successful, enduring company in that environment. The difference, of course, between a sailor and a surfer is the wave. The inflection surfers are riding a massive, temporary wave of AI adoption that is bringing a huge flow of revenue. The inflection sailors are building for a future where the seas are sometimes calmer but the winds are strong, winds that can carry you fast and far if you build the right boat.

Because of the scale of the inflection point itself, it is becoming increasingly clear that the playbook for startups, company-building, and venture capital needs to be radically rewritten. One common trait of successful inflection sailors is that they leverage AI capabilities to the absolute utmost. We are seeing smaller development and GTM teams than ever before. We are seeing radically faster release cycles. The very nature of expertise itself has been redefined. Hiring plans are different today than ever before. Fewer people. More focus on cohesion and mission. Less emphasis on functional experience. More emphasis on vertical depth. The myth of the 10x engineer has been replaced by the dual realities of the 100x coding agent orchestrator and the 100x GTM agent orchestrator. At the same time, the ability of a Founder/CEO to look a prospect in the eye and close a sale has never been more valuable.

Even in the post-inflection world, economic reality exists, and the only way to create true value is to generate sustainable free cash flow (FCF). FCF, as ever, requires a moat. There appear to be three types of moats that will endure in the post-inflection world:

  1. The Physical World. Hard tech has always been the backbone of venture capital; it was just overshadowed by the SaaS wave. But the fundamental ability of a team of engineers to create value with hard tech innovation is unchanged. This is why we are backing companies like Blue Energy, Moonshot, Sequins, SwanNeck Bio, and others. For many of these companies, AI has become a key part of reducing their cost structure, but their sustainable advantage comes from the inherent complexity of building with atoms rather than bits.

  2. Deep Industry Expertise. Hard industries by their very nature create barriers to entry and moats for those who know how to sell into them. This is why we backed companies like Motorica, Medida, Wisery, Crux, and Portchain.

  3. Speed. The final moat that AI has unlocked is the speed of commercial execution itself. Just as SaaS unlocked this in a previous generation by reducing the costs of software creation and delivery, AI has done it again. There are a handful of companies with credible plans to grow insanely fast with very little capital - and as an inception-stage VC firm we want to back as many of those as we can. The ratio between growth and cash burn is the ultimate metric of startup success - and AI has unlocked new ways to push the edge of the envelope on that metric.

Gil Dibner

FROM THE BLOG

Could the future of software be fluid?
How do we get the best of AI without losing the soul of software?

The future belongs to young missionary teams
Why it makes more sense to bet on youth in the current moment

The AI-native enterprise playbook
Ten real-time observations on a rapidly evolving playing field

No more painting by numbers
It’s the end of the “SaaS playbook.”

WORTH READING

ENTERPRISE/TECH NEWS

The physics of AI scaling. Dylan Patel of SemiAnalysis lays out the brutal hardware realities behind the AI buildout on Dwarkesh's podcast. The core message: software people are about to get a crash course in physics. Interconnect speeds, memory bandwidth, and literal power grid capacity are now the binding constraints on AI progress, not algorithms. Hyperscalers are spending half a trillion in CapEx not for today's models but to lock in future compute capacity, while inference costs remain deeply unsustainable. For founders building on top of foundation models, this is a useful reminder that the infrastructure underneath you is neither cheap nor guaranteed.

HOW TO STARTUP

Karpathy's autonomous AI researcher. Andrej Karpathy open-sourced "autoresearch," a minimal repo where an AI agent autonomously iterates on LLM training code in a loop, running 5-minute training runs on a single GPU and committing improvements to a git branch without human involvement. The human's only job is writing the prompt; the agent handles architecture, optimizer, and hyperparameter decisions indefinitely. It's a neat toy project today, but the underlying idea (agents that autonomously run and improve experiments) points to where a lot of R&D workflows are heading, and why "speed of iteration" is becoming less about team size and more about how well you can direct an agent.
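The loop described above can be sketched in a few lines. To be clear, this is a toy illustration, not Karpathy's actual autoresearch code: the LLM agent and the 5-minute training run are replaced by hypothetical stand-in functions, and "committing to a git branch" is reduced to keeping the best candidate in memory.

```python
# Toy sketch of an "autoresearch"-style loop. All names and structure here
# are illustrative assumptions, not the real repo's API.
import random

def propose_change(config):
    # Stand-in for the LLM agent: here it just perturbs one hyperparameter.
    new = dict(config)
    new["lr"] = config["lr"] * random.choice([0.5, 1.0, 2.0])
    return new

def short_training_run(config):
    # Stand-in for a short training run; returns a loss to minimize.
    # Toy objective: loss is smallest when lr == 0.01.
    return abs(config["lr"] - 0.01)

def autoresearch(steps=20, seed=0):
    random.seed(seed)
    best = {"lr": 0.1}
    best_loss = short_training_run(best)
    for _ in range(steps):
        candidate = propose_change(best)
        loss = short_training_run(candidate)
        if loss < best_loss:  # "commit" the improvement, discard the rest
            best, best_loss = candidate, loss
    return best, best_loss

best, loss = autoresearch()  # loss never exceeds the starting loss
```

The point of the sketch is how little scaffolding the human-in-the-loop role requires: the outer loop is trivial, and all of the leverage sits in how good `propose_change` is, which is exactly why directing the agent matters more than team size.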

Agents as economic actors. From Francois Chollet (and something we’ve written about before): the next phase of agentic AI isn't just about task completion, it's about agents that autonomously transact. Think agents that spin up compute, purchase datasets, and hire other agents to accomplish goals with a budget rather than a to-do list. We're probably 1-2 years from seeing this at scale, which raises big questions for founders building infrastructure: who captures value when the buyer isn't a human? Payments, identity, and procurement all need to be rethought for a world where your best customer might be a piece of software.

Cursor's $2B identity crisis. Forbes reports that Cursor has doubled annualized revenue to $2 billion in just three months, but internally the company is in "war time" mode. The problem: as coding agents like Claude Code and OpenAI's Codex let developers skip the editor entirely, Cursor's core product thesis is eroding even as its numbers soar. The company is responding by building its own coding models on top of open-source Chinese LLMs and pushing hard into enterprise contracts, but it's racing against competitors willing to subsidize aggressively, with Anthropic reportedly eating up to $5,000 in compute per $200 Claude Code subscription.

HOW TO VENTURE

Five paths for AI into the enterprise. Rory O'Driscoll lays out a clean framework for how foundation model intelligence actually reaches enterprise customers, from models that "just work" out of the box to full-stack AI-enabled service providers. Each path implies a radically different investment thesis: if enterprises build it themselves, application software is dead; if incumbents integrate fastest, you buy the dip on SaaS multiples. The real insight is that the answer will vary by market, and figuring out which path wins where is basically the entire game for venture investors over the next five years.

VC returns have never been this concentrated. Meghan Reynolds at Altimeter shares a striking stat: gross profits from VC investments in just three LLM companies now account for roughly 70% of all VC profits from the entire previous decade. LPs are obsessed with getting into OpenAI and Anthropic rounds at levels she's never seen, and the conversation is already shifting to IPO preparation and public market dynamics. For the broader venture ecosystem, the concentration is the story. A tech super cycle has never produced so few massive winners so fast, which raises uncomfortable questions about what the rest of the portfolio is actually doing.

PORTFOLIO NEWS

Moonshot Space CEO Hilla Haddad Chmelnik examines how Israel’s role in lunar infrastructure could define future space power.

Valohai demonstrates how enterprise AI actually scales after launching on Oracle Cloud.

PORTFOLIO JOBS
