
The Angle Issue #272

The other VC barbell: engagement
Gil Dibner

A lot has been made of the barbelling of venture capital into large mega funds and small specialist funds. Marc Andreessen made a powerful case for this in his recent interview with Sam Altman, as have we in our writings on VC swimlanes. But there is another barbell that is also worthy of attention: the degree of engagement that a venture investor has with his or her portfolio.

The engagement barbell. On one end of the barbell are the power-law diversification players. At both late and early stages (but more in early stages) there are plenty of players that build incredibly large portfolios. Portfolios of 100 or more companies per fund or fund cycle are not uncommon. At the other end of the scale are firms that take a far more concentrated portfolio approach with 20-30 positions per fund. Assuming a 2-3 year fund cycle, that is as few as 6-15 investments per year. This might mean as few as 1-4 per partner per year.

These deeply engaged firms are not just capital providers; they are often acting as true capital partners, deeply integrated into the strategic fabric of the company. Their concentration means they have the bandwidth, and indeed the imperative, to lean in heavily. When a portfolio company hits a snag – a managed crisis, as it were – these investors are often the first call, rolling up their sleeves to help navigate product pivots, leadership changes, or critical fundraising rounds. Crucially, they offer unvarnished truth to founders, even when unwelcome. These investors are also often the last call, helping founders navigate difficult decisions, shut down companies, and stepping in with financing when others won't. This is the essence of a lead investor, vital for high-risk, non-consensus companies.

Mathematically, “spray and pray” is better. Dan Gray of Equidam and Peter Walker of Carta recently analyzed how larger portfolios tend to yield better return distributions than smaller ones. I have deep respect for both Dan and Peter for their consistently insightful contributions to the venture capital industry discourse. Their analysis of portfolio size, while mathematically indisputable, prompted my thinking on a crucial and – I think – flawed assumption about how VC works in the real world. Their analysis assumes no relationship between portfolio size and individual company outcomes, implying that diversification is free.
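
Their claim is easy to reproduce. The sketch below is a minimal Monte Carlo toy, not their actual model: it assumes every company’s return is drawn i.i.d. from the same heavy-tailed distribution regardless of portfolio size – exactly the “diversification is free” assumption in question. All distribution parameters are illustrative.

```python
import random
import statistics

def simulate_fund(portfolio_size, n_funds=10_000, seed=42):
    """Simulate fund-level return multiples under power-law outcomes.

    Each company's multiple is drawn i.i.d. (most companies lose money,
    a few return 50x+), independent of portfolio size -- the
    'diversification is free' assumption.
    """
    rng = random.Random(seed)
    fund_multiples = []
    for _ in range(n_funds):
        total = 0.0
        for _ in range(portfolio_size):
            if rng.random() < 0.6:
                total += rng.uniform(0.0, 1.0)   # ~60% partially lose capital
            else:
                total += rng.paretovariate(1.3)  # heavy-tailed winners
        fund_multiples.append(total / portfolio_size)
    return fund_multiples

small = simulate_fund(25)    # concentrated fund
large = simulate_fund(100)   # diversified fund

# Larger portfolios sample the power-law tail more reliably:
# fewer funds end up below 1x, at the cost of fewer extreme outliers.
for name, funds in [("25 companies", small), ("100 companies", large)]:
    below_1x = sum(m < 1.0 for m in funds) / len(funds)
    print(f"{name}: median {statistics.median(funds):.2f}x, "
          f"P(fund < 1x) = {below_1x:.1%}")
```

Under these assumptions the larger portfolio strictly dominates on downside risk – which is precisely why the assumption itself deserves scrutiny.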

Is diversification free? If diversification were truly free, VCs would logically pursue maximum diversification, continuously increasing fund size to capture every opportunity without limit. But diversification is decidedly not free for two crucial reasons. First, running a highly diversified strategy is inherently difficult. Funds like YC, EF, Antler, Tiny, Angel Invest, and Kima have perfected this model through years of systems building and learning. New entrants to this game will be easily outmaneuvered by these established players, whose mastery of scale is itself a source of alpha.

Second, diversification reveals a fundamental assumption. If a VC assumes no advantage in picking winners or working with companies, then the time and energy involved offer no value. In that scenario, diversification is free, and concentration adds no value. But if value exists in the energy a VC deploys in selecting and working with portfolio companies, then diversification inherently incurs a cost. That cost is borne in the form of poorer decisions, reduced engagement, reduced information, and reduced impact. While the mathematics of a Monte Carlo simulation remain constant, a VC's belief about diversification's benefits reveals much about their self-perception. For those of us in search of alpha, the concentrated strategy dominates. And if you are not pursuing alpha, what are you doing?

In search of alpha. The pursuit of investment alpha—returns exceeding what market benchmarks would predict—underpins the barbelling observation. Some VCs build moats around operating at massive scale. While not always deeply engaged, they excel at high-volume sourcing and selection where it counts. Others employ high-conviction, high-concentration strategies, deeply engaging as true capital partners during founders' most difficult moments. These represent two valid paths to alpha – the barbell of engagement. As a VC, there is no effective middle ground. One must either master the highly diversified strategy (doable, but very hard!) or the concentrated strategy (equally doable, equally hard).

At Angular, we are fully committed to true and deep partnership with a small set of outstanding non-consensus companies. I am not sure, however, that we chose that strategy. I like to believe that it chose us. There is nothing more rewarding than earning the privilege of being a founder’s first (or last) call, of being their sounding board on their hardest decisions, of knowing that a founder feels we truly understand them and the choices they face. We wouldn’t have it any other way.

FROM THE BLOG

No More Painting by Numbers
It’s the end of the “SaaS playbook.”

The Age of Artisanal Software May Finally be Over
Every wave of technological innovation has been catalyzed by the cost of something expensive trending to zero. Now that’s happening to software.

Founders as Experiment Designers
David on why founders should run everything as an experiment.

When Growth Stalls
Or why to kickstart growth you should narrow your ICP.

WORTH READING

ENTERPRISE/TECH NEWS

Superintelligence dream team. Alexandr Wang tweeted that he’s joining Meta as its Chief AI Officer alongside Nat Friedman, framing the new role as a push “towards super-intelligence.” The announcement lands days after Meta agreed to pay approximately $14.8B for a 49% non-voting stake in Wang’s Scale AI. This was essentially an acquihire that also brings a raft of OpenAI, Anthropic, and DeepMind alumni into the freshly branded “Meta Superintelligence Labs.” For founders and early-stage investors, it’s another loud signal that Big Tech will spend double-digit billions to lock up scarce AI talent and proprietary data.

Fair use? Two Northern District of California judges just tossed out copyright-training suits against Meta (Llama) and Anthropic, but for opposite reasons: both found LLM training “transformative,” yet Judge Alsup waved off market harm entirely while Judge Chhabria argued AI’s scale could swamp authors…but then ruled for Meta anyway because plaintiffs showed no evidence of harm. The split highlights a key legal fault line: existing fair-use doctrine wasn’t built for models that can ingest millions of books and pump out infinite derivative text, so outcomes now hinge less on doctrine than on whether plaintiffs can prove measurable substitution. For founders and early-stage investors, that means model training remains legally viable for now, but I wonder if we’ll end up seeing a patchwork of rulings before the Supreme Court delivers clarity, making IP diligence and data-sourcing hygiene table stakes for AI startups.

HOW TO STARTUP

“All-you-can-eat” token buffet. In this post, Simon Willison discusses Cursor’s June 16 pricing revamp, which swaps the old “500 requests per month” cap for pure token/compute metering and introduces an “Ultra” tier at $200/month that offers roughly 20× the included compute of the $20 Pro plan—mirroring Anthropic’s Claude Code pricing. The sudden change left some power users with eye-watering bills, prompting Cursor to issue refunds for unexpected usage through July 4 and underscoring how wide token-cost variance across external models (OpenAI, Anthropic, xAI, etc.) can wreck flat-rate economics. More broadly, the move signals the twilight of VC-subsidized “all-you-can-eat” LLM access: mature AI-native apps are shifting margin risk back to customers via usage-based billing. For startups building developer-focused AI tools, the $200/month “pro power user” price point is fast becoming both the de-facto revenue ceiling and a sticky lock-in mechanism. How many engineers will justify paying that premium to more than one assistant? If ChatGPT, Claude or Gemini is “good enough” at some piece of functionality, how many people will pay for an additional tool if they’re already paying $200/month?
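
Why token-cost variance wrecks flat-rate plans is simple arithmetic. The toy model below uses entirely hypothetical prices and usage numbers, not Cursor’s actual costs:

```python
# Toy model of flat-rate subscription economics under variable token usage.
# All numbers are hypothetical, not any vendor's actual pricing.
FLAT_PRICE = 20.0        # $/month, Pro-style flat-rate plan
COST_PER_M_TOKENS = 3.0  # blended $ cost per million tokens sent to model APIs

def monthly_margin(tokens_millions: float) -> float:
    """Provider margin on one flat-rate subscriber at a given usage level."""
    return FLAT_PRICE - tokens_millions * COST_PER_M_TOKENS

# A light user is profitable; a power user running agentic, long-context
# workloads burns far more than their subscription covers.
for label, usage in [("typical user", 2), ("power user", 40)]:
    print(f"{label}: margin ${monthly_margin(usage):+.2f}/month")
# -> typical user: margin $+14.00/month
# -> power user: margin $-100.00/month
```

Usage-based billing (or a $200 tier sized for the heaviest users) shifts that negative-margin tail back onto the customer.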

Data scarcity. In this old, but super insightful, newsletter post, Jack Morris contends that every true step-change in AI (AlexNet on ImageNet, Transformers on the open Web, RLHF on human preferences, and “reasoning” models trained on verifier feedback) was less about clever algorithms and more about tapping a brand-new dataset. If that historical pattern holds, the next breakthrough won’t be a fancier architecture but a massive, under-exploited data source such as YouTube-scale video or real-world sensor/robotics streams. For founders, that points to businesses that can create or unlock proprietary troves of richly-labeled video, simulation, or verification data; for investors, it reframes the moat from model IP to data access and collection infrastructure. In short: the teams that control tomorrow’s unique datasets are the ones most likely to capture the next wave of AI value. A more recent reflection on this article from Akshit here.

HOW TO VENTURE

Figma IPO. Figma’s S-1 looks stellar on the surface - 41% ARR growth, 27% FCF margin, a Rule of 40 score of 68 - but this post highlights that its headline 132% NRR and 96% GRR rely on non-standard formulas. Instead of tracking last year’s cohort forward, Figma measures NRR by taking today’s ≥$10K customers and looking backward, and it excludes downsells from GRR. Think about this for a second and you’ll realize that this inflates retention and obscures churn, quite significantly. For founders eyeing their own exits, the dust-up is a timely reminder that public-market diligence punishes metric “creativity,” so bake transparent, industry-standard retention reporting into your dashboards long before bankers show up.
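
To see why the backward-looking cohort inflates retention, here is a toy calculation with made-up numbers (these functions illustrate the general mechanism, not Figma’s actual formulas):

```python
# Hypothetical customer ARR ($K): last year -> this year; None = churned.
customers = {
    "a": (100, 130),   # expanded
    "b": (100, 110),   # expanded
    "c": (100, None),  # churned -- counts against forward NRR only
    "d": (100, 40),    # downsold
    "e": (0,   60),    # new logo -- excluded from both cohorts
}

def forward_nrr(book):
    """Standard NRR: last year's cohort tracked forward; churn counts as $0."""
    start = sum(prev for prev, _ in book.values() if prev > 0)
    end = sum((curr or 0) for prev, curr in book.values() if prev > 0)
    return end / start

def backward_nrr(book, threshold=10):
    """Survivor-based NRR: start from today's >= threshold customers, look back.

    Churned customers drop out of the cohort by construction, so churn
    never shows up in the denominator.
    """
    cohort = [(prev, curr) for prev, curr in book.values()
              if curr is not None and curr >= threshold and prev > 0]
    return sum(curr for _, curr in cohort) / sum(prev for prev, _ in cohort)

print(f"forward NRR:  {forward_nrr(customers):.0%}")   # -> forward NRR:  70%
print(f"backward NRR: {backward_nrr(customers):.0%}")  # -> backward NRR: 93%
```

Same book of business, 23 points of difference – entirely because the churned customer silently vanishes from the backward-looking cohort.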

PORTFOLIO NEWS

FalkorDB achieves SOC 2 type II certification.

Groundcover platform now being used by EX.CO, cutting their observability costs by 50%.

Steadybit launches the first MCP server for chaos engineering, bringing experiment insights to LLM workflows.

PORTFOLIO JOBS
