The knowledge problem
The Angle Issue #304

All this doom and gloom about the end of SaaS, the end of venture, and the end of software writ large has me reaching for something, anything, to put our current moment in context. This week, that impulse had me revisiting readings from a political economy class I took in undergrad, and specifically a 1945 paper by Friedrich Hayek that should have ended a certain kind of argument permanently.
It didn't. And now the argument is back, armed with GPUs instead of five-year plans.
"The Use of Knowledge in Society" is not really about economics. Or not only about economics. It's about what knowledge is and where it lives. Hayek's claim was precise: the knowledge required to coordinate complex activity is dispersed, practical, often unstated, and constantly changing. And the price system works so elegantly because it allows actors to act on knowledge they alone possess without having to transmit it to anyone else.
In other words, central planners failed not because they were underfunded or unsophisticated, but because the knowledge they needed was unavailable to them.
I’ve been thinking about this argument a lot as SaaS implodes around me. There's a particular kind of slightly amazed, slightly exhausted take spreading through investor circles right now. Every week a new model release (or Anthropic launch) spurs another sell-off in the public markets and the death of a hundred startups. There’s this sense that the only opportunity left is to fund the labs, or wait.
The implicit assumption behind all of this doom is that general intelligence, at sufficient scale, makes specialized knowledge irrelevant. That once a model knows enough about everything, the advantage of knowing something specific collapses. But that is almost exactly the premise Hayek was arguing against. His point wasn't that central planners lacked intelligence. It was that the knowledge they needed wasn't the kind that could be aggregated at the center, regardless of how sophisticated the aggregator became.
The strongest counterargument is that tacit knowledge only remains tacit until someone builds the right data flywheel. Brynjolfsson and Hitzig do a great job detailing this argument in their 2025 NBER paper AI’s Use of Knowledge in Society. AI increases the codifiability of knowledge, they argue, through two channels: it codifies local knowledge that was previously tacit, and it expands information processing capacity to aggregate and act on data at a scale that no human organization can match. In other words, the question isn't whether knowledge can be absorbed. It's who absorbs it first and at what scale.
That’s a real argument. But I don't think it holds, for three reasons.
First, not all knowledge leaves a trail. In practice, the most durable tacit knowledge is messier than we tend to realize: it lives in behavior and judgment, not in records.
There’s a company in the Angular portfolio called Belidor that automates pre-construction workflows for general contractors, starting with bid leveling and scope generation. You might think that with capable enough models, you could simply ingest a stack of subcontractor bids, compare them, and surface the best one. But “best” turns out to be incredibly subjective. Experienced estimators love Belidor because it eliminates data entry drudgery and frees them to do the part that actually matters: talking directly to tradespeople and figuring out who is the right fit for this project, in this city, given what they heard in those conversations. That judgment doesn't generate a log. You can't build a flywheel on data that was never captured.
Second, even when data flywheels do work, they produce differential advantage. Let’s use the Belidor example again. Imagine you somehow built a model, fine-tuned on hundreds of decisions made by pre-construction estimators in Atlanta, to decide exactly which tradespeople to work with for which project. That model might be genuinely useful for that specific type of GC in Atlanta, but it tells you almost nothing about Austin. You haven't eliminated specialization. You've moved it one layer up. The insight that domain-specific training produces domain-specific advantage is actually an argument for the proliferation of specialized companies, not against it. Every Hayekian knowledge gap is a company-building opportunity.
Third, the most valuable knowledge decays fastest. Local knowledge is valuable precisely because it's current. The flywheel argument assumes a stable enough domain that you can accumulate signal, train on it, and deploy something durable. But the most valuable domains are the ones that change fastest, because those are the ones where incumbents are most exposed and where current, local knowledge commands the highest premium.
Yes, the labs matter. The capability improvements are real. But the idea that this concentrates all value creation at the frontier is a failure of economic imagination.
The enduring companies of the next decade will not be the ones that merely know more. They will be the ones closest to where new knowledge is produced: inside workflows, relationships, regulatory regimes, supply chains, job sites, labs, hospitals, and factories that are changing faster than they can be summarized.
Hayek wasn't arguing that central planners were wrong. He was arguing that they couldn't be right, because the knowledge that they needed was always somewhere else. It still is.
David Peterson
FROM THE BLOG
Could the future of software be fluid?
How do we get the best of AI without losing the soul of software?
The future belongs to young missionary teams
Why it makes more sense betting on youth in the current moment
The AI-native enterprise playbook
Ten real-time observations on a rapidly evolving playing field
No more painting by numbers
It’s the end of the “SaaS playbook.”
WORTH READING
ENTERPRISE/TECH NEWS
Manus - Beijing has intervened to block Meta's acquisition of Manus, the autonomous AI agent app originally founded in China before relocating to Singapore. China's NDRC has ordered the deal fully unwound with funds returned, ownership re-registered, and Meta's use of the Manus algorithm halted (with penalties including potential criminal charges for individuals if the parties don't comply). Meta says the deal was fully legal and expects "an appropriate resolution." As the FT reports, the catch is that the deal has already closed and Meta has integrated Manus into its tools, making an unwind extremely complex. One person briefed on the decision suggested the announcement may be intended primarily as a warning shot against future deals of this kind rather than a practically enforceable order.
Claude’s #1 user - In a thoughtful opinion piece in Calcalistech, Judah Taub, Managing Partner at Hetz Ventures, speaks to Israel’s continued early adoption of AI and asks whether more strategic planning is needed to harness this enthusiasm. The headline stat is striking: by Anthropic's own data, Israel is the number one user of Claude relative to population (4.9x the global average, ahead of Singapore and the US). As Taub puts it: "That is not a coincidence. That is the Start-Up Nation doing what it does." However, he argues that the head start isn't permanent, and Israel isn't matching it with national ambition. The comparison is blunt: France has committed €100 billion to a national AI strategy; Israel's multi-year plan is budgeted at roughly €250 million. He's also candid about the pressure on Israel's core asset, its engineering talent, especially given that Anthropic's own CEO has said AI could write 90% of all code within six months.
HOW TO STARTUP
A new era - In its piece ‘Lasers, air defense and AI take center stage as Israel and U.S. enter post-aid era,’ Calcalistech outlines the recent shift toward joint development by Israel and the U.S. Formal negotiations are set to begin next month on a new U.S.-Israel security framework for 2029–2038, and the shift in structure is significant. Rather than direct financial assistance (currently $3.8bn/year), the new agreement is being designed as a transition toward joint development of advanced military technologies, including directed-energy weapons, enhanced air defense against hypersonic missiles, and AI. The stated goal is to phase out direct U.S. financial aid to Israel entirely by 2038.
HOW TO VENTURE
Generic, Buzzwordy or Meaningless - Leslie Feinzaig writes in her substack ‘Venture with Leslie’ about the burn list she’s been keeping of funds that raise a lot of money on a ‘generic, buzzwordy or meaningless’ thesis. It's a sharp, honest piece from a fund manager on a pattern every VC insider recognizes but rarely names directly: funds quietly rewrite their theses to chase LP appetite. Climate funds became "American dynamism" funds when the political winds shifted. Diversity-focused funds had to rebrand or risk extinction. SaaS-focused investors are now doing consumer. The culprit isn't just VC flimsiness, it's the LP feedback loop. LPs read the same headlines everyone else does, and it's genuinely hard to justify backing a fund in a category the market has declared dead, even if the underlying thesis is sound.
PORTFOLIO NEWS
Groundcover introduces agentic AI tracing to observability platform.
Moonshot Space signs a preliminary partnership agreement with Alaska Aerospace Corporation.
PORTFOLIO JOBS