No More Painting by Numbers

The Angle Issue #235

David Peterson

Every product builder I know is spending a significant amount of their time these days thinking about how to build with generative AI.

And, let’s not sugarcoat it: it’s hard to do. Even harder to do well. But there’s some value, I think, in describing how and why it’s hard to build with AI. So let me try to do that today.

The approaches I’ve seen to building with AI fit into four broad categories, each with their own challenges:

  1. Make the product experience better with AI

  2. Augment a human user with AI

  3. Outsource a specific task to AI

  4. Empower end users to build with AI

I’ll cover each below.

Make the product experience better with AI. I’d put features like LLM-powered summarizations (launched early by the likes of Notion as part of “Notion AI,” but now ubiquitous) or “chat with your help center” experiences into this category.

The obvious pitfall with these sorts of bolt-ons is when companies build “AI-powered” features that check the box (we launched an AI feature!) but don’t actually make the user experience any better. This is, unfortunately, all too common. Indeed, it’s amazing how many AI-powered features one comes across these days that essentially don’t work. Or only work sometimes. Which just isn’t good enough.

Augment a human user with AI. These are the “copilots” we’ve all heard so much about. It’s become a consensus view that developer copilots are incredibly useful (though probably not replacing software developers any time soon!). Unfortunately, I haven’t come across copilots in any other domain that come even close to the usefulness of coding copilots. (Please let me know if I’ve missed any!)

Model capabilities are partially to blame. (Foundation models are particularly good at coding, so it makes sense that coding copilots would be the early breakout successes in this category.) But I think the bigger challenge is actually product design/UX. We just don’t have good product paradigms (that I know of) for a true “copilot” that sits alongside you, observes in the background, and only helps when it’s useful. In early 2023, Diagram announced Genius, a “design copilot” that would design right alongside a designer. As it turns out, it wasn’t real. But that demo remains probably the best example of what I imagine this could feel like.

Outsource tasks to AI. Generative AI models can be used to complete many tasks that have well-defined outcomes, but non-deterministic solutions (that is, there isn’t “one right” way to do it). That’s incredibly disruptive. (There’s a reason that the phrase “AI Employees” is on the lips of board members everywhere.) Examples of products leveraging this capability range from companies like Quilter or Robin AI (replacing a step in a process with AI) to Enso and 11x (replacing an employee with AI).

The challenge is that, despite best efforts, LLMs aren’t good enough at the multi-hop reasoning required to fully outsource complex tasks. This suggests to me that you’re better off narrowing down your focus so that, as Linus suggests below, you’re able to build a deterministic logic engine to guide the LLMs to do your bidding in a somewhat predictable way.
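A minimal sketch of that idea, with deterministic code owning the control flow while the model only fills a narrow, validated slot. The `call_llm` function here is a hypothetical stand-in for a real model API, and the support-ticket routing task is illustrative:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call.

    Echoes a canned label so the surrounding control flow can be shown;
    a real implementation would call a model provider here.
    """
    return "REFUND" if "refund" in prompt.lower() else "OTHER"


# The deterministic "logic engine": the workflow steps and the set of
# allowed outcomes are fixed in code. The LLM's only job is to classify
# into that closed set.
ALLOWED_LABELS = {"REFUND", "CANCEL", "OTHER"}

ROUTES = {
    "REFUND": "billing-queue",
    "CANCEL": "retention-queue",
    "OTHER": "triage-queue",
}


def route_ticket(ticket_text: str) -> str:
    label = call_llm(f"Classify this support ticket: {ticket_text}").strip()
    # Validate the non-deterministic output against a deterministic schema,
    # falling back to a safe default rather than trusting the model blindly.
    if label not in ALLOWED_LABELS:
        label = "OTHER"
    # Deterministic routing table: predictable behavior for every label.
    return ROUTES[label]
```

The point of the pattern is that even if the model misbehaves, the system’s behavior stays within a small, predictable envelope.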

So, if you’re a product builder (or investor) targeting this space, and you want to find a sufficiently constrained vertical/function to target, where should you focus? I like Bill Gurley’s framework from the latest BG2 Podcast, where he argued that AI will win at “finite games” but struggle at “infinite games.” In other words, start by finding a function that is governed by “finite games” and build there.

Empower users to build with AI. This is what you see open-ended, customer-built products like Zapier, Airtable, Notion, and even Adobe, Figma and Canva attempting to do (as well as countless new “AI-first” startups hoping to disrupt these incumbents). And this is, perhaps, the hardest approach to get right.

Why? Because users don’t yet have an intuition for what AI can do. In this recent interview on the No Priors podcast, Howie Liu (Airtable CEO) shared that, especially amongst enterprise customers, users just don’t understand what these models are capable of. As Howie says, “even though there are so many different applications and you can apply AI to almost any use case in any industry, the gap right now is in imagination and know-how.”

In other words, if you want to succeed as a meta-platform that empowers users to build with AI, you first need to educate your users about what AI can actually do. That’s a tall order. Not impossible, of course. We’ve done this before when products with brand-new capabilities entered the scene (remember those old Macintosh ads…). But it will be a unique challenge these meta-platforms face.

So, it’s hard to build with AI. That much is clear. But what makes it so hard?

I think it’s because, as Sam Altman said, generative AI is kind of like an alien species. And it breaks most of our mental models for what it enables a piece of software to do.

Here’s what I mean. For the past decade or more, it was assumed that whatever you wanted to build was possible. There was basically zero “technology risk.” But it’s different with AI. You can’t just assume it will work (as builders attempting to outsource tasks with AI have seen). And even if it does work, it probably won’t work consistently enough (as builders launching AI-powered features that don’t live up to the hype know all too well). As a result, building with this technology requires a huge shift in the mindset of product and engineering teams.

And it’s not just that. As detailed above, you also can’t rely on well understood design paradigms to bring your product to life (as builders creating copilots have experienced). And you can’t assume your users understand the capabilities well enough to take full advantage of their power, either (as builders of customer-built products have seen). At each step, product builders are faced with another problem to solve. Nothing can be taken for granted.

That’s because what we’re really seeing, in my estimation, is the end of the “SaaS playbook” itself. All the foundations of product building that founders have relied upon for the past decade are being upended. And that makes everything harder. But, as I am nothing if not an optimist, it also suggests to me that many incumbents may have a harder time evolving than first imagined.

In other words, for the past decade, we’ve all been painting by numbers in the same SaaS coloring book. Startups and incumbents alike. Now, for the first time in a long time, we’ve got a blank sheet of paper in front of us. Game on.

David

FROM THE BLOG

No Sleepwalk to Success
Engineering success in a technical startup.

Revenue Durability in the LLM World
Everything about LLMs seems to make revenue durability more challenging than ever.

A Digital Fabric for Maritime Trade
Why we invested in Portchain.

WORTH READING

ENTERPRISE/TECH NEWS

A rational approach to irrational investments. MIT’s Technology Review published a very intriguing interview with Mike Schroepfer, the former CTO of Meta, who has launched a new career as a climate-focused VC and philanthropist. He touches on the significance of some under-the-radar technologies and approaches such as glacial restoration. The interview is especially interesting where Schroepfer discusses the “power of the prototype” and how he thinks about backing extremely radical technologies, both as a commercial investor and as a philanthropist. “I think a lot of what my role in the world is to do is to get us to there. I’m willing to take a lot of risks that these things just don’t work and that people make fun of me for wasting my money, and I’m willing to stick it out and keep trying. What I hope I do is put a bunch of proof points on the board, so that when the time comes that we need to start making decisions about these things, we’re not starting from scratch—we’re starting from a running start.”

Google buys Character.AI in a massive acqui-hire? According to The Information, “Google has agreed to pay a licensing fee to chatbot maker Character.AI for its models and will hire its cofounders and many of its researchers.” It looks like investors will be bought out at a hefty 2.5x multiple to the most recent valuation ($1B enterprise value). Given that total investment into the company was around $150M, the total purchase price for Google looks like it’s going to be much lower than the notional paper value of the last round, despite investors making off with reasonably healthy returns. As more and more overvalued startups seek the exit, we’ll probably see more nontraditional exit agreements such as this one.

Vertical farming goes down? Crunchbase reports that venture capital for the vertical farming industry is drying up. This comes alongside news of German startup Infarm’s struggles to revive its business amidst bankruptcy and legal battles. Crunchbase posits that the main problem here is capital intensity coupled with long time frames that are not compatible with the VC model. It’s also possible, however, that the problem is more deeply rooted: the cost-per-calorie equation does not yet make sense for the vast majority of applications.

HOW TO STARTUP

The funding gap gets bigger. Crunchbase published data that should be of interest to any early-stage founder. Since 2021, seed rounds have trended larger and larger, presumably because founders want more capital to have more time to reach their next financing round. More importantly, however, the graduation rate from Seed to Series A has plummeted since 2021.

An open-source messaging masterclass. Mark Zuckerberg recently wrote a manifesto outlining why Meta has open-sourced its AI models and why it continues to do so with Llama 3.1 405B, their latest model. The letter is a masterclass in both corporate strategy and clear communication. He outlines clearly why open source is good for developers (data privacy, greater control, freedom to build), Meta (ensuring access to the best models), and the world (avoidance of concentrated power, greater AI safety). It’s worth a read both to understand where Mark thinks the AI market is going and also how a CEO can clearly communicate where he is taking his company and why.

HOW TO VENTURE

Echoes of a previous bubble in AI. Charles Hudson, a pre-seed VC with Precursor Ventures in San Francisco, wrote a thoughtful piece outlining his struggles with investing in the era of AI. “In some cases, it has to do with the fact that I am a pre-seed investor and some of the most interesting AI-powered companies skip pre-seed as a stage and raise really large rounds on day one. In other cases, I have questions about moats, defensibility, and the ability to charge a premium for AI-powered products and services when AI capabilities become table stakes and not differentiators. In the last few weeks, it feels like there has been a shift in investor perception of where we are in terms of generating meaningful business value from AI. I paid a lot of attention to Microsoft's earnings announcement and their comments about the money invested in AI infrastructure and when they would see returns on that investment. I found it both sobering and refreshingly honest about how long it will take for this to play out and when they expect to see the benefits of their massive investment.” Hudson goes on to make a convincing case that AI in 2024 resembles the internet in 1999: “There is so much optimism about what this technology can do and how quickly it will be ready for full-scale commercial deployments. The story is much different when I talk to people in the trenches deploying this tech. It's hard to make this stuff work in production. The technology works well, but it isn't perfect. Their customers want the advertised productivity gains but aren't always sure how to adjust their business processes to take advantage of what AI can do. It's harder in practice, and many folks have told me this is inevitable but will take a lot longer than they thought when they started. We may be coming to the end of the unbridled optimism phase and settling into the more important and harder phase of aligning what the tech can do today and what benefits companies can truly reap from it.
“But when I think about what's different this time, one thing really stands out. The dollars the venture capital industry has at its disposal are on a whole different level than 2000. To use a gaming term, we are speedrunning AI and investing so much capital so quickly that we aren't getting the opportunity to learn from what is and isn't working and making adjustments along the way.”

PORTFOLIO NEWS

Forter announced IMPACT 2024, a conference that brings together digital commerce leaders to network and discuss fraud, payments and customer experience.

Datos Health is part of the BIRD Foundation's $6.55M investment in six innovative projects.

CruxOCM CEO, Vicki Knott, says that oil and gas companies do not need to be wary of generative AI.

PORTFOLIO JOBS
