Don't Get Addicted to Fantasy Economics
In 2015, I spent about six months living and working in San Francisco, around the peak of the period when venture dollars subsidized the SF-tech-employee lifestyle. Uber and Lyft were cheap. SpoonRocket was charging so little for lunch that it was questionable whether the economics worked even before the cost of delivery. Laundry apps were in land-grab mode. DTC mattresses were essentially free. Some companies' marketing incentives were so aggressive that people were using billboards to arbitrage Tesla's referral program. It felt like the peak of a weird era in which pensions, endowments, and high-net-worth investors gave their money to VCs, who gave their money to startup CEOs, who hurled it at one another with maximum force to see if it would somehow deliver shareholder value.
And, weirdly enough, it did. This period saw VC as an asset class achieve its best returns in a decade, though they were below those of the early periods that made VC a viable asset class to begin with, and also worse than what the category has generated in the few years since. Those returns are, of course, composed of a mix of some companies that didn't make it and some that survived, got to scale, raised their prices, and reached a better steady state. DoorDash isn't GAAP profitable quite yet, but it's no longer setting prices so low that pizzerias can turn a profit by buying their own pizza through the service.
As it turns out, the model of subsidizing the early version of a product with VC dollars, then getting to critical mass before monetizing it better, was actually a decent one.
Which makes sense, because if you shift around the mix of capex, opex, and COGS, this has always been the tech industry model, going back to software and, even before that, to chips. All of these products had a cost structure weighted towards fixed costs, mostly R&D, while the marginal cost of one more chip or one more CD-ROM, not to mention one more pageview, was minimal. The initial R&D cost for a ride-hailing, mattress-selling, or food-delivering company is minimal, but incremental costs are higher. Most high-margin companies have to take a leap of faith with their initial investment, betting that the unavoidable outlay will be recouped later on. Lower-margin companies have to make a different kind of bet: that their cost structure will improve with scale and experience, and that recurring customers will produce recurring contribution profits long after the initial cost of getting them on board. Of course, this does lead to variance in steady-state margins, but some of this is an artifact of what shows up on the company's P&L and what doesn't. Meta's margins would look worse if they also included the costs of the advertisers who are part of the extended Meta ecosystem. The extended ecosystem of pure software startups and their customers will tend to have a similar margin structure and return on capital to that of full-stack startups, but the former are structured so that the highest-margin slice of the business is contained in its own independent company.
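The two bets can be put in contribution-margin terms. A toy break-even calculation (all numbers invented for illustration, not drawn from any of the companies above): the software-style business needs a bigger leap of faith up front but very few incremental dollars per unit, while the full-stack business lives or dies on its thin per-unit spread.

```python
def breakeven_units(fixed_cost, price, marginal_cost):
    """Units needed for contribution profit (price minus marginal cost)
    to cover fixed costs."""
    contribution = price - marginal_cost
    if contribution <= 0:
        raise ValueError("negative unit economics never break even")
    return fixed_cost / contribution

# Hypothetical software business: heavy R&D, near-zero marginal cost.
software = breakeven_units(fixed_cost=50_000_000, price=100, marginal_cost=2)

# Hypothetical full-stack delivery business: light R&D, heavy per-order cost.
delivery = breakeven_units(fixed_cost=5_000_000, price=20, marginal_cost=17)

print(f"software breaks even at {software:,.0f} units")   # ~510k units
print(f"delivery breaks even at {delivery:,.0f} orders")  # ~1.67M orders
```

The asymmetry in the sketch is the point: the delivery business breaks even on far more units despite a tenth of the fixed costs, which is why its bet has to be that scale and experience widen the $3 spread over time.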
The temporary state in which a company underprices its product to gain market share can become a trap for customers, especially when the justification for that strategy is that switching costs are high or network effects are strong. The problem was not especially dire for SF startup employees in the mid-2010s. By and large, this group survived the transition from $8 SpoonRockets to $20 DoorDash lunches just fine. But restaurants haven't found it trivial to adjust to increases in delivery platforms' effective take rates, and any company that gets its customers from ads has increasingly ceded its economics to the ad-based platforms.
Right now, there's a good case for AI companies to underprice their products, especially if the products are APIs. Usage data is training data, and it's differentiated training data that can give some companies a non-duplicable edge. That thumbs up/thumbs down icon in ChatGPT is constantly accruing valuable training data.
A decent approach for companies using these tools is to guess what the long-term economics will look like, and to grow as quickly as possible while staying in line with those economics. For a previous generation of companies whose growth was fueled by inefficiently-priced ads, that meant developing some kind of durable advantage that would persist and lead to profits even if the ads were priced appropriately; this is something that Expedia and Booking.com were able to accomplish in travel, for example, while many of their subscale competitors started flailing once AdWords clicks for searches like "hotels in NYC" were priced at close to the expected incremental margin from capturing such a click. For AI wrapper companies, it probably means some combination of in-house projects, clever prompt engineering, or using a marketable product to hone an advantage in, well, marketing. The goal for any AI-copywriting-as-a-service or natural-language-search-for-your-text-corpus startup should be to develop something that will be differentiated when both of those categories are commodities.
This is a somewhat perverse strategy, because it means ignoring what is generally your most salient advantage. Jasper could resell API calls at a very nice markup, and prospered while doing so, but the company on the other side of that transaction knows that its economics will be better when there are more companies competing in every wrapper category: once they start competing on price, the usual supply and demand forces apply, so usage, and thus API revenue, goes up as a consequence of the wrapper companies' margins coming down.
(This approach is one reason Amazon, which does now own some high-incremental-margin straightforward-growth kinds of properties, does not succumb to the usual bad habits such categories lead to. Their corporate culture jelled when they were mostly in retail, the Salusa Secundus of industries.)
Paul Graham has written about how good founders need to "live in the future" a bit, and to use and build the kinds of products people will come to expect eventually. This is a good pattern; Dropbox anticipated a world where no one would lose documents because they forgot to hit ctrl-S, and, aside from people who compose long messages directly in a CMS or form, this world has largely come to pass. Stripe imagined a world where someone could accept arbitrary payments by adding a few lines of code to a page, and has gotten quite close to this asymptote. But that's the strategic version of living in the future. The tactical version is to live in an immediate future where the counterparties who were giving you free money have either gotten what they wanted from that approach or gone bankrupt because they didn't.
Habituating to this is annoying, but most kinds of realism are. A business can't survive indefinitely in a steady state based on some third party's irrational behavior. (Even the businesses that do depend on this, like gambling, addictive substances, many parts of education, and certain financial products, end up with regulations that keep them from maximally exploiting it.) The only question is whether the company adapts in advance or waits until its old model has been rendered non-viable by someone else.
Disclosure: Long Meta, Amazon.
Annoyingly, Cambridge Associates, the source for this information, has joined the "don't give it all away for free" trend, and now publishes much less detailed performance numbers. Earlier editions of this report have more details and break out vintages by year rather than by multi-year chunks, but since VC stated performance is noisy for the first few years, we have to compromise a bit. ↩︎
This kind of thing was happening elsewhere, too; Uber, for example, got in trouble for violating iOS rules and deanonymizing some users in China because at the time, their rider subsidies and driver subsidies were large enough that it was profitable to use a second phone to give rides to fake passengers. ↩︎
Amazon is a good case study in this. Their ads business would be worth a lot as a standalone company: trailing revenue is $29bn, and growth last quarter was 23% YoY, more than 2x the consensus for Meta's growth over the next few years. And there's no platform risk! If Meta's worth 6.3x trailing sales, it's plausible that Amazon's ads business would be worth 10x or more, i.e. at least $300bn as a standalone company. But all the wonderful economic traits that make it more defensible than comparatively thin layers like Meta and Google also mean that it doesn't make sense as an independent business, and works best if it's attached to the drearier, lower-margin business of hawking TVs and deodorant as cheaply as possible. ↩︎
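The back-of-the-envelope here is just trailing revenue times an assumed multiple; in miniature (the revenue and Meta multiple are from the footnote, the 10x is the footnote's guess, not a market price):

```python
trailing_ads_revenue = 29e9   # Amazon ads, trailing twelve months
meta_multiple = 6.3           # Meta's price / trailing-sales multiple
assumed_multiple = 10.0       # guess: premium to Meta for faster growth, no platform risk

standalone_value = trailing_ads_revenue * assumed_multiple
print(f"${standalone_value / 1e9:.0f}bn")
# $290bn at exactly 10x; "10x or more" is what gets to the $300bn figure.
```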
Though what's probably more valuable right now is tracking what text users are selecting (presumably to copy and paste). If ChatGPT is doing your homework or your job, OpenAI knows about it, even if you're otherwise passing this informal Turing Test. ↩︎
Companies in the Diff network are actively looking for talent. A sampling of current open roles:
- A company building ML-powered tools to accelerate developer productivity is looking for software engineers. (Washington DC)
- A company reinventing the way Americans build wealth for the long-run by enabling them to access "Universal Basic Capital" is looking for a product manager with fintech experience. (NYC)
- A proprietary trading firm is seeking systematic-oriented traders with ML experience—ideally someone who has displayed excellence in DS and ML, like a Kaggle Master. (Montreal)
- A new service that's trolling the dating market with a better product and better monetization is looking for a full-stack founding engineer. (Los Angeles)
- A startup building a new financial market within a multi-trillion dollar asset class is looking for generalists with banking and legal experience. (US, Remote)
Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up.
If you’re at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.
Exporting Clean Energy
Last week's Capital Gains talked about the sometimes fuzzy line between tradable goods and non-tradables. Electricity is, in a literal sense, hard to trade: it requires expensive infrastructure to deliver over long distances, and most countries don't want to rely on someone else for essential infrastructure if they can help it. But it's also a ubiquitous input, which makes it indirectly tradable: Tokyo Steel usually takes advantage of cheap off-peak energy prices by producing overnight, but is considering a day shift as renewables change the intraday distribution of electricity prices ($, Nikkei). (Incidentally, a good working definition of an "energy-intensive" industry is one where paying the wage premium to run a night shift is worth it purely because of power costs.)
Reverse Turing Tests
The WSJ has a profile of DeepScribe, a healthcare startup that claims to be using AI to generate medical records, but is actually using plenty of human labor ($, WSJ). There's a lot of conventional startup advice that needs to be tweaked when applied to regulated or high-risk fields; rocket companies should not, as a rule, move fast and break things the way casual-game companies can. Leaving aside privacy issues, there is an incentive for companies to pitch a process as more automated than it really is. (In the early days of Amazon, the company would get encrypted payment information on one Internet-connected machine, put it on a floppy disk, and physically walk it over to another machine that wasn't connected to the Internet but was connected to payment networks, so sensitive data would never be stored on an Internet-connected device in cleartext, according to Get Big Fast.) What they're ultimately pitching is an automated system, and that automation is sometimes possible, but even if a process gets to 99% accuracy, a hybrid human-and-AI process generally starts out beating either pure human or pure automation, and creates its own training data for the edge cases the AI can learn to spot on its own.
The general problem the entire finance industry is trying to solve is the mismatch between who has money and who can put it to use. Venture syndicates are a fun addition to the mix, because there's a set of people who have the deal flow to make some interesting early-stage investments, but don't have the capital base to habitually write $50k checks. Syndicating deals partially solves this problem by outsourcing some of the capital needs and some of the paperwork (yes, The Diff has one). But as this post argues, the economics can be very challenging. Part of what distinguishes venture as an asset class is that the capital needs and uncertainty are so front-loaded. There are other categories that have high capital needs, like infrastructure, but they also tend to have economics that are, in principle, more predictable. Angel investing, by contrast, even through syndicates, involves a long period of net cash consumption before there are returns, and an even longer period before there's good evidence of an investing edge.
In 2010, Deutsche Bank acquired another German bank, Postbank. Over a frantic weekend earlier this month, they finally finished ($, FT) integrating the two banks' IT systems. (Probably.) The usual paradox of technology adoption applies here: the earliest users will consequently have the most out-of-date systems. In banking, that often means that banks have mutually incompatible back office systems, custom-built before any kind of standardized offering was available. And the more mission-critical the system is, the longer it's likely to have been since it was last touched. This makes industry consolidation in those sectors a surprisingly expensive endeavor: the theoretical economics are enticing, but the practical task is to reverse-engineer someone's home-grown system and then rebuild it.
Not for the first time, China's GDP data literally doesn't add up ($, FT) (specifically, quarter-over-quarter growth for the last four quarters implies that GDP has grown faster than the reported year-over-year number). Fudging economic data has economics of its own, especially in places where there's both significant state participation in the economy and the state capacity necessary to make the numbers work. China has somewhat exaggerated growth in the past, but this has essentially functioned as a commitment from the government to ramp up spending in order to roughly hit GDP targets. The short-term result is stable growth, but the longer-term results are 1) an increasing reliance on sectors that can be easily stimulated by the government, and 2) a growing gap between aggregate numbers like GDP and more granular ones. The second problem is being quasi-solved by removing some of the economic data analysts previously used to estimate true GDP growth, but eventually the government's choices are either to report more realistic GDP numbers or to report numbers that outsiders view as entirely fictitious.
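The arithmetic behind "doesn't add up" is just compounding: the product of the four quarter-over-quarter growth factors should reproduce the reported year-over-year figure. A minimal sketch with invented numbers (not China's actual prints):

```python
def implied_yoy(qoq_rates):
    """Compound quarter-over-quarter growth rates into a year-over-year rate."""
    level = 1.0
    for q in qoq_rates:
        level *= 1 + q
    return level - 1

# Hypothetical quarterly prints: 1.5%, 1.2%, 1.0%, 1.3% QoQ
implied = implied_yoy([0.015, 0.012, 0.010, 0.013])  # ~5.09%
reported = 0.046  # hypothetical reported YoY figure

# When implied exceeds reported, the two series are mutually inconsistent,
# which is the discrepancy the FT flagged.
print(f"implied YoY: {implied:.2%}, reported YoY: {reported:.2%}")
```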