What Would the Aftermath of the AI Bust Look Like?

In this issue:

  • What Would the Aftermath of the AI Bust Look Like?—AI would be a very unusual capital-intensive bubble if it didn't have a demand hiccup or two along the way. But this collides with another feature of general-purpose technologies: as they scale, their outputs get more fungible with other parts of the economy.
  • Succession—Buffett takes one last opportunity to give subordinates good PR.
  • The Stats—If the government takes a more active role in managing the economy, it's more important than usual to have trustworthy metrics.
  • Going Direct—Venture and podcasting require some complementary skills and can cover similar networks.
  • Banks are Back—"Highest since 2008" is usually a worrying sign, but sometimes it's quite positive.
  • Young Founders—The less experience you need to start a company, the more it implies that there's a big but uncertain opportunity set.

What Would the Aftermath of the AI Bust Look Like?

So far, there hasn't been a general-purpose technology that a) turned out to be truly useful, and b) did not, at some point, destroy a huge sum of capital through overinvestment. In the 1840s, the UK blew through around 15-20% of annual GDP over the course of a few years of railroad investing. Half a century later, the Panic of 1893 meant that a quarter of all US railroad-miles were owned by companies in bankruptcy, and while the US still has one of the world's best freight rail systems, we also have less than half the track miles we used to—half of the steel, dynamite, effort, deaths, etc. were wasted. (Shareholders didn't do all that well, either.) Aggressive extrapolation of manufacturers' growth in the 1920s catalyzed a global crash soon after. And the 1990s telecom boom's malinvestment was somewhere between 1.5% of GDP (based on writedowns of assets) and more than ten times that number (based on lost market cap). Mobile is the rare counterexample, but perhaps an illustrative one: some of its inputs were existing capital categories (we needed fabs to build chips for PCs, servers, cars, etc., so while mobile increased demand, there were backup buyers for that capacity), and building out a mobile network in a developing country was relatively less capital-intensive, at least at first, than wiring every household.

It just wouldn't be far outside the historical norm for all of us to say, a few years in the future, that while AI is very cool and does many useful things, we probably could have done without the last $200bn of capex for now.

But what would that look like in practice?

The first thing to note is that for most of these capex-heavy booms, the problem wasn't the absence of growth, just a slower pace—US electricity consumption had regained its 1929 high by 1935, and compounded at 4.5% annualized over the course of the 1930s. (It's roughly flat today.) Home electrification also had a brief hiccup in the early 1920s, but was back to record levels by the late 1930s. Similarly, railroad use kept rising, adjusted for general economic cycles, through each bust. And of course bandwidth consumption went up plenty in the wake of the telco bust, partly because it was so cheap—excess capacity owned by bankrupt entities is never good for industrywide pricing discipline.
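As a quick check on what that pace implies, here's a minimal sketch; the only input taken from the text is the 4.5% growth rate:

```python
# Minimal sketch of the compounding claim above: a decade of 4.5% annualized
# growth, bust or no bust, still leaves consumption more than 50% higher.
rate = 0.045
decade_multiple = (1 + rate) ** 10  # ten years of compounding
print(f"10 years at 4.5% annualized: {decade_multiple:.2f}x")  # ~1.55x
```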

In some ways, AI is naturally resistant to demand shortfalls: the more things a given product can substitute for, the less likely it is that demand will completely evaporate. The resilience of some cities that deindustrialize is instructive: the richer a city gets, the more the critically expensive input is, basically, land. You have to pay someone better than the national average for them to afford living in NYC, and your factory also needs to be unusually productive to afford those rents. So names like the “Garment District,” the “Meatpacking District,” and most recently “Hudson Yards” provide some local flavor and a reminder of the city's past. That real estate has been repurposed to something more profitable, and the impetus was that land costs made lower-productivity industries too expensive to keep around. Similarly, the oil industry didn't suffer too much when the 2008 financial crisis slowed what had previously been an insatiable rise in developing-world demand for oil; producers didn't run into serious problems until 2014, and even then the cause was discretionary supply-side decisions rather than a demand shortfall.

AI investments get more fungible all the time, in two directions:

  1. On the output side, LLMs were a good substitute for very basic customer service pretty early on, and could speed up writing some kinds of boilerplate code starting with GPT-3. Here's a Reddit post from shortly after ChatGPT launched, but it's specifically about how GPT-3 can answer questions that are too embarrassing to ask an experienced programmer.
  2. On the input side, a growing share of the investment underwritten by AI is for power, not just GPUs. The marginal buyer of a gas turbine or a recommissioned nuclear power plant is certainly hoping that power demand will rise inexorably, but there are always air conditioners, EVs, aluminum smelters, and the like as demand sinks, not to mention all the kinds of demand/new activity that very cheap power can induce.

A decent story for an AI slowdown goes like this: some of the big labs start to see that scaling laws either peter out or just reach the point where the cost is too high. Maybe they decide to throw money at the problem, and get a GPT-5 that's worth using in comparison to 4o, but not worth upgrading accounts to access. (That's more or less what happened with 4.5, which was originally going to be released as GPT-5.) There's still plenty of room to deploy current models, but orders for EUV machines, datacenter capacity, and the electricity to power them all start to look like bad moves. But some of these are slow-moving decisions, and even if an investment is value-destroying in retrospect, it might be a better deal to finish it and accept a return below the cost of capital than to take the loss entirely.[1]
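Footnote 1 unpacks that last point, and the arithmetic is worth making concrete. A minimal sketch with invented numbers (the $100 budget and payoff figures are illustrative, not from the original):

```python
# Illustrative numbers only: a project budgeted at $100 that was originally
# expected to return $150, re-evaluated at the halfway point.
budget, expected_return = 100.0, 150.0
spent = 50.0             # already sunk
revised_return = 80.0    # "a little over half" of the original $150 expectation

remaining_spend = budget - spent
marginal_roi = revised_return / remaining_spend  # what the *next* dollars earn
total_roi = revised_return / budget              # what the project earns all-in

print(f"Marginal return on completing: {marginal_roi:.1f}x")  # 1.6x: finish it
print(f"All-in return: {total_roi:.1f}x")  # 0.8x: a bad investment in retrospect
```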

If that happens, we'll be in a very interesting situation: the Great Token Glut, where hardware that was going to go into training goes into inference instead, and it gets very hard to compete on general quality because the models are already good enough at so many tasks.

If there's a widespread belief that model capabilities have peaked, this hits valuations throughout the AI space. But it's actually incredibly bullish for the wrapper companies (which the labs themselves are increasingly becoming; see Claude Code, Codex, Anthropic for Financial Services, etc.). The things that typically kill those companies are that either a) general-purpose models get good enough at whatever it is that they specialized in, or b) their product works, but their unit economics are upside-down because they're paying too much for inference. If capabilities peak, both of these problems suddenly go away: their inputs get cheaper and they don't have a looming competitor to worry about.

And there's still a lot of deployment ahead. Part of LLM adoption is getting a sense for exactly which tasks can be offloaded to AI and which ones can't, and then figuring out which ones previously weren't worth doing, but are worth it now. (In retrospect, it would have been a good idea to jot down every thought I've ever had that started with "If I had unlimited free time, I'd...".) Just like everyone has had to develop a sense for when to toggle between email, messaging, and Zoom calls or in-person meetings for communication, we'll figure out how to toggle between thinking, writing, and just asking an LLM for help on intellectual tasks.

One heuristic for how much upside there is from this is to consider a direct analogy—LLMs are smart, but unreliable, and sometimes they need a lot of context before they can perform useful work. So are people! We start out with very capable brains and no marketable skills, but most of us reach the point where we can do economically useful work. Education averages about 5% of GDP for OECD countries, but that's a low estimate for the total share of economic activity that is in some sense education. Plenty of white-collar jobs are mostly in the education sector, in the sense that people spend a substantial amount of time reading or figuring things out rather than directly producing their work product. There's a lot of education ahead, in the form of better post-training, better prompts, and a better sense of which models work well for which use cases.

And, of course, there's plenty of non-AI downside in the event of an AI bust. AI capex growth alone is ~6% of nominal US GDP growth.[2] If that slows, overall growth does, too, and at least some of the AI-induced growth will be deflationary rather than inflationary until it gets implicitly captured in higher spending on ads ($, Diff).
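As footnote 2 notes, that figure comes from comparing 2024 spending to 2025 guidance. Here's a hedged reconstruction of that style of arithmetic; every input below is a round illustrative number, not the one actually used:

```python
# Every input here is a round, illustrative number, not the footnote's data.
ai_capex_2024 = 230e9       # hypothetical big-tech AI capex, 2024
ai_capex_2025 = 330e9       # hypothetical 2025 guidance
nominal_gdp = 29e12         # US nominal GDP, roughly $29tn
nominal_growth_rate = 0.05  # ~5% nominal growth

capex_delta = ai_capex_2025 - ai_capex_2024     # $100bn of incremental capex
gdp_growth = nominal_gdp * nominal_growth_rate  # ~$1.45tn of nominal growth

print(f"AI capex growth / nominal GDP growth: {capex_delta / gdp_growth:.0%}")
# ~7% with these inputs; the ~6% in the text comes from the actual figures.
```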

One thing that will change is that we'll escape the zero-lower-bound trap for white-collar productivity. In theory, knowledge workers should be continuously getting better at their jobs, through the accumulation of experience, tricks, osmosis from colleagues, etc. In practice, it's entirely possible to stall out. But if there's a substitute that's at parity for some tasks, ahead in others, and still improving in terms of how it's deployed even if it isn't in terms of absolute ability, then the baseline sustainable pace for productivity improvements is higher. An economy where people get fired because they got 3% better at their job when the baseline is now 4% is a more brutal one, but it's also a more productive one, with a big enough tax base to mitigate some of the distributional effects of this. (At least as long as the models used to write the tax code are comparable to the ones that try to game it.)
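To make the 3%-versus-4% point concrete, here's a minimal sketch using the rates from the paragraph above (the ten-year horizon is arbitrary):

```python
# A worker improving 3%/year against a baseline improving 4%/year gets better
# in absolute terms every year while steadily falling behind in relative terms.
worker, baseline = 1.0, 1.0
for year in range(10):
    worker *= 1.03
    baseline *= 1.04

print(f"Worker after 10 years: {worker:.2f}x starting productivity")  # ~1.34x
print(f"Relative to the new baseline: {worker / baseline:.0%}")       # ~91%
```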

We're at a strange point in model capabilities, where you really wouldn't trust them to do 100% of your job but can potentially restructure your life a bit and have them do a lot of it, giving you either more free time or a much higher income (and also probably lowering the market value of the output, at least on a unit basis). But the models are also pretty good at falling into traps like excessive sycophancy, and if the pace of improvement slows down, retention will be a bigger deal than potential future upsells, so the problem will get worse. Consumer-facing chatbots are kind of like dogs, in that they don't have very advanced meta-reasoning but do face constant selection to be liked in order to keep getting access to resources, and, like dogs, the specific strategy they get selected for is to really, really like you. In a world where models keep getting smarter, the model has more of an incentive for tough love—it wants you to be the kind of successful, high-agency person who can afford the ultra-premium tier. Otherwise, the models will probably keep being too nice.

Like other pieces about AI, this is speculative, and with the current pace of AI progress, the shape of an AI slowdown will be different if it happens in a year or two. It's still something worth thinking about, because lots of people are still implicitly preparing for a world where models don't keep improving, without thinking quite enough about how big their impact will be if they're only slightly more capable but a lot cheaper.


  1. This isn't a claim about the sunk cost fallacy, just one about thinking on the margin. If, after you've spent half the total investment something requires, you determine that its return on investment is a little over half of what you expected, the marginal dollars you spend on completing it still produce a worthwhile return. It's just that the overall return is lower than expected. ↩︎

  2. That's based on comparing 2024 numbers to 2025 guidance, and making the assumption that most companies are mostly spending in their home markets. Further, note that growth contribution numbers always look big, and the more finely you divide them, the larger the total magnitude of pluses and minuses. A static GDP growth number is the net result of gross gains and losses across different sectors and demographic cohorts. ↩︎


You're on the free list for The Diff! Last week, paying subscribers read about why lobbying should play a growing role in policy as the problems get more complex ($), more products will get turned into feeds ($), and the mystery of the AI leisure time dividend ($). Upgrade today for full access!


Diff Jobs

Companies in the Diff network are actively looking for talent. See a sampling of current open roles below:

  • Thiel fellow founder (series A) building full-stack software, hardware, and chemistry to end water scarcity, is looking for an ambitious software engineer to help build the core abstractions that enable global cloud seeding operations - from mission planning to post-flight analysis. If you want to use your software engineering toolkit to help solve a substantive problem in the world of atoms and have experience with ERP/MES systems, data streaming, and API design, please reach out. (Los Angeles)
  • A blockchain company that’s building solutions at the limits of distributed systems and driving 10x performance improvements over other widely adopted L1s is looking for an entrepreneur in residence to spearhead (prototype, launch, grow) application layer projects on their hyper-performant L1 blockchain. Expertise in React/React Native required. Experience as a builder/founder with 5–10 years in consumer tech, gaming, fintech, or crypto preferred. (SF)
  • A Series B startup building regulatory AI agents to help automate compliance for companies in highly regulated industries is looking for legal engineers with financial regulatory experience (SEC, FINRA marketing review, Reg Z, UDAAP). JD required; top law firm experience preferred. (NYC)
  • A leading AI transformation & PE investment firm (think private equity meets Palantir) that’s been focused on investing in and transforming businesses with AI long before ChatGPT (100+ successful portfolio company AI transformations since 2019) is hiring Associates, VPs, and Principals to lead AI transformations at portfolio companies starting from investment underwriting through AI deployment. If you’re a generalist with a technical degree (e.g., CS/EE/Engineering/Math) or comparable experience and deal/client-facing experience in top-tier consulting, product management, PE, IB, etc. this is for you. (Remote)
  • Well funded, Ex-Stripe founders are building the agentic back-office automation platform that turns business processes into self-directed, self-improving workflows which know when to ask humans for input. They are initially focused on making ERP workflows (invoice management, accounting, financial close, etc.) in the enterprise more accurate/complete and are looking for FDEs and Platform Engineers. If you enjoy working with the C-suite at some of the largest enterprises to drive operational efficiency with AI and have 3+ YOE as a SWE, this is for you. (Remote)

Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up.

If you’re at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.

Elsewhere

Succession

Last week The Diff noted a WSJ piece on how Berkshire Hathaway holds some of its equity investments at a higher valuation than their current market price ($), a treatment that's allowed under accounting rules and may well be accurate, but that means Berkshire would have to report an embarrassing multi-billion dollar loss if Buffett's successor decided to sell one of the stakes. As it turns out, Berkshire wrote down its Kraft Heinz position by about $5.0bn pretax. The 10-Q covers the period through June 30th, so Berkshire actually dealt with this problem in advance. It's a nice little capstone to Buffett's career as a business communicator: it's accurate as a statement of fact, and it's timed in a way that makes things easier for Greg Abel while making Buffett's own book value-creation track record look slightly worse—but only to someone who didn't carefully read the relevant SEC filings.

The Stats

An interesting feature of reputations is that at any given time, correcting yourself makes it look like you don't know what you're talking about, but if you've never corrected yourself, it's very likely that you're lying, either to others or to yourself. So organizations that want to be seen as reliable often have formal mechanisms for correction: accountants produce interim results, then audit them and sometimes need to correct things; news outlets post corrections in articles; academic journals maintain correspondence sections (and the academics have Twitter). The BLS also sometimes corrects numbers it has issued before, as happened last week when most of the previous two months' reported job gains were revised away. So, Trump fired BLS Commissioner Erika McEntarfer.

There's value in knowing what's going on in the economy. And there's also value in making sure the numbers look good—fudging macro data is basically a sub-branch of monetary policy, just under a different part of the org chart. It's a way to make people who read the business section feel more optimistic, and those people tend to be overrepresented among the people who make cycle-driving decisions around capex and the financing that backs it. If unemployment numbers are artificially low, there will be at least some people who decide that they're just unlucky, and that the rest of the country is doing fine, and the kinds of people who separate their view of the economy as a whole from year-over-year changes in their income are also overrepresented in the electorate.

But, like other forms of policy-driven credit expansion, short-term benefits can come at the cost of a worse long-run equilibrium. Especially given Trump's aggressive plans to reshape the US economy, accurate performance numbers are a necessary input. Tariffs, for example, will necessarily have some course corrections—a default tariff of zero simply has fewer means of providing policy feedback than a tariff system that ideally tries to protect industries in which the US can be competitive while limiting the frictional cost of importing inputs into that industry (a tariff on steel is a tax on domestic auto production, for example, and you don't really know the relevant elasticities—how many auto jobs lost per steel job gained—until you've had the policy in effect for a while). So the Trump administration has an unusually practical interest in accurate data. It will be hard to pull the same fire-the-statistician move twice in a row, which means that the trustworthiness of US economic data will come down to whether the commissioner seems like the kind of person who'd quit their job in protest or not.
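To see why those elasticities matter, here's a toy version of the steel-to-autos arithmetic; every parameter below is invented for illustration, which is exactly the problem the paragraph describes:

```python
# Toy model, all parameters invented: a steel tariff as a tax on auto production.
steel_tariff = 0.25               # 25% tariff on imported steel
steel_share_of_auto_cost = 0.07   # steel as a fraction of a car's production cost
pass_through = 0.8                # fraction of the tariff hitting domestic steel prices
auto_demand_elasticity = -1.2     # % change in auto demand per 1% change in price

auto_cost_increase = steel_tariff * pass_through * steel_share_of_auto_cost  # ~1.4%
auto_demand_change = auto_demand_elasticity * auto_cost_increase             # ~-1.7%

print(f"Auto cost up ~{auto_cost_increase:.1%}, auto demand down ~{-auto_demand_change:.1%}")
# The jobs math (auto jobs lost per steel job gained) depends entirely on these
# parameters, which you only learn by running the policy for a while.
```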

Going Direct

Bloomberg has a piece on why the VC/podcast overlap is so high, a phenomenon with which I have some familiarity. One reason is, of course, that podcasts are not just a pretty easy thing to produce but also pretty close to the work product of actual investors, i.e., a discussion about what's happening in the world right now (just followed up by an investment that takes advantage of this theme rather than an ad break). More seriously, podcasting is very complementary to being a central node in a fairly broad network that rumors and vibe shifts can't instantly traverse. In some industries, there are jobs that lead to disproportionate connections, and those connections probably generate enough news- and rumor-flow to sustain a few podcasts.

Banks are Back

European banks are trading at their highest prices since 2008 ($, FT), which is a lot less alarming than it sounds. As a general rule, when banks trade at a high multiple to book value, especially relative to other sectors, it implies that they're doing some combination of excessive risk-taking and crowding out other economic activity. But when they trade at a discount to book value, it implies that their optimal course of action is to gradually liquidate. In a healthy economy, $1 of retained earnings at a bank is worth more than $1 in market cap, and it looks like Europe's banks are finally getting to that point again.
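One way to formalize that book-value logic is the standard justified price-to-book formula (not from the article; the ROE and cost-of-equity inputs are illustrative):

```python
# Justified price-to-book under a simple Gordon-growth model:
# P/B = (ROE - g) / (r - g), where r is the cost of equity and g is growth.
def justified_pb(roe: float, cost_of_equity: float, growth: float) -> float:
    return (roe - growth) / (cost_of_equity - growth)

# Bank earning above its cost of equity: $1 of retained book is worth > $1.
print(justified_pb(roe=0.12, cost_of_equity=0.10, growth=0.04))  # ~1.33x book

# Bank earning below its cost of equity: retained earnings destroy value,
# and the kindest path is gradual liquidation.
print(justified_pb(roe=0.06, cost_of_equity=0.10, growth=0.02))  # 0.5x book
```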

Young Founders

The NYT profiles the cohort of very young founders in AI, who are mostly one layer away from the model developers in either direction. This is a useful indicator to watch because it's a rough measure of the magnitude of new unexploited opportunities a given technology has produced. Young founders were part of the late 90s narrative, and that of the 2010s, but less so lately: in Tyler Cowen's interview with Paul Graham two years ago, Cowen notes that the important people in tech seem a little older than they used to be, and some of the same ones you'd name a decade ago are still on the list of young, surprisingly high-impact people. If there are lower barriers to entry now, there's more room to start interesting companies again.