Just Like 2007 (In a Good Way)

Plus! Diff Jobs; Asset Heterogeneity; Ads; Sales and Marketing; Breadth; More Convergence


Just Like 2007 (In a Good Way)

Peter Thiel once pointed out that every peak of mania is a peak of clarity. In March of 2000, people saw the fully online future more clearly than ever, and unfortunately hallucinated some details around timing and exactly where or by whom the value would be captured. Three quarters of a century earlier, the investors who were buying RCA, GE, GM, and Insull's utilities stocks were more right about the world's electrifying and automotive future than investors of 1927 or 1931 were.

That applies to market peaks that were driven by less tangible factors. In the late 60s, one of the most thrilling stories you could tell equity investors was that you were assembling a portfolio of companies in unrelated businesses and using your management brilliance and financial engineering skill to make the collective worth more than its parts. That was the pitch for ITT, Ling-Temco-Vought, Teledyne, etc. But even contemporary observers didn't imagine a Yoyodyne or Soylent or a J R Family of Companies that operated at the same scale as the biggest private equity firms today. Investors in the late 90s were absolutely right that losing money in order to acquire repeat customers for a business with high fixed costs and relatively low marginal costs was a viable strategy, even if they were mostly wrong about which companies could do this. Speculators in the 20s who bought stocks at a 100% premium to par, even if those stocks had a low dividend yield, did end up getting the last laugh—equity investors today don't even think about things in those terms; from the 1920s perspective, we all think like the gamblers who were pushing up RCA and not the responsible investors who stuck with railroads.[1]

The mid-2000s were a peak period for a different kind of financial engineering, in the form of bundling together a large number of loans, and then slicing the overall returns into different tranches. It's straightforward to illustrate this if we imagine an instantaneous structured product created to bet on the outcome of a series of coin tosses. Individual coin tosses are 50/50, but you can manufacture a safe tranche that only loses money if every toss comes up tails, and that one is only going to lose money a bit less than 0.1% of the time. And the reason you'd do this is that some investors want to take risk in a category, but not to underwrite specific loans, while others want to underwrite but don't have the balance sheet to hold these loans indefinitely. So, tranche away!
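As a back-of-the-envelope sketch of that arithmetic (the ten-toss pool size is an assumption the text doesn't state; it's chosen because it's consistent with the loss rate above and with the "six times higher" figure below):

```python
# Back-of-the-envelope on the tranche arithmetic above. The pool size of ten
# tosses is an assumption; the essay doesn't specify it.
n_tosses = 10

# Fair coin: the safe tranche only loses if every toss comes up tails.
p_loss_fair = 0.5 ** n_tosses
print(f"Loss probability, fair coin:   {p_loss_fair:.4%}")    # ~0.0977%

# Biased coin: heads only 40% of the time, i.e. 60% tails.
p_loss_biased = 0.6 ** n_tosses
print(f"Loss probability, biased coin: {p_loss_biased:.4%}")  # ~0.6047%
print(f"Ratio: {p_loss_biased / p_loss_fair:.1f}x")           # ~6.2x
```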

The people putting these products together knew that a coin toss was not a great illustration, or rather that the right way to think about this is to imagine that you have a biased coin whose specific bias is unknown: maybe in some coin toss-backed structured products, the coin shows heads only 40% of the time, and suddenly the default rate on your safest tranche is six times higher. There were two mostly-unrelated problems they didn't quite grasp:

  1. On the credit side, housing prices and mortgage defaults were partly a function of the flow of housing-related credit; every new subprime-backed collateralized debt obligation made existing houses a little more valuable, which gave homeowners a little more flexibility to use refinancing to extract equity in order to smooth out their own consumption. So some of the usual inflationary pressure that starts to slow down consumption in a booming economy, like high gas prices, didn't put homeowners in a position where they had to cut one part of their budget to maintain another. It's nerve-wracking when your expenses are rising faster than your income, but it's soothing if, even after taking out a home equity line of credit, you have more home equity than you did a few years earlier and are in some sense a richer person who's just enjoying the returns from a savvy investment in exurban Phoenix real estate. What all of this led to was higher correlations within specific structured products; the good ones were better than people thought, and the bad ones were worse.
  2. The thing that turned this from a speed bump for banking into a financial crisis was that so many of these products were financed with overnight loans. Once liquidity providers thought there was a chance that some AAA-rated tranches were actually worth 90 cents on the dollar instead of 100 cents on the dollar, they didn't want to lend against any of them, and if investors had previously levered up 30:1 to buy these products, that meant that suddenly, everyone had to sell at once. Directionally, this problem still exists any time there's leverage, but in practice, if bank leverage is closer to 10:1 than 30:1, and if it's provided in a way that's more transparent to central banks, and if those central banks also have more authority to intervene directly in markets to support asset prices, a 2008-style crash can't happen. (The next one will be completely different—stay safe out there!)
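To put rough numbers on that second problem, here's a minimal sketch using only the figures above (30:1 leverage, AAA paper marked from 100 to 90 cents); the portfolio size is arbitrary:

```python
# Why a modest markdown plus overnight funding forces everyone to sell at once.
# Figures from the text: 30:1 leverage, AAA paper marked from 100 to 90 cents.
assets = 100.0                 # arbitrary portfolio size
leverage = 30                  # assets per dollar of equity
equity = assets / leverage     # ~3.33 of equity supporting 100 of assets
markdown_loss = assets * 0.10  # 100 cents -> 90 cents on the dollar

print(f"Equity cushion: {equity:.2f}")
print(f"Mark-to-market loss: {markdown_loss:.2f} ({markdown_loss / equity:.1f}x equity)")

# The same markdown at 10:1 leverage is painful but closer to survivable.
print(f"Loss at 10:1 leverage: {markdown_loss / (assets / 10):.1f}x equity")
```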

Some structured products completely disappeared after the crisis; you just don't see banks issuing many CDO-squareds these days. On the other hand, collateralized loan obligations (i.e. structured products backed by loans to companies, often for buyouts) are twice the size of their 2007 peak, and commercial mortgage-backed security issuance is about two thirds of its pre-crisis peak.

And the general idea that you can bundle together a bunch of risky cash flows, manufacture a safer asset from them, and then lever it up is still popular. In fact, it's a hot category in the equity markets right now—that's exactly what neoclouds like CoreWeave, Nebius, Crusoe, Applied Digital, IREN, etc. are! Their basic model is to intermediate in a two-sided market: big AI users, including AI users with cloud infrastructure of their own, want access to compute, while Nvidia can afford to be choosy about who gets its GPUs; the neoclouds sit in between the two.

Neoclouds, like pure-play chip foundries, are a weird combination of statistical corporate cowardice and narrower corporate bravery: they're completely agnostic as to what the end source of demand will be—maybe we'll all wear smart glasses that Ghiblify our every interaction, maybe searching for solutions to Erdős problems will be the modern equivalent of a big game hunt or climbing Mount Everest (in that it requires some skill but is also a very expensive way to deploy that skill), or maybe there will be an endless arms race where the compute required to recognize AI slop gets asymptotically closer to the amount of compute required to generate it, to the point that every available GPU and TPU is at all times devoted to either producing or identifying AI slop. Whatever the source turns out to be, though, they're betting the entire business on that demand showing up.

But they're also a hybrid asset class, halfway between a forward-flow agreement and a NAV loan. A forward-flow agreement is one of those post-crisis structured product innovations: instead of taking a roster of existing loans and securitizing them, you create a security that automatically buys future loans, as long as they fit the right criteria. This is a good vehicle for funding a peer-to-peer financial marketplace like LendingClub (which is only peer-to-peer if all of your peers happen to work at specialized credit funds) or for creating a long-term source of funding for BNPL loans; they'll churn through lots of $50 pay-in-four transactions, balanced by longer-term purchases of Peloton bikes and Eight Sleep mattresses, such that the mix lands at whatever average FICO investors are willing to pay for. A NAV loan is a different product, with an only vaguely similar use case (strictly speaking, the mechanism described here is its cousin, the subscription line, which lends against capital that limited partners have committed but not yet paid in). The idea is that for a venture or private equity firm measuring its returns, the clock starts when it requests capital from its limited partners. If the firm can write a check before it does that, it can compress the timeline over which returns accrue and produce a little extra upside. From a bank's perspective, once limited partners are locked in as investors, lending against future capital calls is quite safe. So, larger funds will fund their investments by borrowing first and only then drawing down capital from limited partners. If they make money, this mechanically increases their internal rate of return by deferring the start date. And if they don't, it doesn't really matter whether losing half of their investors' money works out to a -13% annualized return or a -14%.
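To see the IRR mechanics concretely, here's a minimal sketch; the 2x multiple, five-year horizon, and six-month deferral are illustrative assumptions rather than figures from the text, though the downside case lands close to the -13%/-14% comparison above:

```python
# How borrowing ahead of a capital call compresses the measured holding period.
# For one call and one distribution, annualized return = multiple ** (1 / years) - 1.
def annualized(multiple: float, years: float) -> float:
    return multiple ** (1 / years) - 1

horizon = 5.0    # assumed years from capital call to exit
deferral = 0.5   # assumed: the call is delayed six months by borrowing first

for multiple in (2.0, 0.5):  # a winning fund, and the lose-half-the-money case
    base = annualized(multiple, horizon)
    boosted = annualized(multiple, horizon - deferral)
    print(f"{multiple:.1f}x: {base:+.1%} without deferral, {boosted:+.1%} with it")
# 2.0x: +14.9% -> +16.7%; 0.5x: -12.9% -> -14.3% (roughly the -13% vs -14% above)
```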

The set of cash flows backing this is tricky to analyze; it's whatever the last four quarters' ~$225bn in AI-related equity raises is funding. That compares to about $12.2bn raised by the neoclouds over the same time period. It's a reasonable assumption that most of the time, a dollar invested into a big round raised by an AI lab or application company will be worth $0 in a few years; it's a high-risk business with many losers. But it's also likely that at least a few new enterprise software primitives and successful consumer apps will be built by these companies. And if that's the case, those companies will have recurring revenue.[2] For the ones that do, it's a wonderful financial convenience that so much of the work has been done in advance: instead of anyone having to figure out which companies would work as PE portfolio companies and which wouldn't, a PE-style capital structure has already been applied to whatever slice of contribution margin is best suited to supporting borrowing.

At a high enough ratio of neocloud borrowing to venture capital equity investing, this would be wildly irresponsible: if you assume that roughly half of venture investment in AI companies will go to infrastructure providers, then the neoclouds are really betting that even the marginal survivors, the worst of the best 10% of AI companies, will be able to pay their bills. Part of the magic here is that in the event that AI equities collapse, neocloud economics could look better. The slowest-moving part of the supply chain is energy, which means that if there's a series of bad training runs and AI investors aren't willing to fund more of them, neoclouds will be doing inference in an environment where energy prices keep coming down, and where there's also a glut of chip production capacity that makes the next generation of hardware cheaper, too.
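A back-of-the-envelope version of that ratio, using only the figures already cited above (the 50% infrastructure share is the assumption stated in the text; treat the output as illustrative):

```python
# Back-of-the-envelope on the flows described above (all figures in $bn).
ai_equity_raised = 225.0  # trailing-four-quarter AI-related equity raises
share_to_infra = 0.5      # the text's assumption: roughly half goes to infrastructure
neocloud_raises = 12.2    # raised by the neoclouds over the same period

implied_compute_spend = ai_equity_raised * share_to_infra
print(f"Implied spend on infrastructure: ~${implied_compute_spend:.0f}bn")  # ~$112bn
print(f"Neocloud capital raised: ~${neocloud_raises:.1f}bn")
print(f"Spend per dollar of neocloud capital: ~{implied_compute_spend / neocloud_raises:.0f}x")  # ~9x

# That ratio looks comfortable while it's funded by equity raises; the bet is
# that the surviving ~10% of customers keep paying once the equity stops.
```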

This is still not a completely responsible bet. There are many companies with a longer history of cash flows and a more concrete sense of collateral values out there. But transformative tech buildouts generally aren't funded by the most responsible market participants—those are the ones buying t-bills. Instead, they're funded by people who look through the financial structure of individual risk takers, in order to focus on the totality of the risk being taken. And in that model, companies that borrow today to buy compute for companies that don't even exist yet aren't as crazy as they look.


Disclosure: Long NVDA.


  1. To unpack this a bit: there used to be a tradition where a stock would have a par value that served a similar function to the face value of a bond, and this par value was supposed to represent the value of the company's assets. These investors implicitly assumed that everyone gets roughly the same return on capital, though public equity didn't necessarily reflect this. If you read references to "watered stock" from earlier periods, that refers to companies overestimating the value of their assets so they can go public at a higher valuation. It correlated with paying an unsustainably high dividend and with plenty of other abuses besides, but didn't have to. Eventually, helped along by electrification enabling high-resolution capex, companies started retaining more of their earnings, reinvesting those in useful projects, and generally turning EPS rather than dividend yield into the main anchor for valuation. It took them a lot longer to figure out that companies differ not just in how much scope they have for reinvestment, but in what the marginal returns on that reinvestment are. ↩︎

  2. Really, AI companies should stop talking about "ARR," in the same way that consumer Internet services eventually realized that total registered users was a metric that could be improved by making the flow for recovering lost accounts worse. The revenue doesn't recur, and there are almost certainly companies today that promote an "ARR" number higher than the cumulative revenue they'll produce over their entire existence. Well-hyped fields have structurally high churn ($, Diff), at least until someone figures out what keeps users coming back and, even harder, convinces them to update their payment information. What these companies really ought to measure is something closer to quarterly gross profit, with COGS adjusted upward to account for whatever subsidies they're getting from model and infrastructure providers. Just like switching from registered users to weekly active users, this means showing a much lower number but also showing a number that can guide reasonable business decisions. ↩︎
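A minimal sketch of the metric that footnote proposes; all of the numbers are made up for illustration:

```python
# The footnote's proposed metric: quarterly gross profit with COGS grossed up
# by whatever subsidies (credits, discounted compute, free model access) the
# company receives. All figures are illustrative.
quarterly_revenue = 12.0  # $mm
reported_cogs = 7.0       # $mm, net of subsidies
subsidies = 3.0           # $mm of credits and below-market compute

reported_gp = quarterly_revenue - reported_cogs
adjusted_gp = quarterly_revenue - (reported_cogs + subsidies)

print(f"Reported gross profit: ${reported_gp:.1f}mm ({reported_gp / quarterly_revenue:.0%})")
print(f"Subsidy-adjusted:      ${adjusted_gp:.1f}mm ({adjusted_gp / quarterly_revenue:.0%})")
```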

You're on the free list for The Diff. Last week, paying subscribers read about thermostatic state capacity ($), another lens for economic convergence between software and other sectors ($), and thoughts on trucking safety company Motive ($). Upgrade today for full access.



Diff Jobs

Companies in the Diff network are actively looking for talent. See a sampling of current open roles below:

Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up.

If you’re at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.

Elsewhere

Asset Heterogeneity

The count of publicly traded companies is a contentious topic, because there are companies with a ticker but no volume and there are private companies that don't have a ticker but do have a bid/ask spread. But the range is somewhere between 10,000 and 100,000 businesses that can reasonably be described as publicly traded. These companies have led to about 1.5m listed options contracts. In light of that, what does it mean when CoinGecko finds that more than 10m cryptocurrencies ceased to be actively traded in 2025? Underwriting fees used to be sticky, in part because nobody wanted to run the kind of company that had to go with a lower-tier underwriter just to go public. But the costs of going public are also fixed in the sense that any company planning an IPO needs to produce a lengthy document detailing the kind of business it's in, its financial history, any conceivable risk that could cost investors in the business money, etc. ICOs don't bear those costs. A company needs some combination of economic fundamentals and hype to go public, and when the friction to doing so is zero, it turns out that hype is easier to produce than substance. The good news for crypto is that these coins are a tiny fraction of crypto market cap, and are basically a sideshow disconnected from the rest of the crypto economy. The downside is that the crypto industry doesn't have much control over what people think when they hear the word "crypto," and that's a major input into how they're regulated.

Ads

The Diff has long argued that LLMs are best monetized through ads. That's true by induction (one of the best businesses in the world monetizes by letting people type into a search box and showing ads based on what they type) and by deduction (if someone types most of the important decisions of their life into some app, and that app is run by a company good at extracting signals from text, it's very valuable to direct their spending decisions). So, it's not a surprise that OpenAI is penning essays explaining why it's going to monetize with ads. One thing they commit to is that they won't influence the results they generate based on commercial considerations. You could imagine future revenue as just one more parameter in the model that chooses the next token, but they're opting out. One reason for that is that they'd like to make it awkward for Google to behave differently. Search is more amenable than chatbots to a setup where the generative result gently steers a user in the direction of a more commercial outcome, so OpenAI is giving up less than Google here. On the other hand, it's inevitable that commercial considerations will affect chatbots at some point, even if that's not an explicit aim: there's evidence that ads fill information gaps in organic search results, and that users may actually prefer search-like experiences shaped by commercial considerations. If that pattern holds for chatbots, this commitment will probably end up being walked back or quietly ignored.

Sales and Marketing

Mobile game companies tend to have a skewed distribution of revenue per customer: most people won't pay anything, a few will make intermittent purchases, and some customers have a basically unlimited appetite for making in-game purchases, to the point that it makes sense to hire customer service reps, enroll players in sweepstakes, and give them perks like vacations. It's an interesting example of how many purely online companies eventually find that doing things offline is a complement. The way the games typically work is that they have some kind of in-game economy, with different limiting factors, and they try to design it so players get addicted to some outcome that will happen eventually, but more slowly unless they pay (for other games, the digital products are cosmetic; if players are mostly socializing online, that's where their budget for dressing nicely will go). Given that the marginal cost of creating a virtual magic gem is zero, and that they can charge whatever they want for it, this is a nice business: they're basically building a virtual universe entirely around imbuing particular bits with value to users, and then selling them that value. But there's also an exchange rate between future gaming activity and current real-world rewards that make these players feel like they're being treated well.

Breadth

Walmart is increasingly offering upmarket products like De’Longhi espresso machines. It's common online for companies to grow by differentiating themselves and then have their features converge on the competition (witness longer-form video apps getting into short video, and vice-versa, or the social network whose original pitch was that every post was 140 characters or less offering a million-dollar prize for long-form writing). After a while, their edge is more in distribution and comprehensiveness than in having unique appeal to a particular kind of customer. Retailers face a different variant on this problem: in the long run, economic growth drags their customers' tastes up, and social and economic mobility means that any given customer might have started with working class tastes and eventually moved to upper-middle. The reward for executing well on affordable shopping is that the company reaches the point where they can't afford not to sell something expensive, too.

More Convergence

Sensor Tower says most ads on Instagram ran in Reels. It's hard to escape the math of maximizing datapoints per minute of user interaction ($, Diff), so it's natural that social media eventually gets taken over by TV. One of the factors that slowed that down was that the previous iteration of the product was also pretty lucrative, so the opportunity cost of promoting Reels was high (they called out over half a billion in missed ad revenue two quarters in a row, and said that cost ramped down over the next two quarters, so call it a total cost of ~$1.5bn before it flipped to a positive contributor). Modern tech companies are unusually durable in part because they're willing to pay costs like this to stay relevant, and in part because one result of that habit is that they're usually in a position to pay the cost.

Disclosure: long META.