Plus! The Muni Crisis, Worker Mobility, Supply Shocks
Welcome back to The Diff. Here are the subscribers-only posts you missed this week:
A Sputnik Moment Without an Apollo Program compares Covid-19 to the last crisis where a major world power demonstrated technical capabilities the US lacked. By the end of the next decade, there was an American flag on the moon. This time, the US hasn’t scrambled. The clock is still ticking, though; there was almost a five-year gap between Sputnik and JFK’s promise to go to the moon.
The Big Succession looks at a peculiarly modern problem: modern healthcare is better at keeping people alive for long periods than keeping them fully-functional, so some company founders remain in control long after most people would prudently choose to retire.
In Content Moderation and Regulatory Capture, I consider Facebook’s incentives around content moderation rules. If Facebook can pass the hard judgment calls off to regulators rather than their own moderators, it saves them some PR difficulty—and creates a new market.
Progyny: Fertility Inc. profiles a company that offers fertility benefits. This is not just another perk—it affects family structure and has deep ties to long-term inequality trends.
To read all of this, and to join in today’s subscribers-only call with Reeves Wiedeman, author of the new WeWork book Billion Dollar Loser (reviewed in The Diff here), subscribe now.
This is the once-a-week free edition of The Diff, the newsletter about inflections in finance and technology. The free edition goes out to 16,172 subscribers, up 215 week-over-week.
In this issue:
The Muni Crisis
Worker Mobility
Supply Shocks
Paul Graham points out something Fitbit, GoPro, and Sonos investors have also observed: hardware is hard.
It is, indeed, a brutal business. For a hardware company to grow, it needs to ship more units or ship more expensive units every year. Early on, the challenge is operational: GoPro had to figure out how to assemble more cameras, and had to design better ones. But in the later stages of a hardware company’s growth cycle, two problems emerge:
It’s hard to determine the total addressable market.
The upgrade cycle gets longer.
The addressable market question starts out easy. If a small group of fanatical users like the current iteration of the product, then a larger group of them will probably buy an improved version. But over time, it becomes a tougher, more futurism-driven decision. How many people will use fitness trackers, athletic cameras, and smart speakers? Every year, growing the market means changing the world—it means betting that millions of people will slightly alter the way they live.
Longer product cycles happen for product- and market-related reasons. On the product side, any new gadget eventually runs out of obvious improvements. Products can go from impossible to possible, but they rarely go from impossible to perfect in one step. “Perfect,” though, is an asymptote; every kitchen is full of products that are about as good as they can ever hope to be—can openers, blenders, and microwaves still get slightly better over time, but it’s almost inconceivable that someone could launch a new product in these categories and expect it to take over the world.
In the 90s, buying an upgraded PC transformed the experience. The difference between my computer, with 16MB of RAM, and a new one with 32MB, was instantly noticeable. Successive doublings since then have had less and less of an impact on my experience as a user; once the computer’s default interaction speed is instantaneous, the only improvement is instantaneous-but-pretty, and as it turns out there’s more demand for pretty than fast.
And that leads to a second problem: changes in who purchases the product. When the buyer population shifts from early adopters to average people, it necessarily shifts from people who will buy every brand-new update because it’s a brand-new update to the much larger cohort that’s satisfied by default with the current options and doesn’t feel the pressing need to upgrade. Even pathological early adopters slow down their upgrades over time, since older product categories cease to be something one could adopt early.
This means that hardware companies invariably run up against a tough year, where they overestimate either the eventual size of their market or the degree to which the next incremental improvement is a big deal to their current customers. And, because hardware has a nonzero marginal cost, this is always an expensive mistake: they’ve spent cash on R&D, marketing, and manufacturing, and they’re stuck with rapidly-depreciating inventory. Since consumer hardware sales skew to the holiday quarter, anything they don’t sell before Christmas is that much harder to sell after.
They’re also stuck with a more existential question. Now that the upside is capped, do they content themselves with a smaller market? If their product is now being used by everyone who wants it, they may already have hit peak sales, and they’ve certainly hit peak relevance. That’s a depressing downshift from rapid growth.
Compounding the problem is the fact that consumer hardware is a complement to software, and software companies tend to have stronger network effects, far more price discrimination ability, higher margins, more access to capital, and, in general, every trait that makes a company good at commoditizing its supply chain. The history of PC manufacturing in the 80s and 90s is mostly a history of companies striving mightily in an effort that mostly made Microsoft shareholders a little richer.
The fundamental difficulty with hardware is that an upfront purchase represents the capitalized value of all the future uses of the product. This constrains hardware companies' ability to price-discriminate. A $200 TV or a $400 treadmill might represent thousands of dollars worth of enjoyment. Or it might just gather dust. The products are necessarily mispriced for almost every buyer, and it’s hard to capture the difference in value between what consumers pay and what they ultimately get.
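To make that mispricing concrete, here’s a toy model (all of the numbers are illustrative assumptions, not figures from this newsletter): when lifetime value per buyer is highly dispersed, any single upfront price both leaves most of the heavy users’ surplus uncaptured and prices the light users out entirely.

```python
# Toy model of one-time hardware pricing under dispersed usage.
# All numbers are illustrative assumptions.

# Each hypothetical buyer's lifetime value from a $400 treadmill:
# heavy users get thousands of dollars of use; most get far less.
lifetime_values = [4000, 2500, 800, 300, 100, 50]
price = 400

buyers = [v for v in lifetime_values if v >= price]      # who buys at $400
captured = price * len(buyers)                           # seller's revenue
surplus_left_behind = sum(v - price for v in buyers)     # value the seller can't capture
priced_out = [v for v in lifetime_values if v < price]   # light users lost entirely

print(captured)             # 1200: revenue from the 3 buyers
print(surplus_left_behind)  # 6100: uncaptured consumer surplus
print(priced_out)           # [300, 100, 50]: demand a usage-based price could serve
```

A usage-based or subscription price collapses that dispersion: everyone pays roughly in proportion to the value they actually get.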
There are a few models that can escape this:
One possibility is a hardware product bundled with a subscription. This is what Fitbit has done ($, WSJ):
We saw incredible growth in our digital business or premium business. We reached 500,000 paid subscribers… We realized that people’s engagement with their health is lifelong, yet the way that we structured our business with device sales was very episodic. The way that we were making money was just not in sync with how people view the value of these types of products and services.
We felt to be better oriented with how our consumers view health, we needed to move to a model that engages with them constantly, whether it’s monthly or annually, rather than having this transactional relationship maybe once a year or once every two years or three years. We needed to align the business model with the way that people viewed health.
Apple, of course, is one of the most successful versions of this. The company currently has 585m total paid subscribers for its various services, allowing it to earn revenue over the usage cycle rather than from purchase to purchase. For Apple in particular, tight hardware and software integration makes the services business a growth engine: Apple can add in hardware features that it later monetizes through software products, which is a key part of its health strategy and ties into other subscription products, too. (Higher-resolution cameras, for example, make users more likely to run out of storage space and pay Apple for more).
Another option is to reverse this, by offering hardware as a complement to an existing software business. This is one element of Google’s approach to Android and PCs (there are antecedents: the Microsoft Mouse sold 1m units by 1988, making it a serious hit and a good way to market other, higher-margin products). Bytedance recently did something similar, selling a smart lamp that targets the education market. This model has two benefits: it pushes other hardware manufacturers to reach feature parity, which usually makes the underlying software more valuable. And, in some cases, it subsidizes software adoption. A $650 Pixelbook makes Google a lot more than $650 in revenue if it leads to more search volume and more G Suite subscriptions.
A final option is financial engineering: turn a hardware purchase into a subscription by either leasing the product to the end user or financing their purchase for them. The economics of this vary based on the value of the product; in the case of a car, it’s an arbitrage between the cost to consumers of an unsecured loan and the cost of a loan secured by a valuable piece of collateral that the car company knows how to sell. For cheaper products, credit is just a way to encourage more purchases, at the risk of occasional bad debts. Peloton offers 0% APR 39-month loans to buyers, and while it doesn’t offer returns after 30 days, the company does have a guide to buying and selling used bikes.
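As a back-of-the-envelope sketch of what a 0% loan actually costs the seller (the bike price and the seller’s funding rate below are hypothetical; only the 0% APR, 39-month terms come from the text), the subsidy is just the time value of the deferred payments:

```python
# Cost of a 0% APR, 39-month installment plan to the seller.
# Price and funding rate are illustrative assumptions; the 0%/39-month
# terms match the Peloton offer described above.
price = 1950.0          # hypothetical bike price
months = 39
funding_rate = 0.05     # hypothetical annual cost of capital for the seller

monthly_payment = price / months   # 0% APR: payments are a straight division
r = funding_rate / 12              # monthly discount rate

# Present value of the payment stream at the seller's own funding cost
pv = sum(monthly_payment / (1 + r) ** m for m in range(1, months + 1))
subsidy = price - pv               # implicit discount embedded in the 0% loan

print(round(monthly_payment, 2))  # 50.0
print(round(subsidy, 2))          # effective financing cost per bike, ~8% of price
```

For a company with cheap access to capital, that subsidy is a modest marketing expense; for a consumer whose alternative is an unsecured loan at a much higher rate, the same payment stream would be far more expensive, which is the arbitrage.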
There’s a massive gap between the cost of capital for companies and for consumers, and funding gaps tend to push assets onto the balance sheet of whoever gets the cheapest capital. The inevitable result of this is that the average person owns less of what they use, and pays for it as they use it.
In one sense, that’s a major change. Who wants to sell all their stuff and rent it back from giant corporations? But in another sense, it’s part of the process of economic growth. All gains from trade can be described this way: any time you buy something rather than making it yourself, you’re “renting” the fixed assets required to make it. As economies grow, supply chains get complicated, and diverse products proliferate, more and more purchases fit this mold.
And, in another sense, hardware companies that shift to a subscription model are pricing according to reality. If you buy something and use it every day for years, it makes sense that you’d pay for it as you use it rather than buy an uncertain amount of usage at one upfront price. The gradual shift to subscription-based hardware is just the economically sensible recognition that savings and consumption are separate activities, and mixing them together is suboptimal for everyone.
 Incidentally, one of the great mysteries of the world is how abominable microwave interfaces are. My last microwave used a scrolling LCD display. You could set the power level, and it would beep. And then the following would slowly scroll across the screen:
PL - 8
It was the “PL” that did it for me: not only does the microwave include the only relevant information last in an unskippable sequence like some sort of pre-iPhone voicemail system, but it gratuitously repeats the fact that it’s a power level. Obviously, very few people buy a microwave based on the UI, and appliances are a sufficiently low-margin business that it doesn’t pay to spend a lot of time on design. But still, that interface goes out of its way to be bad.
We moved recently, and the new microwave has a fancier, higher-resolution display with the power level next to the timer at all times. So, progress happens, but at a glacial pace, and the microwave was not a significant factor in our choice of home.
 In the 90s, one of the practices that helped Enron initially and got them in trouble eventually was their habit of capitalizing the gains from long-term deals. Enron would strike some bargain—a multi-year contract to buy or sell natural gas at a fixed price—and then estimate the profits from that transaction and book them upfront. Since those profits are realized over a long period, estimating the profits was an art form, and they’d adjust the modeled value of contracts to hit earnings targets. Valuing a durable purchase like a Fitbit based on its upfront cost is the rough equivalent. Either it improves your health and is worth many multiples of its upfront cost, or it goes in a drawer after a few weeks.
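To illustrate the accounting move (all of the figures here are invented for the example): mark-to-model lets you book the entire estimated profit of a multi-year contract as earnings today, and because the forecast inputs are unobservable, the estimate is a knob.

```python
# Toy mark-to-model example (all figures invented).
# A 10-year contract to sell gas at a fixed $3.00/unit, 1m units/year,
# valued against a *forecast* of the market cost per unit.
years = 10
units_per_year = 1_000_000
fixed_sale_price = 3.00

def upfront_profit(forecast_cost_per_unit, discount_rate=0.06):
    """NPV of the whole contract, booked immediately as this year's earnings."""
    annual_profit = (fixed_sale_price - forecast_cost_per_unit) * units_per_year
    return sum(annual_profit / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# The forecast is unobservable, so nudging it moves today's earnings:
print(round(upfront_profit(2.80)))  # conservative cost forecast
print(round(upfront_profit(2.70)))  # a dime rosier, and "profit" jumps 50%
```

A ten-cent change in an unverifiable ten-year forecast moves current-period earnings by hundreds of thousands of dollars per million units, which is exactly the lever Enron pulled to hit targets.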
 Peloton is doing some ad hoc credit underwriting because of what it sells. The kinds of people who regularly use an exercise bike are more conscientious than the ones who don’t, so they’re selecting against the worst credits.
A Word From Our Sponsors
Here’s a dirty secret: part of equity research consists of being one of the world’s best-paid data-entry professionals. It’s a pain—and a rite of passage—to build a financial model by painstakingly transcribing information from 10-Qs, 10-Ks, presentations, and transcripts. Or, at least, it was: Daloopa uses machine learning and human validation to automatically parse financial statements and other disclosures, creating a continuously-updated, detailed, and accurate model.
If you’ve ever fired up Excel at 8pm and realized you’ll be doing ctrl-c alt-tab alt-e-es-v until well past midnight, you owe it to yourself to check this out.
The Muni Crisis
A serious recession divides all governments into two categories: the ones that can deficit-spend their way out, and the ones that can’t. At the federal level, the US can run (and is running) a large deficit to offset some of Covid’s economic impact, but the picture is worse at the state level, with an aggregate expected deficit of $434bn through 2022 and limited capacity to fund it by borrowing. For high cost-of-living states, that’s especially painful: they can try to close the gap by cutting spending or raising taxes, but their highest earners now have more choices about where to live.
In a purely technocratic sense, local business clusters like New York, Chicago, San Francisco, and Los Angeles are valuable, and a federal bailout of economically impaired state governments is a way to keep them working, or at least ensure that they recover quickly. But the states where these clusters are located have practiced some bad financial habits—Illinois, for example, has a $230bn pension deficit (and that’s using the state’s generous return assumptions; the real deficit is higher).
There’s one sense in which moral hazard arguments make zero sense during a pandemic: nobody in the hotel, restaurant, airline, or cruise ship business should have operated on the assumption that they’d need enough cash to pay a year or two of expenses with almost zero revenue. But at the state level, some states were clearly operating on the assumption that either a) 2008 was the last recession the US would ever face, and they could grow their way out of chronic pension underfunding, or b) in the event of a crisis, every state would be bailed back to even, and states with rainy-day funds would end up in the same fiscal position as states with the opposite outlook.
One tie-breaking argument is that the US political system gives proportionately less power to populous states than to mostly-empty ones, and the low-density states generally kept their books more in order. That affects whether a state bailout gets every state on the same footing, or just tries to reset them to where they were in 2019.
Upwork has some interesting survey data on workers' intent to move: people in big, expensive cities generally plan to move to smaller, cheaper ones. The survey indicates that almost 2% of the population has already moved because of the opportunity to work from home, while almost 6% plan on it.
But in another sense, the survey shows that the impact of work-from-home is more limited than that of other Covid trends: roughly 2-3x the usual number of moves are planned. That’s a smaller impact than Covid’s effect on e-commerce, which was about five years of growth in three months, some of which, undoubtedly, was pulled-forward demand and not a permanent shift.
More mobile workers have an effect on the margin, especially for regions entering a fiscal death spiral. But, just as it’s startling that GDP only dropped 10% sequentially in Q2 of 2020—it seemed that everything was shut down, and yet 90% of the economy was still there!—a more mobile workforce amounts to just a few years’ worth of typical worker mobility. The regional impacts will be large; some of the workers who can’t move are the ones who provided services to the workers who could, so the remaining workers in those large cities face much worse employment prospects—they can’t Zoom in to jobs as Uber drivers or waiters. But the aggregate impact, for now, is smaller than it looks.
Scott Sumner points out that the Covid recession is better described by economic theory than by recent history. It’s a supply shock, whereas most US recessions are demand shocks. In a demand shock recession, the purpose of economic stimulus is to get people spending at their normal level again. In a supply shock, the goal is not to do that—it would be very bad news if the government stimulated the economy so effectively that restaurants, bars, and cruise ships were packed—but to ensure that workers in the affected sectors of the economy can continue to pay their bills until they get back to work.
Demand shocks are more of a postmodern phenomenon; they’re a recession about nothing, or a recession whose largest single cause is a drop in spending caused by fears of a recession. Supply shocks are much more real: if some part of the economy physically can’t function, then of course overall output drops, and no policy can perfectly offset those real-world problems. Policy can, however, ensure that a supply shock in one part of the economy doesn’t turn into a demand problem for the rest of it.
The Diff is testing out a referral program. Recommend the newsletter to someone who becomes a paying subscriber, and you’ll get a free month.