Inside the Decline of Stack Exchange

Plus! The Other X; Refinancing China; The Activist Short Death Spiral; Rebundling; Fitness Bands; Diff Jobs

Today's free newsletter is brought to you by our sponsor, Antimetal. Antimetal makes cloud savings automated, effortless, and risk-free.

Inside the Decline of Stack Exchange

Stack Exchange is one of the greatest compendia of human knowledge ever produced. Where else can you ask questions and get answers from ten different Fields Medalists? (Not to mention Peter Shor of Shor's algorithm explaining why French and English renaissance poetry use different metres? You can even use it to see a snapshot of the early efforts behind a once-thriving e-commerce site.) Plus, all of this knowledge is Creative Commons-licensed, so anyone can "remix, transform, and build upon the material for any purpose, even commercially," as long as they give credit. As a consequence, it's a big chunk of the programming-specific data ingested by AI models.

And, as a result of that, Stack Exchange has seen a material decline in usage. The first sign was the most ominous Google Trends graph I've ever seen:

[Chart: Google Trends, Stack Overflow vs. ChatGPT search interest]

(This one cuts off at the beginning of December, because if it's extended any further it re-scales until Stack Overflow is just a flat line.)

Q&A sites like Stack Exchange combine two kinds of businesses that are usually distinct, but have historically been incredibly powerful together: two-sided networks and SEO plays. The two-sidedness of two-sided networks is always a bit blurred (some fraction of Uber riders own cars, and the vast majority of Airbnb users have homes—with at least one unoccupied bed when they're staying at an Airbnb), and, at least for programming, that’s the case here as well. Stack Exchange’s best-known site, Stack Overflow, must maintain a critical mass in two groups: people with questions about programming and people with answers. And, luckily for the company, programming is an incredibly heterogeneous space, where many people end up mastering one set of technologies but get stuck using another. So, if a site is big enough, someone can be a beginner asking one set of questions and an expert answering a different set.

On the SEO side, this kind of content naturally generates pages that rank well in search (and Stack Exchange has done its work to ensure that they get the credit they deserve). In long-tail SEO, there are often either zero good results for a query or exactly one. And SEO has many feedback loops. For example, search engines use autocomplete to guide users towards the queries they're trying to write. And those autocompletes are at least partially influenced by what information is available. So queries tend to converge on the phrasing used by the most prominent site that answers them.

One takeaway from this is that if you want to gauge the health of a company with this kind of model, you have several options. Revenue and traffic will be fairly backwards-looking—the way to get revenue from job ads is to have had traffic in the past, and the way to get traffic is to have had good questions and answers even further in the past. By looking at specific kinds of questions, though, we can trace both the historical shape of technology trends and the granular impact of AI on developer behavior.

Very fortunately for us, Stack Exchange is committed to transparency, and offers a quarterly data dump of all questions and answers. So, if you're truly curious, you can track trends in tags over time. For example, here's a chart of the volume of questions about programming languages:

[Chart: Stack Overflow question volume by programming-language tag]
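
(If you're curious how a chart like this gets made: once the dump is downloaded, the aggregation is mostly counting. Here's a minimal sketch in Python; it assumes the dump's Posts.xml schema, where PostTypeId "1" marks a question, CreationDate is ISO-8601, and tags use the older angle-bracket encoding, and the tag list and file path are just illustrative.)

    # Sketch: count Stack Overflow questions per tag per month from the data dump.
    # Assumes Posts.xml attributes PostTypeId, CreationDate, and Tags; adjust the
    # tag parsing if the dump format has changed.
    from collections import Counter
    from xml.etree.ElementTree import iterparse

    TRACKED = {"python", "java", "javascript", "c#", "php"}  # illustrative tag list

    def monthly_counts(posts_xml_path):
        counts = Counter()  # keyed by (YYYY-MM, tag)
        for _, row in iterparse(posts_xml_path, events=("end",)):
            if row.tag == "row" and row.get("PostTypeId") == "1":  # 1 = question
                month = row.get("CreationDate", "")[:7]            # e.g. "2023-05"
                for tag in row.get("Tags", "").strip("<>").split("><"):
                    if tag in TRACKED:
                        counts[(month, tag)] += 1
            row.clear()  # the file is very large; don't hold rows in memory
        return counts

    for (month, tag), n in sorted(monthly_counts("Posts.xml").items()):
        print(month, tag, n)

Summing the monthly counts for each tag gives series like the ones plotted above.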

There are a few things worth remarking on: question volume generally grew until about 2014, and then stabilized for a while before going into a gentle decline. There's a seasonal pattern that's common for informational, non-commercial queries: a big drop associated with the holidays, and a moderate rise in late spring and again in mid-December (coinciding with exam season).[1] There was a surge in question volume during Covid, driven by a combination of pandemic hobby projects and the hasty reworking of existing infrastructure to accommodate work-from-home (for B2B software) and the shift from brick-and-mortar to digital (for consumer-facing products).

And then ChatGPT hit: the holiday lull never really ended. Question volume in January 2022 was running at basically the same pace as in December 2020, but in early 2023 question volume recovered only about half of the holiday drop, and then went almost straight down from there. Question volume in May 2023 was about half the level of November 2022, below the holiday troughs typical of the mid-to-late 2010s, and around the level of late 2011.

The earlier slowdown makes some sense. Like the durable goods business, question volume is some function of 1) new features, new edge cases, new performance bottlenecks, etc., and 2) the one-time effect of answering questions everyone has when they're new to a language. "What is the difference between append() and extend()?" only needs to be asked once, and will be answered forevermore, so any Q&A site that wins its vertical should, in fact, show a peak in question volume that doesn't imply a peak in utility.[2]
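
(For readers who don't write Python: that canonical beginner question has a two-line answer, which is exactly why it only needs to be asked once.)

    # append() adds its argument as a single element; extend() adds each element
    # of an iterable individually.
    numbers = [1, 2]
    numbers.append([3, 4])   # numbers is now [1, 2, [3, 4]]

    numbers = [1, 2]
    numbers.extend([3, 4])   # numbers is now [1, 2, 3, 4]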

You can see this pattern clearly by looking at languages that have been in use for a long time. Here's Java, for example:

[Chart: Stack Overflow question volume, Java]

Questions about Java had already declined by almost three quarters from their peak before the introduction of ChatGPT, and are down another 25% since. For something that was adopted a bit later, like React, question volume had just managed to reach the plateau where all the questions beginners ask had been thoroughly answered when ChatGPT hit, and the result was a larger drop of around a third year-over-year:

[Chart: Stack Overflow question volume, React]

But there are some counterexamples, with telling commonalities. Consider this set of technologies (in this case, we're zooming in on recent months and normalizing so the various trends are readable):

[Chart: Stack Overflow question volume, proprietary technologies (recent months, normalized)]

There really isn't much to see here. Which is surprising! A new competitor was introduced, rapidly got massive mindshare, and yet it didn't have much of an impact on user behavior! What's probably going on is that these are ecosystems controlled by individual companies, which has a paradoxical effect: it means there's an economic incentive to provide excellent up-to-date documentation, but it also means that some knowledge is locked up inside organizations rather than being posted about on Stack Overflow.

Now consider another set of queries: the ones revolving around cloud hosting services like AWS, Azure, and GCP. In this case, what we see is that question volume is roughly flat year-over-year.[3]

[Chart: Stack Overflow question volume, cloud infrastructure (AWS, Azure, GCP)]

Two features set these apart:

  1. They're products with a fairly high launch cadence, whose offerings have materially changed since GPT-4's September 2021 knowledge cutoff. ChatGPT is great at answering "How do I do X?" questions, but if X was invented after the model was trained, it's flummoxed.
  2. Infrastructure problems are often more situational. If writing a program is analogous to designing a plane, the infrastructure-related questions are a lot closer to asking how to repair an engine in mid-flight. It's harder to refactor something that's already running at scale, so many of the questions that arise will need more context, and perhaps more back-and-forth. (There's a somewhat similar split with questions about Git, because the two categories of questions are 1) I have no idea how to use this, and 2) I thought I knew how to use it but it's hopelessly broken.)

And there is one other category that has helped Stack Overflow AI-proof itself: AI tools themselves!

[Chart: Stack Overflow question volume, AI libraries]

It's a delightful-but-temporary irony that the biggest thing Stack Overflow has a competitive advantage in is helping people build clones of the tools that are causing the site so much trouble in the first place. These products are new enough that there weren't many answers in the training data, or, in the case of Langchain, any answers whatsoever (it launched in October 2022, making it a colicky newborn of a library).

ChatGPT isn't just changing the way people ask questions about languages. It also seems to be changing the pace of adoption. One obstacle to switching platforms is that, in general, you know the one you're switching from better than you know the one you're switching to. The basic trade is to accept something like an 80% drop in productivity for a few months in exchange for, say, a 20% increase in productivity indefinitely thereafter. If new frameworks are faster to learn, and in particular if there's a trivial way to answer "How do I convert this code using library X to code using library Y instead," that switching process is accelerated. And even though this is happening in part because people are using ChatGPT rather than Stack Overflow, it shows up in the Stack Overflow data. Here's how the transition from Tensorflow to Pytorch is going:

[Chart: Stack Overflow question volume, TensorFlow vs. PyTorch]
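
(To make the switching trade concrete, with purely illustrative numbers: if "a few months" means three, an 80% productivity drop costs about 2.4 productivity-months, which a permanent 20% gain pays back roughly a year after the switch, so anything that compresses those months changes the calculus quickly. The translation itself is exactly the kind of rote work LLMs handle well; here's a sketch of the before-and-after, not taken from the post.)

    # The same tiny model in each framework: the sort of one-for-one translation
    # that used to be a Stack Overflow question and is now a single prompt.
    # (Illustrative only; requires both libraries installed.)
    import tensorflow as tf
    import torch.nn as nn

    # TensorFlow / Keras: the input dimension is inferred at the first call
    tf_model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])

    # PyTorch equivalent: input dimension (10 here) must be spelled out
    torch_model = nn.Sequential(
        nn.Linear(10, 64),
        nn.ReLU(),
        nn.Linear(64, 1),
    )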

Stack Overflow remains a great tool for knowledge creation and aggregation. It's unfortunate for the company that the single best way to search that priceless corpus of knowledge involves using someone else's service, probably paying them $20, and sending exactly $0 of that back to Stack Overflow. But it's hard to imagine Stack Overflow existing in the first place without being vulnerable to this: the site taps into what is, in pure dollar terms, one of the most valuable pools of human capital in existence. They knew that they weren't in a position to dictate terms to their users, and ensured that users were getting credit and could freely share the information they had. This has happened before: encyclopedias were a nice, slow-growth business for about a century. And then Microsoft licensed the contents of the Funk & Wagnalls Encyclopedia, put them on CD-ROM with color illustrations, and started selling Encarta in 1993. And, soon after that, they realized that the near-zero marginal cost of CD-ROMs meant that computers could be pitched as educational, a nice bit of market segmentation that coincidentally obliterated the encyclopedia business' pricing power. And then the same problem happened to Microsoft, when Wikipedia supplanted Encarta.

It's just fundamentally hard to be in the business of selling information at a time when information aggregation and delivery mechanisms are somewhat hardware-bound and hardware keeps improving. This risk is generally invisible until it's unavoidable; generative AI spent a long time as a novelty whose most useful output was surreal humor. And then, suddenly, the models could write code, and a few months after that, they could write code at a human level, faster than people could. (Though, as with the surrealist humor, it took some handholding and iteration to come up with the right prompts and edit things into good working order.) The company is launching its own AI tools, mostly around search. Better search does help Q&A sites directly, by making it more likely that someone will find the answer to the question they should have asked and less likely that they'll add a low-quality question that gets a rude answer in return. They're also making it easier to incorporate answers into the usual flow of work.

But they seem to be sticking with human-written text as the core offering. When someone's asking a question because they want to be in a position to independently produce the answer and to fully understand the principles behind it, a good Stack Overflow answer will beat what ChatGPT produces. But even though that's the most socially-valuable service Stack Overflow offers, it's not a good description of the typical user interaction, where the task at hand is less "I want to finally understand the Rust borrow checker" and more like "I want to fix this bug so I can finally log off." And LLMs are a better way to access Stack Overflow's knowledge base in order to provide that answer—it's visible right there in the data.


The data analysis for this post would have taken much longer without Stack Exchange. And, if the site didn't exist, ChatGPT would have had a lot more trouble answering questions and providing code to build the web app you can use to interact with that data. Paying subscribers can access the tool that generated these charts, and generate their own charts of Stack Overflow trends.

Disclosure: Long Amazon, Microsoft.


  1. Students aren't the main users of Stack Overflow, at least in my experience, but shifts in their participation will still show up on the chart even if the typical user is a full-time professional; the seasonality of the overall economy is the sum of non-overlapping seasons for different businesses: homebuying is stronger at the start of the year, travel and construction towards the middle, and retail in the fourth quarter. ↩︎

  2. This pattern also shows up at the individual level: when you first start studying a topic, you can passively absorb knowledge without necessarily having many questions; a good tutorial is probably one that answers every question it would raise for the typical reader, unless the question is deliberately left as an exercise. As you advance, you have enough knowledge to notice that things don't seem to make sense, or are done in counterintuitive ways. And then, once you've learned something really well, you have fewer questions about the topic as such and more about applications, intersections with other domains, etc. Often the difference between intermediate and expert skill at something is being able to answer a question with “Yes, that way would have made sense and if we were starting over we would have done it like that, but now we’re stuck with it.” ↩︎

  3. There's some kind of weird seasonal effect where search volume tends to drop in April relative to earlier in the year. The reasons for this are mysterious, but it repeats. ↩︎

A Word From Our Sponsors


Save up to 75% on your AWS bill.

Antimetal makes cloud savings automated, effortless, and risk-free. You can start saving in less than 2 minutes with zero code or engineering required. Best of all — Antimetal only takes a small percentage of the savings they generate.

They are already helping 700+ companies including Politico, Polygon, Mercury, and others save an average of 62% on their bill.

To get your first six months free, request a demo and mention “The Diff”.

Elsewhere

The Other X

Steel company Cleveland-Cliffs has made an offer to acquire US Steel at a 42% premium. If this offer goes through, it will end US Steel's life as a public company, which started with a bang in 1901 when it was put together through the largest buyout ever and then taken public as the largest IPO ever, making it the first company with a $1bn enterprise value ($1.4bn, to be exact). It's underperformed a bit since then; the company's current enterprise value is $6.5bn.

The last few years have been an amazing cycle for the steel business; companies responded to mostly-low demand by curtailing investment, so when demand finally picked up during the post-Covid boom, earnings followed. One result of that is that steel companies are, optically, incredibly cheap; the offer still values the company at just 8.6x earnings, and that excludes any cost savings from a merger. (Which, in fairness, could be limited; the press release notes that the United Steelworkers have approved the deal, presumably because there aren't too many layoffs planned.) Of course, the cycle is a cycle, but even so, consensus estimates put the deal at 10.5x 2025's estimated earnings. One of the things that drives low long-term returns for cyclical businesses is that, when the cycle is good, it's easy to pencil out a merger plan that's immediately accretive and where the excess acquisition debt is paid down in just a few years. But that also means a cyclical upswing sets two forces in motion at once: companies use their excess cash to finally pay down debt (since the start of 2020, US Steel has reduced its net debt from $3.6bn to $1.3bn), while low leverage and high earnings make them a tempting target for acquirers, so their financial leverage can end up rising right when operating leverage starts cutting in the wrong direction.

Refinancing China

Last year, The Diff noted two problems with China trying to create a reserve currency: there's a shortage of renminbi-denominated risk-free assets, and there's far too much off-balance-sheet debt issued by entities associated with lower-level governments. The post proposed that China could centralize some of this debt, issuing bonds on behalf of the central government in order to pay off province-level borrowing (a negotiation that would presumably involve some transfer of decision-making ability in the other direction). China isn't doing that, exactly, but is pushing provinces to raise $139bn in order to pay off local debt. One function this serves is to move debt onto the balance sheet, which makes China's financial system more legible to outsiders—but also means that insiders get to see which local governments achieved their growth targets through good management and which ones just borrowed their way to success.

The Activist Short Death Spiral

In January, short seller Hindenburg Research issued a negative report on the Adani complex of companies. That's cost the companies some market value, but hasn't led to a full-blown crisis just yet. But the crisis is getting closer: Adani's auditor, Deloitte, has resigned, saying it can't properly understand certain inter-group transactions ($, FT). This is sometimes the way large, complex, levered entities die: once skepticism gets expressed, it becomes riskier to deal with them; even if the auditors believe that the odds of fraud are only, say, 5%, that makes it a poor risk-reward to audit them. And a second auditor has less information than the first, and thus a wider confidence interval around the odds than Deloitte had. Meanwhile, lenders will also rethink their loans—any given lender might be confident that the company is fine, but worry that other lenders aren't, and decline to roll over its loans accordingly. Just as many companies, from coffee shops to phone designers, eventually evolve into banks, some companies continuously add complexity until they're vulnerable to a bank run.

Rebundling

Some of the large-but-not-largest streaming companies, including Paramount and Warner Bros. Discovery, have mused publicly about bundling with other companies' streaming offerings. One of the surprises about the streaming business is that even though it's existed for over a decade and a half, we're still learning how the economics actually work. Part of that is because the streaming bundle exists in the context of other bundles whose economics are more mature, meaning that certain IP libraries came to streaming late or intermittently and that some of the industry's structure was shaped by the cohort of determined contrarians who made streaming work in the first place. (They were right about trends in home broadband and in customer expectations for on-demand entertainment, but have so far turned out to be wrong about whether ads work.) That legacy creates legal and economic friction around the streaming business, so companies are still iterating.

Fitness Bands

The Verge has a story in an undersupplied category: products that used to be popular and slowly faded away. In this case, they're writing about where all the fitness bands went. It's a classic durables story: for a while, the products kept getting cheaper and better, which meant that every seller had an incentive to clear out their inventory as quickly as possible. Past a certain point, sensors were so cheap that fitness bands could be an app in a more general-purpose smartwatch, and it was easier for Apple to incorporate the relevant subset of Fitbit than for Fitbit to somehow recreate the whole Apple ecosystem. The other option was subscriptions, but even there it's difficult to get the economics exactly right (though Apple is, naturally, trying): every subscription fitness product wants people to keep all of their fitness data in one ecosystem, but that means either lots of integrations across different platforms that may have the same incentive for lock-in, or requiring customers to do a lot of data entry in order to use the product.

Diff Jobs


Companies in the Diff network are actively looking for talent. A sampling of current open roles:

Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up.

If you’re at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.