In this issue:
- Ring, Cloudflare, and the Supply Chain of State Capacity—Deeply horizontal companies, which offer products and services that a lot of GDP passes through even if little of it is captured, end up playing a regulatory role similar to that of the government. In some cases, they're a substitute, and in others they're a complement.
- Work Trials—It's cheaper to pay someone to work with you for a week than to hire based on remote interviews that someone might have cheated during.
- The Mezzanine Tranche—Buying GPUs before knowing precisely who will desperately need them turns out to be a decent business.
- Politics by Other Means—It's a terrible idea to even slightly normalize assassinations.
- Lawyers—The impact of AI on lawyers' productivity is tricky, because tokens are both the output and the user interface.
- Token Shortages—And, meanwhile, we all need more tokens.
Ring, Cloudflare, and the Supply Chain of State Capacity
A simplified but useful model of what governments are for—the kind you'd use to explain to your kids what you mean by "the government"—is that there are some tasks that benefit everyone, but that individuals would benefit from even if they didn't pay for them, so nobody funds them voluntarily. Instead, we use taxes to fund things like public parks, fire departments, roads, and defense, and leave things like cars, video games, and destination resorts to the private sector. This model is intuitive enough that people are sometimes taken aback to learn that, in terms of dollars spent, the US government is mostly in the business of providing the same services as UnitedHealth and MetLife, i.e. taking a slice of everyone's earnings and using it to smooth out differences in healthcare consumption and to fund annuities for retired people. And then it's complicated further when these government services are provided, in part, by using private-sector companies as suppliers.
But this neat model of the world has gotten even more complicated today, due to the growth of extremely horizontal companies that play a government-like role in assorted sectors, and it's further complicated by companies that operate at such a big scale that they can actually capture the upside from addressing broad social problems. These are the kinds of problems that classically illustrate why we have government in the first place—standardized weights and measures are useful, and it's also useful to punish people for violating norms. But there's private upside in having a thumb on the scale, and the personal cost of dealing with a slightly crooked business counterparty is small compared to the social cost of tolerating mildly crooked businesses.
E-commerce platforms do, however, have a strong interest in policing dishonesty on their platform. eBay was one of the first to encounter this problem at scale (Amazon started doing this a few years later). eBay had to have decentralized feedback mechanisms, where buyers provided quantitative and qualitative feedback. And then eBay had to solve meta-moderation problems, like the fact that customers vary in how picky they are about shipping speed, packaging, precisely where they draw the line between "very good" and "like new" condition, etc.
Paradoxically, this rules-making setup works better in markets that are more concentrated. In a fragmented market, there's an incentive for smaller players to compete by having laxer standards for who can sell on their platform: it's an automatic market for lemons, where an iBay that lets fraudsters get away with selling counterfeit goods will be able to advertise lower prices than eBay.[1] That's not a sustainable equilibrium, because it implies that the only ways to sell online are either through whitelisted, authorized channels, or by selling such a cheap variant of every product that it's economically infeasible to rip customers off. If a new coffee pot costs $100, it might make sense to sometimes sneakily sell someone a secondhand one. But if they're $20, as the Amazon Basics coffee pot is, the cost of sourcing ripoffs is probably higher than the cost of just selling the genuine product.[2] And that relationship between moderate industry concentration and customer-lifetime-value-centric thinking has a compounding effect: as platforms get bigger, they can set implicit standards for how that kind of platform works, which means that competitors have to do more to assemble a viable alternative.
There are a few fun examples of companies acting this way:
- In one sense, Anthropic benefits from releasing powerful new models and charging a lot for them. In another sense, if those models are good at detecting software vulnerabilities (and if Anthropic is a little short on compute given their other breakout hit product), they can narrowly distribute those capabilities to people who will use them defensively first. Meanwhile, those initial users will be busily informing us of all the terrifying ways things could have gone very badly in the cyber risk department, and we'll have time to figure out if making it easier to detect vulnerabilities actually discourages finding them, because their half-life is shorter so they can't be stockpiled.
- One social problem that worries Amazon deeply is theft. Specifically, people stealing packages from other people's porches—a disproportionate number of these packages will have a logo that looks a little bit like a smiley face connecting the A to the Z in "Amazon." They could lobby for more police presence in upper-middle-class suburbs, but they can also sell a fancy camera meant to detect these thefts, and even validate that the videos are authentic and not AI-generated. This isn't a substitute for law enforcement, but is a great complement: not only does it help with those crimes, but Ring cameras can identify the perpetrators of more serious crimes, and trace the movements of fugitives. It is a little bit dystopian that we've implemented, if not the panopticon, maybe at least a hemiopticon in neighborhoods full of early adopters. But a general feature of growing economic complexity is that more stuff means more stuff worth stealing, that the magnitude of value-destruction from breaking laws is higher, and thus that everyone gets less privacy. Cheaper surveillance on the part of homeowners increases the effective supply of policing from a given number of police in the same way that a community norm of cooperating with the police and turning in friends and relatives does.
- The Internet is amazing both in the scope of what it provides and the fact that we got it to work at all. It's all descended from standards that were set when the network was many orders of magnitude smaller. I got dial-up for the first time when I was seven years old; when my oldest child was seven, it was a given that any Disney movie could be streamed instantly in high resolution, and she first used ChatGPT around the same age. And all of this is built on a system that, at the time it was created, operated under the assumption that every user was either a grad student or part of the US government. It's a miracle that continuously iterating on that has gotten us this far.[3] And one part of that miracle is that Cloudflare explicitly aims to build the architecture we would have used had the Internet been designed with the expectation that several billion people would be using it constantly, and that at least a fraction of them would be some variety of jerk or saboteur. A network that big can have crazy fluctuations in the demand to reach an individual server, which is also not the kind of problem that would be seriously considered when there were a few dozen servers, total. Cloudflare is not the only company that does this, of course; email is less of an open protocol and more a product that's provided by a small number of private companies, and standards do evolve over time. But the nature of their business is that they end up being critical infrastructure; they can't quite ban sites, but they're important enough that when they stopped working with 8chan, it took a few months for the site to come back online. The question of how writers and artists should be compensated if their output gets used to build AI models, especially if those models displace them, is also the kind of question regulations try to answer. In this case, Cloudflare is writing regulations of its own to let people put a price on their tokens.
Their entire business is basically a fully-privatized, extremely technical chunk of the legal system devoted to enforcing de facto laws about how the Internet can be used.
These products all produce a constant stream of weighty decisions, and they tend to be focused on the sorts of questions that legislatures and supreme courts resolve, like the tradeoff in weapons laws between protecting self-defense and reducing violence, or how to balance the safety and privacy impacts of surveillance systems, or figuring out the exact boundary between "annoying" and "unacceptable" for everything someone might do with an Internet connection. In these domains, there's an opt-in private legal system.
Looking backward, economists of the future might describe the default rich-world government system of the 2020s as social democracy with a sprinkling of anarcho-capitalism. And this is a pretty stable system! The platforms that exert law-like power tend to have a lot of pricing power, so in their capacity as quasi-governments they tend to tax at the Laffer maximum. But if they get big enough, they end up in competition with the government, and they wind up being compared to one set of companies that went through this exact cycle—public utilities.
Disclosure: long AMZN.
This problem is endemic in airlines, particularly for leisure travel, where the default consumer behavior ended up being "sort by price, then complain about quality." The airlines have done a decent job of de-commoditizing themselves, helped out by consolidation, which raised the odds that frequent travelers would be repeat customers for the same airline. ↩︎
In the coffee pot case, Amazon may be pricing this based on some estimate of the attach rate of mugs, perhaps the probability of a subscribe-and-save to coffee grounds, etc. High-but-not-100% market share forces companies to assume that they can get repeat customers, and that this should inform how they treat their current customers. ↩︎
And that "continuously" applies to the network itself. Aside from a few BGP hiccups and the odd AWS outage, nothing has happened that approximates the whole thing going down. ↩︎
You're on the free list for The Diff. Last week, paying subscribers read about how two of this year's big IPOs are at companies going through some kind of crisis ($), thoughts on Mythos and uneven distribution of frontier models ($), and how a new tool that systematically tracks how good pundits' predictions are also explains the nature of economic growth ($). Upgrade today for full access.
Diff Jobs
Companies in the Diff network are actively looking for talent. See a sampling of current open roles below:
- Well-funded, frontier AI neolab working on video pretraining and computer action models as the path to general intelligence is looking for researchers who are excited about creating machines that learn from experience, not text. Ideally you have zero-to-one pre-training experience and/or are a high-slope generalist who’s frustrated that the big labs aren't doing this. (SF)
- Series A startup building multi-agent simulations to predict the behavior of hard to sample human populations is looking for researchers and engineers (ML, platform, infrastructure, etc.) to improve simulation fidelity and scale the platform to hundreds of millions of simulation requests. Problem-solving and genuine interest in simulation matter more than pedigree. Experience working with languages with an algebraic type system is a plus. (NYC)
- A Fortune 500 cybersecurity company with decades of proprietary security data is running an internal incubation with a pre-seed startup mentality and a mandate to build something new in AI. They are looking for a founding engineer who can ship fast, an engineer with a security background who’d be excited to contribute to OpenClaw’s security efforts, an AI researcher, and a generalist (ex-banking/consulting/PE background preferred) who wants to wear a bunch of different hats. Comp is FAANG+ and cash heavy. If you want to build something new in AI, but also need runway, this is for you. (SF/Peninsula)
- High-growth startup building dev tools that help highly technical organizations autonomously test and debug complex codebases is looking for senior product managers who enjoy defining developer-facing APIs and abstractions. Experience with fuzzing or property-based testing a plus! (London, D.C.)
- A leading AI transformation & PE investment firm (think private equity meets Palantir) that’s been focused on investing in and transforming businesses with AI long before ChatGPT (100+ successful portfolio company AI transformations since 2019) is hiring experienced forward deployed AI engineers to design, implement, test, and maintain cutting edge AI products that solve complex problems in a variety of sector areas. If you have 3+ years of experience across the development lifecycle and enjoy working with clients to solve concrete problems please reach out. Experience managing engineering teams is a plus. (Remote)
Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up.
If you’re at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.
Elsewhere
Work Trials
Earlier this year, The Diff noted that more software engineering jobs involve a work trial as part of the hiring process ($). This has been corroborated by Business Insider, which cites a few more examples. Work trials introduce a mix of friction and adverse selection into hiring—either someone has to burn some vacation days at their current employer, or you're only hiring from a pool of unemployed people, which is going to include some people who have good reasons for being unemployed. It's a symptom of AI productivity shear: it got easier to cheat in short interviews, so companies moved to a longer process where they could actually measure output.
This phenomenon is a great reminder that it's incredibly hard to model the overall economic effects of AI. If you'd asked someone a year ago if better AI would lower transaction costs, particularly the search cost for finding new employees, they probably would have said yes: it's easier to identify good candidates, send them a lightly-customized outreach email, etc. But, as it turns out, AI actually made it harder to find good people, by making it a lot easier to fake 95% of being a good candidate, long enough to luck into a job offer. We should have similarly low confidence about which other areas will shrink to nothing instead of 10xing in a more intelligence-abundant world.
The Mezzanine Tranche
Meta has committed to another $21bn of spending on CoreWeave's GPUs. Part of the CoreWeave model is that, until some lab gives up on competing in AI entirely, there will always be at least one lab that's relatively short of compute. Meta and Google are both companies that can use the same infrastructure to support a wider range of business tasks, some of whose success is measured in increased ad dollars rather than subscription- or usage-based revenue. So they'll tend to be more stable bidders, whereas demand from the pure-play companies will tend to swing more wildly. Right now, demand for one pure-play company’s products happens to be growing at a particularly breakneck pace, so it’s no surprise that they just signed a first-time, multi-year deal with CoreWeave too.
Politics by Other Means
In two separate incidents, someone apparently threw a Molotov cocktail at Sam Altman's house, and someone fired a gun at it. One of them had participated in the Pause AI Discord (Pause AI has, of course, condemned the attack). While it's a decent guess that the other attacker was motivated by the same concerns, it's a mistake to argue that anti-AI activists bear the primary responsibility here. The Diff has argued that the real phenomenon at work is that assassinations are hard to pull off, and also mostly counterproductive, so the typical person who succeeds at one is going to be smart but with bizarre political views ($). If there's a narrative that's worth blaming here, it's the one that's sympathetic to other attempted or successful assassinations. It is in some sense true that there are people out there who are so destructive to society that we'd be better off if they were dead. However, it's also true that we all disagree on where the line should be drawn, which is why "this person is so bad that they deserve to be killed" is a question resolved by the judicial system rather than by the nearest person with a deadly weapon. Any sympathy with assassination attempts as such makes assassinations in general more likely, and the typical gun owner does not put the high cost of healthcare at the top of their list of political issues. It's just a very shortsighted thing to cheer for.
All of this is procedural, rather than object-level: it's healthy to have a norm that ideologies aren't responsible for people who kill in their name, because if this kind of thing continues, everyone's going to end up believing in some cause that somebody, somewhere, tried to murder someone over. The tension here is that AI risk is articulated as a life-or-death situation, though many prominent figures in that community spoke out against random violence well before this happened (and Pause AI requires volunteers to sign a no-violence pledge before joining. On the other hand, having a form that says “I will not commit terrorism on behalf of this group” does raise some questions; presumably your local softball league doesn’t include this one on the membership application form). There hasn't been any violence (that I know of) motivated by people angry at the invisible graveyard of people who died because of slow FDA approvals. So it's entirely possible to make the point that some people make decisions that risk, or cause, death, without anyone deciding to shoot at them in response.
Lawyers
In my recent interactions with lawyers (all in the category of "let's get the paperwork right" and not "time to litigate!"), I've relied on the assistance of my in-house counsel, ChatGPT, Esq., to flag potential issues in contracts. This is apparently a common practice now, and means that lawyers are spending more time than they expected on fixed-price contracts ($, FT). In one sense, AI tools are rapidly making life easier for lawyers, by making it more convenient to search through large volumes of text. But their clients are, in relative terms, getting legally sophisticated much faster. Meanwhile, because LLMs are imperfect and because clients are not using the prompts a practicing lawyer would, some of the results will be pointless busywork. We're still early enough in the deployment of AI that we don't know for sure what the norms will be; some people will eschew lawyers and just have LLMs draw up and review their contracts, and some legal services will be priced in such a way that if you want to DIY your project alongside a pro, you're paying for every time you get in their way.
Token Shortages
Amazon's autos business is expanding ($, WSJ); it's closer to their third-party marketplace model, where they have listings from existing dealers who get a new sales channel in exchange for paying Amazon a cut. Given that the automotive industry has been such a big chunk of TV advertising, this is the kind of business where it's very high-signal if an online model is taking off: it means that either the sellers are getting a higher ROI from using Amazon for marketing rather than using TV, or that they're more willing to pay for leads when they know they work rather than paying for more general advertising and wondering if it made a difference at all.