In this issue:
- Are the AI Labs Ready for the AI Backlash?—The founders of the big AI labs were probably being completely earnest when they argued that AI is an immensely risky technology. Unfortunately, the public narrative is catching up to some of the risks people were talking about on LessWrong a decade ago. It's instructive to compare this to previous general-purpose technology deployments—there's something special about this one.
- Barter—In private markets, the most liquid asset isn't necessarily cash.
- State Hypercapitalism—The Intel trade turned out well so far, but that's partly because some of the costs aren't properly marked to market.
- AI Pricing—When software has a marginal cost, pricing has to reflect it.
- Reflexivity—High gold prices create an environment where people want to buy gold.
- Complements—A lively market in compute.
Chat with this article on ReadHaus.
Are the AI Labs Ready for the AI Backlash?
One of the strangest features of the modern AI business is that most of the people who got in early were convinced that what they were building could literally end the world—but that it was inevitably going to be built, so humanity's only hope was for them to build it first. This is quite unusual; it's as if Rockefeller realized that oil was an incredibly efficient way to move potential energy around, deduced the impact of greenhouse gasses on the climate, and started Standard Oil specifically because monopolistic pricing would reduce oil consumption and give us time to figure out alternatives.
Of course, AI people also have the usual mix of curiosity, power-seeking, and interest in having lots of money that characterizes people who run successful tech companies.[1] And there are people who go into fraud detection, anti-botnet infrastructure, cybersecurity, etc. because they worry that these are serious problems. But they aren't rushing to build the world's biggest botnet so they can take over all the world's poorly-secured Smart Devices ahead of more malevolent spammers!
This has led to a strange dynamic, where the most cogent criticisms of the big labs come from the people running them. There are still commentators out there who think that AI is a confidence trick, though that argument gets harder to make as capabilities improve. But if you're hearing a technically-informed case that in the near future, AI will let hackers find vulnerabilities in all the world's software, allow DIY bioweapons, or rapidly wipe out all white-collar work, you'll probably hear it from someone working at OpenAI, Anthropic, or DeepMind. These people will also tell you how nice it will be when AI replaces legal, accounting, administrative, and software development expenses, though they acknowledge that what's an expense to the buyer is revenue to the seller.
All of the major labs have had some kind of apocalyptic element to their pitch, since long before the media or politicians were paying attention to what they had to say. Some of this is probably historically contingent: if one lab starts talking about the risks of AI, another lab that downplays them will sound less serious, or less optimistic about capabilities. And having a coherent vision is very motivating—JFK's famous Rice University speech wouldn't have been quite so inspiring had he said that before the decade was out, we'd either put a man on the moon or miss the moon entirely and send a man hurtling off into space.
But it also means that now that they're big enough to be political targets, opposition research is pretty simple: just listen to their latest podcast appearance, and you'll hear about how AI could wipe out half of entry-level white-collar jobs, or all jobs, or that AI is as big a risk as climate change (many other people in the industry will take the over). If you take the perspective that AI is a big deal, will be net beneficial, but will have some unevenly-distributed costs, it's perfectly coherent to talk like this: what they're trying to do is to get everyone to prepare a bit, and in particular to ensure that governments don't face unpleasant surprises. What they probably would have liked was some kind of framework for mitigating those losses—a growing GDP plus a reduction in wages can only add up if either some companies are piling up enormous profits and only reinvesting them in low-risk assets, in which case deficit spending is the only way to stave off a deflationary collapse, or if those super-profitable companies are paying unusually high taxes in order to fund redistribution.[2] AI is politically homeless right now because while politicians increasingly use it to write speeches, it's easy for people on the left to dislike big companies and for people on the right to dislike polyamorous vegan nerds.
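The accounting behind that macro claim can be made concrete with a toy national-income identity. A minimal sketch, in which all figures are invented for illustration: if GDP grows while the wage bill shrinks, the difference shows up as profits, and if the recipients of those profits hoard a chunk of them in low-risk assets, the result is a demand gap that either deficit spending or redistribution has to fill.

```python
# Toy national-income accounting: GDP = wages + profits (ignoring other income).
# All figures are invented for illustration.

gdp_before, wages_before = 100.0, 60.0
profits_before = gdp_before - wages_before  # 40.0

# AI scenario: GDP grows 10%, wages fall 10%.
gdp_after, wages_after = 110.0, 54.0
profits_after = gdp_after - wages_after  # 56.0

# If profit recipients spend or reinvest only half of the increase,
# the rest is a demand gap that deficits or taxes must offset.
profit_gain = profits_after - profits_before
demand_gap = profit_gain * 0.5

print(f"profits: {profits_before} -> {profits_after}")
print(f"demand gap to offset: {demand_gap}")
```

The two policy paths in the text map onto the same ledger entry: the gap gets closed either by the government borrowing and spending, or by taxing the profits and redistributing them.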
Which is pretty understandable for a general-purpose technology. Those technologies tend to rewire the entire economy, eliminating some jobs, creating new ones, and transferring a lot of wealth around.[3] But there isn't really a political constituency for GDP growth: almost everyone likes it (even if they complain about some of its consequences), so to make it the kind of thing that gets 51% of the electorate excited to thwart the other 49%, you have to leaven it with some kind of partisan-inflected giveaway or some way that growth can be bad for the other side. It's not that voters don't want to live in a richer country, but that telling them they can have this doesn't really differentiate the politicians promising it.
The AI labs, in their pathological honesty, have set themselves up so that the big opportunity for policy entrepreneurs is to come up with left- and right-coded ways to dislike AI. That's worth doing even though AI has slight net popularity in polling data, and even that is thrown off by awareness—voters 65+ say they're negative on AI, but a brief visit to Facebook will demonstrate that this demographic absolutely loves AI-generated images, particularly of artists sobbing next to implausible sculptures. The labs tried outsourcing the task of keeping the macro situation in balance to politicians, but politicians didn't take AI's potential impact seriously until voters started getting worried, and now a more blandly technocratic approach won't work. This presents an interesting contrast with the Internet, where load-bearing, industry-shaping laws started getting passed in the mid-80s (ECPA, CFAA), and laws against online media piracy followed in the late 90s, a decade before broadband penetration hit 50% and mass piracy was even tenable. It's an underrated feature of the Internet boom that so many of the big questions got settled early, instead of being debated in the context of travel agents, print journalists, video rental clerks, brick-and-mortar retail workers, and bank tellers losing their jobs.
It makes some sense that it happened this way. The Internet gradually evolved from a government-funded research project to private sector infrastructure, so it was already tied to the government. And once AI labs got going, the constraints in front of them were talent and infrastructure, with regulation as a distant risk and a popular backlash as a pure hypothetical. That approach worked, but the decisions companies make to scale fast take effect on a lag, so their business today is a function of decisions they were making in the political climate of a year or two ago.
But that's the general lifecycle of an effective organization: it solves the problems it's good at solving until the only problems left are the ones it's bad at. Automating away jobs was a great pitch to VCs, and building those capabilities was a great one for researchers. But if it's worth it to pay someone $5m/year to automate away jobs, it necessarily means that there are more votes in appealing to the soon-to-be-automated than the automators, even if that automation eventually makes us better-off overall.
Disclosure: long META, GOOGL.
There are plenty of places to go if you're optimizing purely for curiosity, and the people who did that well are getting pay packages far in excess of those offered to people who optimized for money and went into PE. But even though the people at the big labs are excited by the problem of solving intelligence and deploying it at scale, there's a necessary selection effect where they have to be reasonably good at fundraising, and at coordinating a very opinionated workforce, too. ↩︎
In macro terms, these are basically the same scenario: government spending ramps up, and it's funded by profitable AI companies writing enormous checks to the Treasury Department. The only difference is whether they get to or have to do this. ↩︎
Andrew Carnegie is a fascinating case study here. He got his first job as a telegraph delivery boy in part because his father couldn't support the family as a handloom weaver. Carnegie would go on to put plenty of iron puddlers out of work as expertise shifted from being deployed while the furnace was in operation to being deployed in its design and construction. ↩︎
You're on the free list for The Diff! Last week, paying subscribers read about how LLMs let you unravel the empirical backing, if any, for folk wisdom ($), featuring pre-salted steak and toxic potatoes; why SpaceX wants an option rather than ownership in Cursor ($); and Reddit as the RL gym for persuasion ($). Upgrade today for full access.
Diff Jobs
Companies in the Diff network are actively looking for talent. See a sampling of current open roles below:
- Lightspeed-backed team building the engineering services firm of the future is looking for founding members of technical staff excited about working alongside civil engineers to translate their domain expertise into the operating system that powers the next era of great American infrastructure. If you’re an engineer with strong product intuition, who's energized by access to users, and excited by the prospect of transforming how we design and construct our built world with frontier AI, this is for you. (NYC, SF or Remote)
- AI Transformation firm with an ambition to build an economic world model capable of running swathes of the private, unstructured economy is looking for FDEs, Platform Engineers, and business generalists who understand how to solve valuable problems with technology. (NYC, SF or Remote)
- Well-funded, frontier AI neolab working on video pretraining and computer action models as the path to general intelligence is looking for researchers who are excited about creating machines that learn from experience, not text. Ideally you have zero-to-one pre-training experience and/or are a high-slope generalist who’s frustrated that the big labs aren't doing this. (SF)
- High-growth startup building dev tools to help highly technical organizations autonomously test/debug complex codebases is looking for a senior design engineer to own their design system and build the visual abstractions customers rely on to simulate their software systems, find bugs, and quickly remediate them. A compelling portfolio, a rare blend of design and engineering chops, and a deep understanding of how the internet and browsers work required. (D.C.)
- Series A startup building multi-agent simulations to predict the behavior of hard to sample human populations is looking for researchers and engineers (ML, platform, infrastructure, etc.) to improve simulation fidelity and scale the platform to hundreds of millions of simulation requests. Problem-solving and genuine interest in simulation matter more than pedigree. Experience working with languages with an algebraic type system is a plus. (NYC)
- A Fortune 500 cybersecurity company with decades of proprietary security data is running an internal incubation with a pre-seed startup mentality and a mandate to build something new in AI. They are looking for a founding engineer who can ship fast, an engineer with a security background who’d be excited to contribute to OpenClaw’s security efforts, an AI researcher, and a generalist (ex-banking/consulting/PE background preferred) who wants to wear a bunch of different hats. Comp is FAANG+ and cash heavy. If you want to build something new in AI, but also need runway, this is for you. (SF/Peninsula)
- Newly-minted unicorn applying AI to prediction problems is looking for a head of talent who can source and identify candidates, and also build out a formal hiring process. Ideal profile includes anyone who was the first talent hire at a high-growth company, or a former founder who’s built a team before and can do it again. (NYC)
Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up.
If you’re at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.
And: we're now actively deploying capital into early-stage companies through Anomaly. Our focus is on defense, logistics, robotics, and energy. If you'd like to chat, please reach out.
Elsewhere
Barter
If you were explaining public equities to a venture capitalist who wasn't familiar with them, you could use a simple thought experiment: what if everybody had exactly the same deal flow, and you could invest arbitrary amounts whenever you wanted? It is an impressive achievement that we've synthesized assets that are perfectly interchangeable at scale—Nvidia's market capitalization is just over $5 trillion, with 24.3 billion shares outstanding, and it's hard to think of any other category where there are 24.3 billion discrete units that are all both worth something and perfectly interchangeable.
In private markets, access dominates; neither buyers nor sellers are interchangeable. Taking an investment from Sequoia sends a different signal than taking one from Softbank, and buying from the company is not the same as buying from an employee. The cost of the transaction is high enough already that adding complications doesn't cost too much, and can sometimes lead to gains on both sides. So: an investment banker is trying to sell a house in exchange for shares of Anthropic. Given the effect big IPOs typically have on SF real estate, the seller is doing the equivalent of reducing their stake in Anthropic in exchange for an index fund that's also betting on OpenAI, Stripe, and Databricks, and that happens to pay a dividend in the form of housing. The offer has some slightly financial-engineering-flavored terms, like promising the seller 20% of the upside from what they sell through the end of the lockup period (why not just sell it at a lower price?). Which might be part of the point: the seller's bank helps tech companies raise capital and sell themselves. There are many companies that do this, and now there's one that everyone who is, or once was, at Anthropic has heard of.
Disclosure: long NVDA.
State Hypercapitalism
The US government has put up some good numbers in its national defense-focused long-only equity book, with a 300% return from its $9bn investment in Intel last August. This was pretty good timing, though it presumably was driven less by some view that agentic AI is relatively CPU-bound compared to previous iterations, and more by a view that America's share of the global chip fabrication business could mean-revert with enough money thrown at it. The main reason for caution is that it's hard to tell how much of Intel's return comes from the company's standalone performance, and how much from its counterparties all recognizing that it's a business the US government has made a big bet on. It's entirely possible for some of this return to constitute redistribution from Intel's competitors to Intel; as the nicotine vape industry once discovered, sometimes a competitor who owes a large fraction of their economic profits to the government actually has an advantage.
AI Pricing
AI has an interesting usage-based retention curve. Not only do people tend to use more tokens over time, but anecdotally the heaviest token users are also the ones whose usage grows fastest. So it's very hard to sell at a fixed price, and SaaS companies with AI features are increasingly moving to a flat subscription fee plus usage- or outcome-based pricing ($, The Information). One way to look at pricing is that it has to be set to recoup some set of costs, and historically the main variable cost in SaaS was sales and marketing. Companies had to spend money on R&D, but if they doubled in size it was probably because they doubled the number of people selling the product. But now they have an ongoing cost from inference, and that cost declines per token but increases in the aggregate as people find more ways to use tokens. So, to keep their economics roughly aligned with their customers, they have to charge for it. This ultimately flows through to demand for infrastructure: as inference becomes a bigger line item for more companies, there's more of an effort to make sure it produces a real return.
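A stylized way to see why flat pricing breaks down here (every cost and usage figure below is an invented assumption, not anyone's actual economics): when marginal cost is near zero, a fixed fee's margin is flat regardless of usage; when there's a per-token inference cost and the heaviest users grow fastest, a pure flat fee's margin erodes month by month unless a usage-based component is layered on top.

```python
# Stylized SaaS unit economics with inference costs.
# All numbers are illustrative assumptions.

FLAT_FEE = 100.0          # $/seat/month
COST_PER_MTOK = 2.0       # $ inference cost per million tokens
USAGE_FEE_PER_MTOK = 3.0  # $ charged per million tokens under hybrid pricing

def margin_flat(mtok_per_month: float) -> float:
    """Margin under a pure flat fee: erodes as token usage grows."""
    return FLAT_FEE - COST_PER_MTOK * mtok_per_month

def margin_hybrid(mtok_per_month: float) -> float:
    """Flat fee plus a usage-based component: margin grows with usage."""
    return FLAT_FEE + (USAGE_FEE_PER_MTOK - COST_PER_MTOK) * mtok_per_month

# A heavy user whose token consumption doubles every quarter.
for mtok in (10, 20, 40, 80):
    print(mtok, margin_flat(mtok), margin_hybrid(mtok))
```

Under these toy numbers the flat-fee margin goes negative once usage passes 50 million tokens a month, while the hybrid margin keeps the vendor's economics aligned with the customer's usage—which is the alignment the article describes.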
Reflexivity
The NYT has an investigation of illegal gold mining operations that end up shipping their gold to the US Mint. One thing this illustrates is the reflexive nature of bull markets in gold: people buy gold when they're feeling uncertain enough that a low-rate deflationary recession and an outbreak of inflation both seem like they're worth hedging. But this raises the returns from illicit gold mining, eroding property rights in the developing world and thus making the whole world fit a bit better into that high-risk mode. (Oil prices follow a similar kind of reflexivity: worries about supply disruption make oil exports more profitable, and make it harder to sanction oil exporters, which gives them room to do things like invade their neighbors.)
Complements
Last year, The Diff noted that there's one pair of companies with complementary AI needs and limited competitive overlap, and that Meta would benefit from using some of Amazon's infrastructure. That deal is happening, albeit at a smaller scale; Meta has committed to using "tens of millions" of Graviton cores (ChatGPT's Fermi estimate is around 150 MW and $5bn in spend, with a wide confidence interval around both). These kinds of deals should get more common over time, even though the biggest AI labs have more control over their infrastructure than they used to: the average of everyone's estimate of the supply and demand balance is almost certainly more accurate than any one company's estimate of its own future inference needs, and the more transactions like this happen, the narrower the bid/ask spread will be.
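That Fermi estimate is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, where the per-core power and cost figures are rough assumptions chosen for illustration, not disclosed contract terms:

```python
# Fermi estimate for "tens of millions" of Graviton cores.
# Per-core figures are rough assumptions, not disclosed numbers.

cores = 50e6              # midpoint reading of "tens of millions"
watts_per_core = 3.0      # assumed all-in (chip + cooling overhead) power per core
dollars_per_core = 100.0  # assumed contract cost per core

power_mw = cores * watts_per_core / 1e6
spend_bn = cores * dollars_per_core / 1e9

print(f"~{power_mw:.0f} MW, ~${spend_bn:.0f}bn")
```

With those inputs the estimate lands at roughly 150 MW and $5bn, matching the quoted figures; the wide confidence interval comes from the fact that each input could easily be off by 2x in either direction.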