Longreads
- Oliver Bateman asks if we're entering a post-literate world, or if we never were all that literate to begin with. Not many societies have achieved universal adult literacy, and it's hard to export: if you transplant Puritan norms into a culture that isn't terrified of eternal damnation as the consequence of misinterpreting the Bible, the reading-focused culture won't catch on in the same way. This piece is fun and cynical, but it's a good reminder of selection effects: history is mostly that which was written down, which means it's implicitly a history of literate people.
- Dwarkesh Patel interviews Jensen Huang. This is a wonderful artifact, because it's a case where the rationalist subculture responsible for so much of how we think about AI, and for so much progress in AI, encounters the pragmatist-CEO culture that built the hardware necessary for that to happen. For the rationalists, the most important trendlines have always been smooth exponentials, but Jensen is a veteran of many extreme cycles, and simply doesn't believe that the rules change all that much. Jensen is clearly good at communicating a shareholder-friendly message, where Nvidia (disclosure: long) is all-in on a small set of tasks, none of which can be replicated anywhere else, and outsources the rest. But one thing Nvidia can't outsource is its narrative: the company comes across as much more cynical than the labs that build on it.
- Henrik Karlsson has a delightful piece about the hacker mindset, with examples from making absurdly cheap movies and from video game speedruns. It's really about two closely related phenomena. One is that if you ignore the stated heuristics and look at the underlying rules, you can find some interesting shortcuts: a video game is an interface, but what's happening underneath is that variables are being assigned, locations in memory are being modified, etc. The moblins, tektites, and so on are just a representation of this. Thinking that way leads to a related advantage: you can get a lot done if you can keep an entire complex system in your head. So in the end it's memory management all the way down.
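That interface-versus-memory distinction can be sketched in a few lines. This is a toy example with a made-up byte layout (no real game uses exactly this): the enemies the player grinds through at the interface level are, underneath, just bytes that can be flipped directly.

```python
import struct

# A toy "save state": the game's world is just a block of bytes.
# Hypothetical layout (illustrative only, not any real game's):
#   offset 0: player health  (unsigned byte)
#   offset 1: rupee count    (unsigned 16-bit, little-endian)
#   offset 3: moblin flags   (one bit per moblin defeated)
state = bytearray(struct.pack("<BHB", 3, 50, 0b00000000))

# Interface view: fight moblins one at a time, earn rupees slowly.
# Underlying-rules view: flip the bits directly.
state[3] = 0b11111111                 # mark every moblin defeated
state[1:3] = struct.pack("<H", 999)   # max out the rupee counter

health, rupees, moblins = struct.unpack("<BHB", state)
```

The speedrunner's trick is exactly this move: treating the display as a lossy rendering of the state, and operating on the state.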
- Doug O'Laughlin considers the Engels Pause in light of AI. The Pause was a period in the early industrial revolution during which GDP growth accelerated, overall wage growth lagged, and wages for some formerly higher-earning jobs like weavers were crushed. One part of this story is the supply of capital, but another was the redefinition of labor: in the absence of child labor laws, there was suddenly a vast new demographic with minimal labor bargaining power, and there didn't seem to be a limit to how many of them could be hired before wages had to rise. (In fact, one contributor to the supply was that some families lost their primary earner, and anyone in the family had to take any job for all of them to eat.) One reason the process could happen faster today is that the capital investments are more fungible; a textile plant in Birmingham takes a while to affect the wages of weavers in Paris, while tokens can be continuously redirected to whatever their optimal destination is. But this time around, the default expectation is that governments intervene in the economy, and in particular that they redistribute from the rich to the poor. The optimistic version of this downside scenario is that we continuously tweak the safety net to keep things in balance; the pessimistic one is that we compress all of the social upheaval of the Engels Pause—riots, revolutions, and wars—into a tenth the time.
- Rob L'Heureux investigates the complicated reasons for the transformer shortage. The basic sketch is that total energy consumption in the US is still roughly flat, but more of it is shifting to electricity, so we need more transformers. We have most of the ingredients for those, other than electrical steel. And one reason we don't have much of that is that one of America's big heavy-industry success stories of the late twentieth century was Nucor, which made cheap steel from scrap metal; recycled scrap carries impurities, like copper, that make it a poor feedstock for the specialized grain-oriented steel transformers require. And while there are alternatives, there's also regulatory risk—which is exactly the opposite of what you want for a specialized component with a small number of customers. An Operation Warp Speed-style big order for domestically-manufactured transformers, to be delivered at some plausible point in the future, might be enough to bootstrap this domestic supply chain into existence. Realistically, though, we'll wing it, and either some combination of Elon Musk and China will step in and fix this or we'll have to make some annoying tradeoffs.
- This week in Capital Gains, we outline a theory of M&A premia. If you start with the discounted cash flow model, you get many levers for understanding why a given company might be much more valuable as part of another. And if you think about incentives and selection effects, you can also predict that buyers will tend to overestimate that extra value.
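The levers are easy to see in a minimal DCF sketch. The numbers below are purely illustrative, and the "synergy" assumptions (slightly higher growth, slightly lower discount rate) are hypothetical: nudging either one produces a valuation gap that looks like an M&A premium.

```python
def dcf_value(cash_flow, growth, discount, years=10, terminal_multiple=15):
    """Present value of a growing cash-flow stream plus a terminal value.

    Deliberately simplified: constant growth, constant discount rate,
    terminal value as a multiple of the final year's cash flow.
    """
    value = 0.0
    cf = cash_flow
    for t in range(1, years + 1):
        cf *= 1 + growth
        value += cf / (1 + discount) ** t
    value += cf * terminal_multiple / (1 + discount) ** years
    return value

# Standalone: the seller's business on its own (illustrative inputs).
standalone = dcf_value(100, growth=0.03, discount=0.10)

# As acquired: the buyer assumes synergies lift growth and that scale
# or diversification lowers the discount rate.
as_acquired = dcf_value(100, growth=0.05, discount=0.08)

premium = as_acquired / standalone - 1  # the gap a buyer can "justify"
```

Small changes in assumptions compound over ten-plus years, which is why modest-sounding synergy stories can rationalize large premia, and why buyers who select themselves into bidding tend to be the ones with the rosiest assumptions.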
- A Read.Haus user asks why AI agents will use stablecoins. There are two reasons this is the way to bet: first, they're an Internet-native currency, with all of the programmability that implies. If they don't need backwards compatibility with centuries of banking and currency, they can be simpler and faster. But the other reason is that they have a newer regulatory regime, and as long as there's a human at either endpoint, it's probably okay for agent-to-agent stablecoin transactions to be treated differently from fiat. It's okay if something mildly bad happens, as long as you have someone to sue.
You're on the free list for The Diff. This week, paying subscribers read about the model of using an operating business as collateral for the risk that AI will make mistakes ($), why AI for life sciences is a complement to AI for anything else ($), and how the browser is evolving ($). Upgrade today for full access.
Open Thread
- Drop in any links or comments of interest to Diff readers.
- What’s new in the world of logistics? The Diff has covered trucking, drone delivery, and varieties ($) of warehouses ($). But not recently! What’s happening now?
Diff Jobs
Companies in the Diff network are actively looking for talent. See a sampling of current open roles below:
- Well-funded, frontier AI neolab working on video pretraining and computer action models as the path to general intelligence is looking for researchers who are excited about creating machines that learn from experience, not text. Ideally you have zero-to-one pretraining experience and/or are a high-slope generalist who’s frustrated that the big labs aren't doing this. (SF)
- High-growth startup building dev tools to help highly technical organizations autonomously test/debug complex codebases is looking for a senior design engineer to own their design system and build the visual abstractions customers rely on to simulate their software systems, find bugs, and quickly remediate them. A compelling portfolio, a rare blend of design and engineering chops, and a deep understanding of how the internet and browsers work required. (D.C.)
- Series A startup building multi-agent simulations to predict the behavior of hard-to-sample human populations is looking for researchers and engineers (ML, platform, infrastructure, etc.) to improve simulation fidelity and scale the platform to hundreds of millions of simulation requests. Problem-solving ability and genuine interest in simulation matter more than pedigree. Experience with languages that have algebraic type systems is a plus. (NYC)
- A Fortune 500 cybersecurity company with decades of proprietary security data is running an internal incubation with a pre-seed startup mentality and a mandate to build something new in AI. They are looking for a founding engineer who can ship fast, an engineer with a security background who’d be excited to contribute to OpenClaw’s security efforts, an AI researcher, and a generalist (ex-banking/consulting/PE background preferred) who wants to wear a bunch of different hats. Comp is FAANG+ and cash heavy. If you want to build something new in AI, but also need runway, this is for you. (SF/Peninsula)
- A leading AI transformation & PE investment firm (think private equity meets Palantir) that’s been focused on investing in and transforming businesses with AI long before ChatGPT (100+ successful portfolio company AI transformations since 2019) is hiring experienced forward deployed AI engineers to design, implement, test, and maintain cutting edge AI products that solve complex problems in a variety of sector areas. If you have 3+ years of experience across the development lifecycle and enjoy working with clients to solve concrete problems please reach out. Experience managing engineering teams is a plus.
Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up.
If you’re at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.