Longreads
- Robinhood users tend to invest in companies with simpler financials: fewer segments, less geographic complexity. This is good evidence for retail investing as a means of self-expression: people buy things they like to own and would like to talk about owning, and these are often straightforward. There's signaling value in having a complicated reason to own a complicated business, too, but the people who signal that will tend to use Interactive Brokers instead. (Incidentally, here's my referral link.) There's just something fun about being able to say that you're investing in a gold mine, or a GPU company, or a cannabis business, or whatever. Whereas if you're buying into some complicated conglomerate, it's hard to explain. Interestingly enough, there are some complicated companies that retail investors know and like—Disney, Berkshire, and in an earlier era GE. (Disney in particular was kind of a nightmare to model, because there were so many little bits and pieces that got periodically reshuffled. But in that vein, it provides a good argument for why familiarity actually might be a more salient variable than simplicity in driving what retail likes to own. There has always been a lot of money in figuring out what the most sophisticated consumers are doing and then betting that everyone else will catch up.)
- Dwarkesh Patel interviews Dario Amodei. Nothing to see here—he laid out a timeline in his previous interview three years ago, AI is, astonishingly, on track, and we'll have "a country of geniuses in a data center" in the next few years. (One of the striking things about AI is that it has closely followed the path people laid out when they started thinking about scaling laws.) One of the notable parts is that Dwarkesh asks him to justify why Anthropic is spending so little: if some kind of singularity is just a few years away, who really cares about another $50bn in the meantime? He gives a clever, AI safety-flavored answer, which is that these estimates have confidence intervals, and it would be a bad idea for Anthropic to overshoot. He also notes that for the labs, profitability today is synonymous with underbuilding in the past, and that as long as models keep improving, turning a profit is a last priority.
- Gappy Paleologo looks at an article Peter Muller wrote a quarter-century ago about trading and incentives. A very good piece. The consistent thread is that trading is a search for information, and that information leaks all the time—through trades people make, comments they make, things they won't say, etc. And that extends to things like managing a portfolio of strategies: Muller doesn't like the idea of diversification across strategies. Or rather, for a given Sharpe ratio, he'd prefer that a small number of strategies contribute to it rather than a large number of uncorrelated mediocre ones. With a small set of strategies, you can really understand the economic intuition behind them; with a factor zoo of different low-quality signals, not only are they hard to audit, but it's very hard to model their correlation. (Especially if someone else implements a bunch of them and levers up. Presto, they're correlated!)
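To make the diversification arithmetic concrete (my gloss, not Muller's): under the simplifying assumptions of equal weights, equal individual Sharpe ratios, and a uniform pairwise return correlation, the combined Sharpe is s·√(n / (1 + (n−1)ρ)). A minimal sketch:

```python
import math

def combined_sharpe(s: float, n: int, rho: float) -> float:
    """Sharpe ratio of an equal-weight portfolio of n strategies, each
    with individual Sharpe s and uniform pairwise return correlation rho."""
    return s * math.sqrt(n / (1 + (n - 1) * rho))

# Ten uncorrelated strategies: Sharpe scales with sqrt(10), about 3.16x
print(round(combined_sharpe(0.5, 10, 0.0), 2))  # 1.58
# The same ten strategies after crowding pushes pairwise correlation to 0.5:
print(round(combined_sharpe(0.5, 10, 0.5), 2))  # 0.67
```

The second line is the levered-copycat scenario in miniature: diversification benefits evaporate quickly as correlation rises, and that correlation is exactly the parameter that's hardest to estimate for a zoo of weak signals.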
- Gideon Lewis-Kraus has fun with Claude and Team Anthropic. Mechanistic interpretability refers to the practice of trying to figure out what's actually going on in otherwise black-box models, and it's a strange discipline because it's unclear whether it's the study of how statistical models approximate the outputs of intelligence, or the study of intelligence itself. And it's especially disconcerting for writers, because the most impressive and high-impact models have been the ones that write prose, and, well, that's how I make a living, too, for now. Lewis-Kraus is in the same boat: he knows he might be talking to the people who, some day soon, will be able to tell Claude "write a story about Anthropic in the voice of Gideon Lewis-Kraus," and get a decent one fifteen seconds later. But it's too interesting to dwell on these petty concerns. Some of this piece will be old news to people who've been following AI, but there are some interesting tidbits, like how culturally cohesive Anthropic is (and that "A disproportionate number of Anthropic employees seem to be the children of novelists or poets.")
- Jerry Z. Muller on the return on investment of college. This is really about the return on investment from serendipity, but college, done right, is a great way to engineer that. If you're optimizing for something other than the best job offer at age 22, your best bet is probably something like: sign up for every interesting class you can and wait for them to kick you out, in order to maximize your intellectual surface area. Used bookstores provide a smaller dose of the same kind of randomness: if you look for something in particular, it's on a shelf with other works that someone thought were relevant, selected from the books that wound up at a used bookstore in the first place. College is an expensive way to get this kind of serendipity, but in other ways it's a unique one. If someone comes up with a good way to get this particular kind of benefit, without larding it up with all the administrative costs of a modern university, they'll be doing the world a great service.
- On Read.Haus, a reader had a question about the nature of wealth. There's a boring accounting answer, or some pablum/humblebrag about the importance of family and getting enough sleep. Wealth is just the way you get people to want to do what you want them to do, and the way it's created is by being on the receiving end of the same process. Some of this wealth, and an increasing share, is quantifiable in the form of asset ownership. And a lot of your wealth in your early career will consist of the present value of future earnings, which you can't directly capitalize on but should keep in mind when assessing risks.
- In this week's Capital Gains, we consider the natural warehousers of risk. One of the questions any financial product implicitly asks is: who wants to own the downside here (and what compensation do they want)? Viewing the financial system from the perspective of residual risk-takers is surprisingly fruitful.
You're on the free list for The Diff. This week, paying subscribers got thoughts on crypto cycles ($) (including a cameo from your author worrying about crypto mining as a zero-sum arms-race to buy compute, back in 2008), why you'll likely be the victim of secondhand LLM psychosis ($), and why people who work at hedge funds sound so smart ($) (there are very specific reasons that this particular smart-person industry would make you sound uniquely sophisticated when you talk shop). Upgrade today for full access.
Books
Count Zero: William Gibson wrote the science fiction classic Neuromancer on a typewriter, and started Count Zero the same way. But the typewriter broke and so he bought an Apple II. Which makes Gibson, the guy, a character most often seen in fantasy rather than science fiction: a wise man who is both disconnected from society and scarily prescient when he predicts what will happen to it.
The book was written in the mid-80s, and there are little details that basically feel like how someone who'd gone into a coma then and just woken up would try to explain the way we live now. One of the characters uses a device described as "a rectangle of black mirror, edged in gold," but it's not a smartphone, it's a "credit chip." But if the first time you saw an iPhone was when someone tapped to pay, and you were thinking in 80s technology terms, you might be impressed that they'd managed to fit all the logic, data storage, and battery necessary for a handheld payment device into something as small as an iPhone. Later on, a character puts on what sounds suspiciously like an AirPod (a "speaker bead") to have a conversation. There's even a scene where one of the characters has a video call that uses what's basically a Zoom background!
In the book, AI exists, and it's somehow roughly similar to what we think of as AGI and far less relevant to Gibson's fictional world than current AI is to ours. It would be insane to fault Gibson for getting that wrong, in that his descriptions of console cowboys hacking through ice in the metaverse could easily be read as poetic descriptions of a black-hat hacker using an LLM to find their way around whatever defenses CrowdStrike has put up. Even if you could imagine that computers would exhibit the traits we associate with intelligence, you could still get the wrong idea by thinking of it as a human-like thread of consciousness, rather than on-demand sparks of intelligence available at different quality levels. (If you wanted to predict what that would be like, you'd want to read fantasy rather than science fiction: it's like a warlock summoning a familiar or demon to ask it a question, and then banishing it back into the void after getting an answer.)
Like many futures imagined from the vantage point of the 1980s, he assumes that the average American would have way more interactions with Japanese companies, investors, bosses, etc. than actually takes place. It's pretty reasonable for science fiction writers to take some high-salience trend and extrapolate it, rather than having to come up with some theory for why the trend reversed. (On the other hand, Gibson alludes to a Russian businessman at one point, implying that he did have a view on which way the Eastern Bloc was headed.)
That's a lot of the fun of these books: there's a definite vision of the future, some of it turned out to be right, some of it didn't extrapolate far enough, and some of it overestimated how much progress we'd make. Gibson is probably disappointed that we're still wearing similar outfits made out of materials that people in the 80s would have recognized, but that's partly because Gibson really likes writing about textures and materials. Plenty of people write about how the future would look different, but Gibson is able to worldbuild more by imagining how it would feel.
In the 2000s, Gibson got tired of writing science fiction and started writing books set a few months before their publication date. The nice thing about good science fiction is that if you wait long enough, it becomes a sci-fi story about a slightly tweaked version of the recent past, too.
Open Thread
- Drop in any links or comments of interest to Diff readers.
- Which other science fiction classics have unique comments on the current AI situation?
Diff Jobs
Companies in the Diff network are actively looking for talent. See a sampling of current open roles below:
- Ex-Citadel/D.E. Shaw team building AI-native infrastructure that turns lots of insurance data—structured and unstructured—into decision-grade plumbing that helps casualty risk and insurance liabilities move is looking for forward deployed data scientists to help clients optimize/underwrite/price their portfolios. Experience in consulting, banking, PE, etc. with a technical academic background (CS, Applied Math, Statistics) a plus. Traditional data scientists with a commercial mindset also encouraged. (NYC)
- Series A startup that powers 2 of the 3 frontier labs’ coding agents with the highest quality SFT and RLVR data pipelines is looking for growth/ops folks to help customers improve the underlying intelligence and usefulness of their models by scaling data quality and quantity. If you read arXiv, but also love playing strategy games, this one is for you. (SF)
- YC-backed startup automating procurement and sales processes for the chemicals industry, which currently relies on a manual blend of email, spreadsheets, legacy ERPs, etc. to find, price, buy, and sell 20M+ discrete chemicals, is hiring full-stack engineers (React, TypeScript, etc.). Folks with exposure to both startups and big tech, but also an interest in helping real-world America with AI preferred. (SF)
- Ex-Bridgewater, Worldcoin founders using LLMs to generate investment signals, systematize fundamental analysis, and power the superintelligence for investing are looking for machine learning and full-stack software engineers (TypeScript/React + Python) who want to build highly-scalable infrastructure that enables previously impossible machine learning results. Experience with large scale data pipelines, applied machine learning, etc. preferred. If you’re a sharp generalist with strong technical skills, please reach out.
- A hyper-growth startup that’s turning the fastest growing unicorns’ sales and marketing data into revenue (driven $XXXM in incremental customer revenue in the last year alone) is looking for a senior/staff-level software engineer with a track record of building large, performant distributed systems and owning customer delivery at high velocity. Experience with AI agents, orchestration frameworks, and contributing to open source AI a plus. (NYC)
Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up.
If you’re at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.