In this issue:
- Death Markets—There are many ways to bet on death, but in existing markets these are often deliberately constrained. Prediction markets are reinventing some trades that we've learned not to let people make. But the same changes in information access that make these markets viable have also changed the nature of the downside risks.
- Deployment—If AI really is driving productivity growth, keep in mind that it's a technology whose benefits are pulled forward compared to previous deployment cycles.
- Table Stakes—If you're in the business of writing rules, you don't want to opt out of using the tools other people will employ to find loopholes.
- Structuring—Hedge funds as portfolios of bespoke derivatives on talent.
- Money Laundering—A dirty dollar in the right place is worth a premium, not a discount.
- Outsourcing—Liquid labor markets and open Waymo doors.
Death Markets
Polymarket offers plenty of the standard wagers you'd expect from a general-purpose gambling venue—people have been betting on sports games and elections for so long that in 1591, the Catholic Church threatened people who bet on papal elections with excommunication. The typical Polymarket user is betting on sports (which was ~45% of their total bet volume last month), but there are still niche markets of the sort that originally made prediction markets so interesting. For example, you can make a range of bets on NASA's Artemis II. These are mostly markets on when it will launch, but there's also one on whether or not it will explode. Polymarket has clarified that this is not a bet on astronauts dying, just a bet that two containers with more than a thousand tons of rocket fuel, sitting right under the astronauts, will rupture, possibly while traveling at thousands of miles per hour.
The first thing to note about this situation is that it's a bad market, not necessarily in the sense of "evil" but in the sense that it's a poorly-structured way to bet on what it purports to track, and will probably end up getting resolved in a way that bettors consider capricious. It presumably borrows its resolution criteria from markets related to SpaceX, and says that an explosion (defined as "a violent and catastrophic event resulting in the destruction of all or part of the vehicle, regardless of intent or context") counts as a "Yes" if it happens any time "from the start of fueling operations to 60 minutes after it makes contact with Earth upon landing." Since the boosters in question detach and fall into the ocean, the actual terms of the contract just involve parsing whether or not that constitutes the destruction of part of the vehicle (or whether or not it's truly catastrophic).
But the reason the contract is controversial is that it's pretty clearly a way to bet on whether or not people die. Which:
- Is pretty morbid, and
- Creates the obvious financial incentive to sabotage the launch.
These concerns are both worth thinking about, because prediction markets are growing quickly, are less regulated than other betting venues, and are run by people who have much less experience with deciding which markets should or shouldn't be created. Just as blogging lowered the cost of publishing political commentary, reducing its average quality but increasing the amount of content at any given quality level, prediction markets mean that there are more ways to bet on everything, including bets we don't really want people to take.
Of course, there are already ways to bet on people dying. That's what the entire life insurance industry is! But, for very good reasons, the industry has long followed rules that require an insurable interest—a company can buy a life insurance policy on its CEO, for example, but can't buy a life insurance policy on a competitor's CEO.
Looking more broadly, there are already plenty of ways to bet pretty directly on human misery. If you think there's going to be a major terrorist attack, you can buy put options on airlines (one stock options tipsheet coincidentally suggested that subscribers buy airline puts two days before 9/11, which was spectacularly awkward timing but unrelated to the attacks). When a biotech short works, it works because the market thought that someone was going to get treated for an illness, and now they won't. If you can't make up your mind about what bad thing will happen in the future, but absolutely insist on profiting from misery, you could try buying out-of-the-money puts on reinsurers, whose shares will tend to take a hit whenever something really catastrophic but unexpected happens.[1]
The idea of a prediction market about the timing and cause of particular deaths actually predates the modern incarnation of prediction markets, having been floated in the 90s when Robin Hanson was first making the case for prediction markets generally. In 1996, crypto-anarchist Jim Bell suggested that a market allowing users to bet on the exact date on which someone would die would be a great way to organize a decentralized campaign of assassinations against tyrannical government officials. (Bell's personal ideology held that instances of government tyranny include activities like collecting taxes or telling people what to do.) This was clever (among other things). Just as an intellectual stunt, creating a way for strict anarcho-capitalists to wage total war is an accomplishment.[2]
An important mitigating factor in all of these ideas is that fatally sabotaging a rocket launch or gunning down an EPA official is already quite illegal, so these markets don't flip murder into a positive-sum proposition so much as they provide a small financial inducement on the margin. On the other hand, lots of people carry wallets with cash and credit cards, so clearly society can function even if we're all carrying around a modest kill-me-right-now-and-take-my-stuff bounty.
Targeted murders are also harder to pull off as a practical matter. There are more Ring, Flock Safety, and Nest cameras, cars and phones produce location data, buying materials for an IED will also create a paper trail, your relatives' 23andme accounts will make it easier to identify you based on DNA, and just in case your plan is to outsource the assassination and collect a spread, you'll have to hope you didn't pick an online market for hits that started out as a joke and evolved into a sting operation. It's just tough out there for premeditated murderers, and the vast majority of fiction about murder would have obvious plot holes due to the factors above if it were set in the present day.[3] Plenty of people get away with one-off murders, though typically not murders of government officials, executives, celebrities, or other people who'd likely be targeted by an assassination market.
There are cases where tolerating this kind of malincentive is either the best compromise or has a cost that's small enough that people don't notice. The Diff has argued that whistleblower rewards that pay a cut of the frauds they identify actually encourage whistleblowers to maximize the size of the fraud, at least if they're confident they have unique information about it. If the government gives you a call option on an asset that keeps growing in value and that only you know about, you want to delay exercise as long as possible, as the rough sketch below illustrates.[4]
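To put rough numbers on that option-exercise logic (every figure below is made up for illustration, not drawn from any actual bounty program): if the fraud compounds faster than the odds that someone else uncovers it first, the expected reward from waiting keeps rising.

```python
# Toy numbers, not a claim about any real whistleblower program: the tipster
# gets a fixed cut of the fraud at the time they report, but each year they
# wait there's some chance someone else reports it first.
reward_cut = 0.15                # hypothetical share of the fraud paid out
fraud_growth = 0.30              # fraud compounds 30% per year while hidden
p_independent_discovery = 0.10   # annual chance of being scooped

fraud = 10_000_000.0
p_still_exclusive = 1.0
for year in range(1, 8):
    fraud *= 1 + fraud_growth
    p_still_exclusive *= 1 - p_independent_discovery
    expected_reward = reward_cut * fraud * p_still_exclusive
    print(f"report in year {year}: expected reward ~ ${expected_reward:,.0f}")
```

With growth at 30% and a 10% annual chance of being scooped, waiting strictly dominates; early exercise only makes sense if discovery risk, or a cap on the reward, outweighs the growth.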
Some costs are worth tolerating in exchange for better information—someone who's bidding up a disaster contract is also providing a clear signal that someone out there thinks it's likely to happen. And since market-makers don't want to provide a lot of liquidity in cases where the other side might have proprietary information, those trades will move the market a lot. Now that there's a narrative about people engineering outcomes to make money on prediction markets (a narrative that's a lot more common as a hypothetical than as an actual regular occurrence), uninformed traders might be cautious, too. And there's a case for keeping some things legal just because they're an irresistible honeypot for dumb criminals. Someone who's willing to commit a murder for five figures in an indefinitely traceable currency is not all that bright, but they are potentially dangerous. Giving them an opportunity to definitively incriminate themselves might be net beneficial.
This one's a little tricky for two reasons. First, it's mostly a bet on bad things happening to relatively rich people. If a hurricane destroys homes in the US, a reinsurer is probably ultimately on the hook somewhere. But if that hurricane counterfactually hit a poor Caribbean country instead, and killed more people, the hit to reinsurers would be smaller. This risk also has to be within the realm of reinsurable possibility for them to be on the hook at all. And last, disasters can increase the share of risks that get reinsured, and give reinsurers more pricing power, so they can be net beneficial to the industry. ↩︎
One of the paradoxes of libertarianism is that the more strictly you interpret the non-aggression principle, the more of your time you'll spend arguing about what does and doesn't violate it. The fewer laws you want to be bound by, the more of your time you'll spend making legalistic arguments. ↩︎
If you do want to tell a story about murder, it's going to be mostly a story about how to set up a digital paper trail for an alibi, how to case a neighborhood for home cameras, etc. ↩︎
I owe this point to Ace Greenberg, whose collection of memos from his days running Bear Stearns includes one memo outlining Bear's internal whistleblower bounty program, and another shortly thereafter capping the rewards. ↩︎
You're on the free list for The Diff. Last week, paying subscribers got thoughts on crypto cycles ($) (including a cameo from your author worrying about crypto mining as a zero-sum arms race to buy compute, back in 2008), why you'll likely be the victim of secondhand LLM psychosis ($), and why people who work at hedge funds sound so smart ($) (there are very specific reasons that this particular smart-person industry would make you sound uniquely sophisticated when you talk shop). Upgrade today for full access.
Diff Jobs
Companies in the Diff network are actively looking for talent. See a sampling of current open roles below:
- A startup is automating the highest tier of scientific evidence and building the HuggingFace for humans + machines to read/write scientific research to. They’re hiring engineers and academics to help index the world’s scientific corpus, design interfaces at the right level of abstraction for users to verify results, and launch new initiatives to grow into academia and the pharma industry. A background in systematic reviews or medicine/biology is a plus, along with a strong interest in LLMs, EU4, Factorio, and the humanities.
- Ex-Citadel/D.E. Shaw team building AI-native infrastructure that turns lots of insurance data—structured and unstructured—into decision-grade plumbing that helps casualty risk and insurance liabilities move is looking for forward deployed data scientists to help clients optimize/underwrite/price their portfolios. Experience in consulting, banking, PE, etc. with a technical academic background (CS, Applied Math, Statistics) a plus. Traditional data scientists with a commercial bent also encouraged. (NYC)
- Series A startup that powers 2 of the 3 frontier labs’ coding agents with the highest quality SFT and RLVR data pipelines is looking for growth/ops folks to help customers improve the underlying intelligence and usefulness of their models by scaling data quality and quantity. If you read arXiv, but also love playing strategy games, this one is for you. (SF)
- YC-backed startup automating procurement and sales processes for the chemicals industry, which currently relies on a manual blend of email, spreadsheets, legacy ERPs, etc. to find, price, buy, and sell 20M+ discrete chemicals, is hiring full-stack engineers (React, TypeScript, etc.). Folks with exposure to both startups and big tech, but also an interest in helping real-world America with AI preferred. (SF)
- Ex-Bridgewater, Worldcoin founders using LLMs to generate investment signals, systematize fundamental analysis, and power the superintelligence for investing are looking for machine learning and full-stack software engineers (Typescript/React + Python) who want to build highly-scalable infrastructure that enables previously impossible machine learning results. Experience with large scale data pipelines, applied machine learning, etc. preferred. If you’re a sharp generalist with strong technical skills, please reach out.
- High-growth startup building dev tools for wrangling complex codebases is looking for someone who can personally execute the SaaS bear case: review the third-party software they use and figure out what to keep, what to drop, and what to implement in-house. (SF, DC)
Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up.
If you’re at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.
Elsewhere
Deployment
Erik Brynjolfsson suggests that AI is actually showing up in the productivity statistics ($, FT): he estimates that 2025's total factor productivity growth was 2.7%, which is a pretty astounding number for a year that doesn't include recovering from a recession. This roughly lines up with Brynjolfsson's J-curve model of technology deployment, where there's an initial adjustment cost and then high returns. But AI has some unique deployment features. One of the things that holds back general-purpose technologies is that they're so powerful that other parts of the economy need to be rearranged for them to reach their full potential: internal combustion engines need roads to be rebuilt and cities to be redesigned; your website won't change the world if nobody's online yet; electrification started with the killer app of better lighting, but still took two generations to be fully deployed in factories (and we still haven't exhausted the set of potential home appliances). But AI can be adopted in a one-sided way, and can actually route around lack of technology adoption: if a restaurant does delivery but refuses to use the apps, and you want to eat there but refuse to pick up the phone, a text-to-voice model can call them on your behalf, if you absolutely insist. (A more practical use case here is that it's getting very close to practical for someone who wants to electronically trade a voice market to do so, and to actually call traders in parallel.) And, unlike other advances, AI can tell you how to use it: ChatGPT is very happy to help you vibecode an app that will use its API.
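To make the J-curve mechanism mentioned above concrete, here's a toy simulation (parameters invented for illustration, not Brynjolfsson's actual model): firms divert measured inputs into intangible capital that the statistics don't count as output, so measured productivity dips during the adjustment phase and overshoots once the investment pays off.

```python
# Toy J-curve: a firm spends part of its labor building unmeasured intangible
# capital (process redesign, training, data). Measured productivity is
# measured output / labor; the no-investment baseline would be 1.0.
labor = 100.0
intangible = 0.0
for year in range(12):
    build_share = 0.3 if year < 5 else 0.05   # heavy adjustment cost up front
    production_labor = labor * (1 - build_share)
    intangible += labor * build_share * 0.5    # unmeasured capital accumulates
    output = production_labor * (1 + 0.005 * intangible)
    print(f"year {year:2d}: measured productivity = {output / labor:.2f}")
```

Measured productivity starts below the 1.0 baseline and ends well above it: the dip-then-surge shape that gives the model its name.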
What's bullish for AI's relative adoption rate is correspondingly bearish with respect to longer-term impact—even though there's more adoption ahead, it's still true that AI will spread faster and thus saturate available use cases sooner than other technologies. Which means that, compared to them, it has to keep improving faster to drive GDP growth over comparably long periods. So far, that's been working quite well.
Table Stakes
The European Parliament is blocking the use of AI tools by staff, including members of the parliament itself. Their statement is very EU-bureaucratic, in that it refers to unspecified risks, says it awaits some unstated level of clarity, and then announces the actual rule in the passive voice ("[I]t is considered safer to keep such features disabled"). This is, indirectly, an extremely aggressive deregulation plan. LLMs are incredibly good at absorbing and reasoning about text, and text is what lawmakers produce. If lawmakers voluntarily give the private sector stronger loophole-finding skills than their own loophole-closing skills, it will be impossible for them to keep up. So this produces two potentially great outcomes: either the private sector finds clever workarounds to existing rules, or a high-profile EU governing body reverses course and adopts a new technology.
Structuring
Alpha is not quite specific enough to count as a commodity (for one thing, some forms of alpha are only alpha because nobody's aware that they exist). But the ability to produce it is getting a little more fungible over time; an investor who managed an $x billion portfolio using a given strategy can probably produce pretty similar profits at any of half a dozen big firms, and these producers are paid accordingly. But they're paid accordingly in two senses: they're getting more, because there are more bidders for their skills, but more pay is getting deferred to make it more expensive to leave. Pod shop hedge funds manage a portfolio of bespoke bets on particular talents, and since there isn't an organized market for these bets, there's a lot of upside in coming up with good nonstandard structures.
Money Laundering
Bloomberg has a detailed look at the new world of crypto money laundering. One of the surprising details is that there are people who have dollars in the US and want to get them out (drug cartels), but there are also people who want access to dollars outside of China. This has pushed the fee for selling crypto for cash from 10-15% down to, in some cases, zero. Currency controls create strange pools of economic potential energy, and eventually produce weird off-the-books monetary unions.
Outsourcing
DoorDash has a new kind of gig: shutting the doors of Waymos. This is a fun example of the possibilities of a liquid labor market. Almost a century ago, Ronald Coase asked why we have firms at all when we could theoretically hire people ad hoc, and concluded that transaction costs make it smart to buy some kinds of labor in bulk even if that sacrifices flexibility for the employer and employee. So this is a very Coasian result: when there's a liquid market in labor, and there's data on who is good at doing their job, the optimal size of a job can shrink from forty hours a week to one task at a time.