The Economics of Pseudonymity
Privacy discussions come in two forms: there are nebulous worries, often articulated using the term "creepy," about how much companies learn about people’s desires and interests in the course of targeting ads. And there are also concerns about how much people can learn about one another through the Internet, and what they can do with that knowledge. Finance Twitter is full of people who don't use their real names; once, I had the disconcerting experience of meeting someone for the first time twice, once as a pseudonymous person with a cartoon Twitter avatar, and another time as a professional contact at a business meeting.
Navigating this second kind of privacy situation is complicated.
In the short term, changes in this kind of privacy are zero-sum, which means there's a constituency that will strongly resist any new system because they're doing fine under the existing one. More privacy for everyone means more privacy for bad people, and the more general the tools are, the more likely the bad people will be salient. Do you like political dissidents in Russia, Iran, and China more than you dislike terrorists or sex traffickers? Are "like" and "dislike" even commensurate? They have a syllable in common, but they're wildly different feelings, and people are uncomfortable trading one off against the other.
Why Do We Care?
Social norms haven't kept up with technology. Social shaming has always existed, and is, in fact, a pretty useful social technology; many norms have network effects. A norm against tardiness, for example, is infectious: it means that if you're going to be late, you'll be the sole reason something starts late, or it will start without you. A bit of shaming makes everyone better off. (Ecuador once ran a national PR campaign against tardiness, starring the country's Olympic gold medal winner, because of the high cost of late and uncertain meetings—if people might be upwards of half an hour late, a thirty-minute meeting requires a one-hour block of time.)
Shaming works inside small communities, where it's a low-cost way to maintain cohesion. It works especially well as a distributed way to maintain it: anybody is empowered to point out deviation from the norm. Some organizations take this pretty far; I've read about two cases where military units gave an "award" to the worst performer at a task while training. When you need reliability, standardization, and coordination, that's one way to get it. The trouble with online shaming is twofold:
- The scale on which we need people to be reliable, interchangeable, and all working towards the same goals is probably not global, nor is it bounded solely by people who use a given platform and speak a given language. Since timelines and homepages prize recent content, the usual rule is that there's one person who is The Worst Person Ever To Exist for a period of about 24-48 hours, after which they're pretty much forgotten. As the classic tweet goes: "Each day on twitter there is one main character. The goal is to never be it" (there's also a recently-topical niche version).
- The mechanism of shaming is all against one, where "all" is defined as everyone who a) wants to maintain a certain behavioral standard, and b) is aware of a violation of it. It's probably good for a military unit to encourage the 20th-best out of 20 members to try harder, but it's a little excessive to identify someone as the worst person among the world's billion-plus English speakers on the Internet. The Internet has magnified the collective nature of shaming, and warped the distribution. Instead of a handful of people each day getting dirty looks and mild jokes for various infractions, the one main character gets their life pretty much ruined. (Or, as in the case of Basecamp, things go roughly back to normal, and the only evidence that anything happened is when you do a Twitter search for the handle of the person in question plus any obscenity you can think of.) The Internet fattened the tail of the shame distribution; a Twitter pile-on is a bit like Oreos or fentanyl, a ridiculously concentrated version of something that might have existed in our ancestral environment, but that was orders of magnitude weaker.
Like a lot of other technology changes, social media takes a hardwired behavior and applies it in a new context. And this behavior is very, very hardwired.1 The impulse isn't going to change, but the rules might—in a few different ways.
Social Technology and Technology
Social media is not the first time a new technology has made previously acceptable behavior patterns unworkable. Clay Shirky's great speech on gin and television gives two examples: gin was much cheaper than other forms of alcohol, and that led to the gin craze, which seems to have absorbed a truly excessive amount of England's productivity until laws and norms changed and people stopped drinking quite so much. TV has also forced some changes in norms, because it's so absorbing (and, more recently, because ubiquitous screens with bright, blue-light-heavy displays mean that TV can keep you awake when you're too tired to do anything except watch another episode).
Plenty of modern lifestyle trends can be interpreted as reactions to technology that's a bit too rewarding:
- Paleo dieters, vegans, intermittent fasters, and users of Noom or MyFitnessPal are all aware that modern diets offer lots of rewarding foods.
- Anyone who sets their screen to grayscale or uses screentime tools is adjusting to the addictiveness of apps.
- People who sign up for a year of gym membership or who buy a Peloton are, ideally, leaning into the sunk-cost fallacy: they're hoping that the irrational desire to get the maximum return out of an unrecoverable investment exceeds the irrational desire to get less exercise than they should.
- Users of robo-advisors or automatic investments are subverting the usual tendency to a) undersave, or b) overestimate their ability to invest on their own (or, equivalently, overestimate their ability to tolerate the inevitable drawdowns that good investments often experience).
These kinds of solutions typically blend norms and technological fixes. Often, the tech fix is a way to bootstrap the norm: it's not a complete solution, but it's a regular reminder to exert a little willpower in the right direction.
There are two elements here:
- Normalize speaking and working under pseudonyms, while Government Names only get used when absolutely necessary. In this ideal world, the only person at your company who would know your real name would be the one who does the tax and immigration paperwork. (Even that could be tightened up: perhaps in the future, privacy-conscious HR software companies will offer strict assurances that no human being sees the plaintext of a user's real-world identity.) And when you tweet about politics, cultural issues, or anything else, you’d do it under a fake name. Your real identity is a primary key that links together many different parts of your life, and it's hard to undo those connections.
- Build identity systems that let people leak bits of information that give them credibility, without leaking their real-world identity. Anonymous people can pretend to be whoever they want to be, and there's no cost to this unless they get outed or get called out after making a meaningful mistake. "Employee #3 at a company that exited for over $1bn five years after founding" is a good credential to have, but right now verifying it means leaking a lot of identity; if there's a way to partially leak it—to sacrifice a few of your 33 bits (2^33 is roughly the world's population, so 33 bits is enough to single out one person) without giving up all of them—that would give more pseudonymous people the opportunity to try new things.
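The 33-bits arithmetic can be made concrete. Each disclosed attribute leaks a number of identifying bits determined by how rare it is in the population. The sketch below uses invented population shares purely for illustration; real attributes are correlated, so summing their bits overstates the leak:

```python
import math

def bits_leaked(population_share: float) -> float:
    """Identifying bits revealed by disclosing an attribute held by
    the given share of the population (the attribute's surprisal)."""
    return -math.log2(population_share)

# Hypothetical shares, for illustration only. Real attributes are
# correlated, so the summed total below is an upper bound.
claims = {
    "lives in the United States": 0.04,
    "works in software": 0.005,
    "was an early employee at a $1bn+ exit": 0.0000005,
}

total = sum(bits_leaked(share) for share in claims.values())
# `total` is how many of the ~33 bits needed to single out one person
# among ~8.6 billion these three claims jointly burn.
```

On these invented numbers, the three claims together come to roughly 33 bits, which illustrates why a credential like "Employee #3 at a company with a $1bn+ exit" is close to a full identity leak on its own, and why sacrificing only a few bits at a time is valuable.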
Here's how Balaji Srinivasan describes that system:
> Let’s say, for example, you have a large Twitter following, and you boot up a new pseudonym; that pseudonym starts with nothing. And so it takes you time to boot up a new following, that’s a whole effort, that deters a lot of people from doing it, they’re starting all the way from scratch. And I thought about this problem, because the thing is that with cryptocurrency, we’ve actually solved this where you can set up a username and another username and you can use ZCash to transfer money from one name to another pseudonymously. So money can be transferred; reputation, though, didn’t seem to be, until I realized that you might not be able to do it for followers, which are non-fungible, that is to say, one person is not, like, the same as the other, but you could do it on a site like Reddit, where you had Karma that was accumulated.
>
> And so someone who had 10,000 Karma under one pseudonym could move it, just like you use ZCash to transfer digital currency (ZCash was basically the truly anonymous version of Bitcoin, the truly private version of Bitcoin). Just like you use ZCash to transfer cash from one name to another, you could use ZKarma to transfer Karma from one username to another and thereby reboot under a new username. And that username would have no comment history, but people would be like, “Okay, that’s a 50,000 Karma person or whatever the number is, clearly that’s somebody whom our community esteems to some extent, who has offered a lot. Perhaps we should listen to what they have to say because they felt they had to go into a pseudonym for this.” And of course you could do this with not just a global Karma, but subreddit-specific Karma.
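To make the transfer mechanics concrete, here is a toy sketch of the accounting only (all account names are hypothetical). The whole point of the real proposal is the part this sketch deliberately omits: the ZCash-style zero-knowledge layer that hides the link between the two handles.

```python
class KarmaLedger:
    """Toy illustration of the ZKarma idea: karma moves between
    pseudonyms like a currency. A real system would hide the link
    between sender and receiver with zero-knowledge proofs; this
    sketch only shows the debit/credit accounting."""

    def __init__(self):
        self.balances: dict[str, int] = {}

    def credit(self, account: str, amount: int) -> None:
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, src: str, dst: str, amount: int) -> None:
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient karma")
        self.balances[src] -= amount
        self.credit(dst, amount)

ledger = KarmaLedger()
ledger.credit("old_handle", 50_000)  # reputation earned under one name
ledger.transfer("old_handle", "fresh_pseudonym", 50_000)
# fresh_pseudonym now carries 50k karma with no comment history attached
```

Note that because the ledger is public, a naive version like this is trivially linkable; the transfer itself is the identifying event, which is exactly why the privacy layer is load-bearing.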
This is a very promising sketch of an idea. There's a soft social norm that someone with credibility in one domain can start talking about some other topic and be taken more seriously than the average person. People grumble about this: what does a genetic testing entrepreneur know about cryptocurrencies, anyway? What does a crypto founder know about novel respiratory infections? As it turns out, some kinds of expertise are pretty fungible, and social media could reflect this.
Life in the Pseudonymous Economy
What would the world look like if reputations were fungible in this way—if you could back claims with the authority of your previous accomplishments, without actually identifying yourself?
For one thing, we would have known about Covid a lot earlier. One of the barriers to early Covid-panic was the fact that it was low-status to worry about disease, and very low-status to do things like hoarding food, refusing to shake hands, and putting copper tape on doorknobs. Plenty of people were privately worried well before they were publicly worried. Many of these people worked in tech, and had track records that led to substantial net worths (some of my evidence for this is anecdotal, and some of it is from the fact that Zoom's stock started outperforming the broader market in February, as Covid paranoia discreetly ratcheted up. The people who were reacting financially to Covid—using markets as a decentralized, anonymized prediction market about current events—were disproportionately likely to phrase predictions about the future in terms of asking “which high-growth software company benefits from this trend?”). If more of them had been able to say "I won't tell you who I am, but I'm credible at the job of predicting the consequences of exponential trends, and let me tell you about a really big one," that could have shifted public discourse.
But it will also lead to similarly bold predictions that don't pan out. Being able to link aggregate reputation to a view, without the reputational downside, converts an opinion from a future to a call option, and the lack of downside risk encourages the pursuit of high-variance outcomes. There's a historical precedent for this: the Tulip Bubble reached its maximum velocity when a legal ruling changed tulip contracts from futures to calls. Do the same thing for way-out-there predictions, and you get the same results: there will be a massive bull market in wildly speculative theories.
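The futures-versus-calls distinction is just a payoff asymmetry, easy to see with made-up numbers:

```python
def future_payoff(price: float, entry: float) -> float:
    # A future (or a prediction made under your real name):
    # you keep both the upside and the downside.
    return price - entry

def call_payoff(price: float, strike: float) -> float:
    # A call (or a prediction backed by detachable reputation):
    # you keep the upside while the downside is capped at zero.
    return max(price - strike, 0.0)

entry = strike = 100.0
for outcome in (150.0, 100.0, 40.0):
    print(outcome, future_payoff(outcome, entry), call_payoff(outcome, strike))
```

Because the downside is capped, the expected value of the "call" rises with the variance of outcomes, which is exactly why detachable reputation subsidizes high-variance claims.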
One benefit of this bull market is that stale consensus will have a shorter half-life. More pseudonymous endorsements of ideas will lead to faster preference cascades—you won’t know quite who agrees with some idea, but you will know if it’s secretly more popular than you thought, and that will encourage people to update their views more frequently. As it turns out, one of the real-world problems crypto can solve is the emperor-has-no-clothes issue; we’ll all know that a majority of courtiers believe the emperor is nude before we know which individuals have that belief.
A pseudonymous economy would also lead to radical compartmentalization, which would be challenging. Part of the fun of some jobs is who you get to work with, and who you get to hang out with. Completely separating work and social domains would be untenable for many people; some companies form a cluster of like-minded people who are independently fun to spend time with, and under a pseudonymous economy that either means irrevocably linking some of your pseudonyms or having some work-hard/play-hard pseudonyms that are tied to social networks rather than tasks.
This world would also change the signaling value of showing face, in a literal sense. Someone who makes a prediction under their own name, or on video, is implicitly agreeing to own the downside as well as the upside, so it's a way to show higher confidence. Making a pseudonymous bet backed by an anonymized reputation is a way to express preemptive regret about the consequences, which colors what kinds of statements will be made that way.
Taking bold reputational risk without back-linking reputation to where it came from would also turn into a Martingale-style trading strategy: each individual statement has some upside and capped reputational downside, but over time, patterns might emerge. If the system involves copyable karma, a diligent analyst might notice that 50k Karma account A stopped posting right when 45k Karma account B started, and that they both wrote at the same time of day and cited some of the same sources. If reputation is transferable, then the rule is to look for debits in one account that coincide with credits in another. (Reputation tumblers will help, but they provide statistical, not absolute, safety. And if all this activity is happening on a blockchain, then the bet is not just that the current action secures pseudonymity, but that no future action breaks it.)
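The diligent analyst's first pass can be sketched as a simple timing join: flag account pairs where one handle goes quiet right before another appears. The account names and dates below are invented for illustration, and a real analysis would layer on stylometry, posting-hour histograms, and source overlap:

```python
from datetime import date, timedelta

# Hypothetical activity logs: account -> (first post, last post)
activity = {
    "account_A": (date(2019, 1, 5), date(2021, 3, 1)),  # 50k karma, goes quiet
    "account_B": (date(2021, 3, 3), date(2023, 6, 1)),  # 45k karma, appears
    "account_C": (date(2018, 7, 2), date(2023, 6, 1)),  # unrelated
}

def linkage_candidates(activity, max_gap_days=14):
    """Flag pairs where one account's last post shortly precedes
    another's first post (the debit/credit coincidence described above)."""
    pairs = []
    for a, (_, a_last) in activity.items():
        for b, (b_first, _) in activity.items():
            if a != b and timedelta(0) <= b_first - a_last <= timedelta(days=max_gap_days):
                pairs.append((a, b))
    return pairs

print(linkage_candidates(activity))  # → [('account_A', 'account_B')]
```

This is also why reputation tumblers only provide statistical safety: they can blur the amounts and add delay, but every extra post is another sample for the join.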
Right now, trusting pseudonyms is a proof-of-work system. We respect Satoshi's judgment on some topics because he did, after all, figure out Bitcoin. And we respect Banksy's ability to game the art market because we literally see the results. It could function as a proof-of-stake system instead, but, as with other proof-of-stake systems, there's a tradeoff in trust.
There's some evidence that the world is moving in this direction. As Balaji points out, services like Gmail and Twitter already support multiple accounts. Part of the use case for these services is that you want to tie different things to different identities. The new round from Privacy.com, now renamed Lithic, is also evidence that people want to detach spending from identity. Lithic lets people spin up single-purpose credit cards, which a) keeps them safe from merchants who make it hard to cancel subscriptions, and b) means that they can pay for things with another intermediation layer between their payment identity and their real world identity. So there are growing layers of insulation between different personas.
But the difficulty of transferring reputation without sharing identity may be a load-bearing bug in the current system. It still means reputations are somewhat fungible, but it also means that you can earn a reputation in one area and then completely trash it by doing something else. To the extent that that's a problem, it's a problem with the reputation-updating function, not with the underlying protocols. And that, like the gin boom, television, the standard American diet, or smartphone addiction, requires behavioral norms that can be boosted by, but not replaced with, new technologies.
Fungible Energy Jobs
"Re-skilling" is sometimes proposed as a solution to the decline of industrial jobs: sure, there are fewer factory workers in the US than there used to be, but they can always learn to code! This is theoretically true, but my heuristic before reading any whitepaper on the topic is to Google the author and see if a think tank hired a laid-off coal miner to write it, or if they hired someone who has spent most of their career writing whitepapers. This saves time.
But there is a case for reskilling happening, at least in one industry: the former RigUp, which used to focus on getting contractor workers jobs in the oil industry, is now raising more money and rebranding to WorkRise as it broadens its target industries. Notably: "Last year, Workrise placed more than 4,500 workers, or nearly a third of all its workers placed in 2020, in renewable-energy jobs. Specifically, the company says in total, it placed 8,000 unique workers in jobs in 2019, with 13% in renewables." One nice thing about WorkRise is that its entire business is built around making workers more fungible, by handling some of the administrative tasks involved in hiring them and by connecting them to multiple companies. So it's in a position to find which skills transfer, and to move people to where the growth is.
Robinhood's Other IPO News
Underwriters say that retail investor demand is hard to predict, but I suspect that someone at Robinhood has already found the correlation between a) the size of an IPO price pop, and b) the number of Robinhood users who look for a stock quote for the to-be-public company. Because Robinhood users are doing more of the research on the app itself, Robinhood may know more about their demand than a bank knows about institutions' plans.
Retail investors are less valuation-sensitive than institutional ones, so an IPO process that cuts retail investors out will lead to bigger IPO pops for recognizable companies and smaller ones for less famous or glamorous ones. Figs, which did lots of advertising to promote its clothes, is a good example of the kind of company whose name recognition could lead to an IPO pop regardless of what the ultimate price is, so it's a natural company to test this. And once there's an additional source of buying power for companies that plan to go public, it's hard for them to ignore it.
Financial Engineering in Home Improvement
Hearth is a SaaS product that helps contractors manage their business, including offering their customers financing; it's just raised a round. This is very similar to the economics of revenue-backed lending: the useful life of what contractors build is years or decades, so it makes sense to align the customer's payments with that. But the contractor's costs are all upfront, so it doesn't make sense for them to do the aligning. Putting a well-capitalized company in the middle lets both sides get the financial structure that makes sense.
The Case for Steeply Progressive Ad Taxes
Paul Romer makes an argument for a steeply progressive tax on online ad companies' revenue, mostly as a way to make powerful companies a bit less powerful. The tax is structured with marginal rates that kick in at $5bn, and rise to 72.5% for revenue over $60bn. To the extent that any plan that immediately wipes out a trillion dollars of market cap can be considered elegant, it is pretty elegant: it gives companies the flexibility to either spin off businesses (breaking up Google and YouTube, or Facebook and Instagram) or to accept punitive financial consequences.
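The spin-off incentive comes from the marginal structure of the schedule. In the sketch below, only the $5bn entry point and the 72.5% top rate above $60bn come from the description of Romer's plan; the intermediate brackets are illustrative placeholders, not his actual schedule:

```python
# Marginal schedule as (threshold, rate applied above that threshold).
# Only the $5bn entry point and the 72.5% top rate are from the text;
# the intermediate rates are illustrative placeholders.
BRACKETS = [(0.0, 0.0), (5e9, 0.10), (20e9, 0.30), (60e9, 0.725)]

def ad_tax(revenue: float) -> float:
    """Tax owed on ad revenue under a progressive marginal schedule."""
    tax = 0.0
    edges = BRACKETS[1:] + [(float("inf"), 0.0)]
    for (lo, rate), (hi, _) in zip(BRACKETS, edges):
        if revenue > lo:
            tax += (min(revenue, hi) - lo) * rate
    return tax

# The marginal structure is what creates the spin-off incentive:
# two separate $50bn ad businesses owe less combined than one $100bn one,
# and anything under $5bn owes nothing at all.
```

This is the sense in which the plan is "elegant": the tax itself prices the choice between staying integrated and breaking up, rather than mandating either.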
The implication of this tax is that large digital advertising businesses are, by their nature, a negative externality. If so, it's entirely appropriate to punish them in a way that scales with size. The complication is that they also produce positive externalities: a company that benefits from network effects has an incentive to expand the network, and in tech companies' case that means getting more people online and making the Internet better. The same traits that make these businesses hard to dislodge, and very profitable at scale, give them an economic justification to subsidize complements that people like—good browsers, good analytics, cheap or free Internet access, etc. To calibrate a digital ad tax correctly, those benefits need to be quantified, too. And that can be a challenge, because the big ad platforms use these complements to extend their competitive advantage, which can make them reluctant to disclose details.
This might be the real benefit to a proposed digital ad tax: tech companies have an incentive to under-disclose some of their strategy because the FTC wouldn't like it, but this gives them a reason to disclose more so there's a weaker case for subjecting them to a steep tax.
Like OPEC In a Good Way
OPEC—or, really, Saudi Arabia—used to function by adjusting the world's supply of oil in ways that benefit oil producers. The most salient way this happens is when they cut output to make prices go up; since oil consumers outnumber oil producers, this is generally unpopular. But OPEC has another function: to raise production when prices spike. This is not just a short-term profit-maximizing approach, although it looks a lot like it. One thing that makes oil assets more valuable is long-term adoption of oil-burning technologies, and that adoption is partly a function of how confident people are in the stability of prices. A massive run-up in oil is good for producers in the short term, but very bad for them in the long term if it encourages fixed investments in alternatives. So part of the cartel's job is to blunt price swings in both directions to make internal combustion engines a good risk-adjusted investment as well as a good absolute-return one.
TSM occasionally gets compared to Aramco, in the sense that it's a critical input into the global economy in a geopolitically inconvenient place. And TSM is following the playbook, by increasing its output of automotive chips by 60%, at the cost of bumping other customers down in the queue ($, Nikkei). The dominant producer in a category has a different set of incentives from everyone else: they're uniquely positioned to affect how customers think about their reliance on that category of goods. And, as in this case, they're taking an economic hit in order to ensure long-term demand.
Elizabeth Marshall Thomas' wonderful book, The Old Way, has a brief aside in which a lecturer at Harvard describes witnessing some violent chimpanzee behavior without qualifying that the violence was, in fact, bad. The students are horrified. Thomas: "This scene, too, was older than our species, wherein a group of primates mobs a conspecific who has temporarily fallen in status." ↩