I recently had the pleasure of attending a Bloomberg Ideas panel discussion on AI. Don’t miss a chance to visit Bloomberg HQ at least once. It’s a good way to understand Bloomberg, as a person and as a mayor. When you go to Bloomberg, you can’t get lost: I walked out of the elevator and a security guy read my badge and told me where to go next. I hesitated once at the bottom of a stairwell, and another security guard immediately spotted me and directed me onward. Also, the elevator doors close faster than any other building in Manhattan.
The whole environment is designed to make you feel like wasting time would be some weird aberrant behavior. I don’t know what it costs to buy souped-up elevators, but I’m sure it’s worth it. Some people might find the setup creepy, like working in a live-action version of that old Life of Julia ad, but I liked the energy.
Also, I finally had a chance to meet Tyler Cowen and tell him that his blog played a bit part in how I ended up dating my now-wife. Back when we were messaging on OKCupid (to clarify: my wife and I were messaging; I have not contacted Tyler Cowen on OKC), I wanted to establish my Internet-nerd bona fides, so I mentioned that I’d been linked by a prominent economics blog. She mentioned that she had been linked by a very prominent economics blog. It was Marginal Revolution, both times. (Her post: on taking oneself seriously. My post is lost to history, but I believe it was about the causes and consequences of onion futures being illegal.)
Since Cowen is an expert on many topics, it should come as no surprise that he’s an expert on MR lore, so he informed me that at least one couple has gotten married on the site. One economic story you can tell about the last hundred or so years is that, as economies globalize, we compete head-to-head with more people, and need to define our domains ever more narrowly if we hope to be #1. Apparently “used Marginal Revolution to get married” was, in fact, far too broad a domain for me to have any hope of excelling.
Bloomberg’s panel consisted of contributors to the excellent Bloomberg Opinion. “Excellent” should not be read as an endorsement of any specific editorial — I disagree with them probably 95% of the time, but I usually learn something. The panelists were all members of what I think of as the Tofu Fields of knowledge work: journalism, economics, and law. They all absorb the flavor of whatever context they’re used in (“tech journalism,” “labor economics,” “environmental law,” etc.). What the Tofu Fields (which include, in addition to the ones I mentioned, consulting and accounting) have in common is that you join as a generalist, and then you specialize for pretty arbitrary reasons. It’s like imprinting among ducklings: you join an investment bank, the first big deal that comes along after you join is an energy deal, and you spend the next forty years of your life as an energy investment banker.
I have mixed feelings about this. On one hand, I’m a generalist, which is another way of saying that I don’t see any reason to limit the number of topics about which I have beginner misconceptions. Shouldn’t an AI panel have a bunch of Pytorch contributors, DARPA challenge winners, former professional Go coaches, and other people with a nitty-gritty understanding of exactly what AI can do?
Not really, no. A lot of the debates in AI are over when, not if. Even the “if” debates are mostly driven by people who assign a low annual probability to a given problem being solved and a high annual probability to global thermonuclear war. If the entire debate is over how many terabytes of data we need per mile of road before self-driving cars are a win, it’s going to be a really boring debate between people with strong opinions, but no way to prove them, short of the most optimistic person actually going out and solving the problem. Whereas debates about consequences are both eternally useful and a good intuition pump. “If a magical being were to eliminate the most boring 90% of my job — and tell my boss it had done so — would I characterize it as a benevolent genie or an evil wizard?”
One notable thing about the Tofu Fields is that over the last hundred years, the day-to-day nature of the work has changed massively — stockbrokers have transitioned from “runners” hand-delivering orders to telegrams to phone calls to IMs and are continuing past that, for example. And yet a time-traveling stockbroker from 1918 would see many familiar names on business cards, although he’d wonder why they still called it Wall Street when it’s mostly Midtown. A time-traveling accountant could well have known one of the Ernst brothers, or Arthur Young. He’d be confused as to why the move from pencils and ledgers to Excel didn’t manage to eliminate 90% of the workers or 90% of the workday, but the actual institutions would be comfortingly similar.
There is something about these industry non-specific service providers that allows them to gracefully survive massive technological change. We may not know what, exactly, will happen, or when. But they can tell us what else will happen whenever whatever’s going to happen actually happens. That makes this panel the perfect cast of generalists to answer the specific question of how AI will affect the workplace.
Early in the discussion, Shira Ovide noted that white-collar middle-office automation is a bigger deal than factory workers literally getting replaced with robots.
This is absolutely true. For one thing, US manufacturing employment peaked in the late 1970s, and in every economic cycle since, its high point has been lower (and, incidentally, the lag between when it starts dropping from its cyclical peak and when the economy goes into a recession has been getting longer. Perhaps at full employment, factory workers trade up into other jobs).
For another, large swathes of the white collar economy are “knowledge work,” ie using a single human being as an informally specified ad hoc API between two products. If any part of your job can be described as “Enter it from X into Y,” or “Summarize the results in a paragraph,” or “Refresh the pivot table in Excel, then paste the chart into Word,” expect it to get automated at some point.
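The “informally specified ad hoc API” framing can be made concrete with a toy sketch. The export below is a hypothetical example, not from the source, but it shows how little code replaces an “enter it from X into Y, then summarize it” task:

```python
import csv
import io

# Hypothetical "enter it from X into Y" task: a human reads rows out of
# one system's export and types a summary sentence into another system.
# A few lines of code act as the API between the two products instead.
export = io.StringIO("region,sales\nEast,120\nWest,95\nNorth,210\n")
rows = list(csv.DictReader(export))
total = sum(int(r["sales"]) for r in rows)
summary = f"{len(rows)} regions reported {total} in total sales."
print(summary)  # 3 regions reported 425 in total sales.
```

Anything that can be specified this cleanly is exactly the work that gets scripted away first.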
Third, most interestingly, white collar automation forces every single Tofu Field to completely upend its hiring strategy and career cycle. In the glory days, the way it worked was something like this (I’ll use banks because I’ve read more about them, but I understand that other fields are roughly similar): every bank hires an enormous class of new analysts, bigger than last year’s class by far. After two years, some of these analysts will quit, some will burn out, and some will rise to the occasion and become associates. After two to three years, associates rise, similarly. At some point you go back to school and get an MBA. After some variable number of decades, you’re at the top of the heap: a managing director.
All the while, you’re facing pressure to quit. Jobs in industry are a little more stable and not so crazy. You’re making good money after a while; live below your means in New York for a few years, and you’ll have the resources to live indefinitely somewhere cheaper.
But some people persist, and they make it to the top.
The “demographic pyramid” of a growing firm in a services industry is steep. Few MDs, armies of analysts. So over time, some combination of three things needs to happen:
- Title inflation: everyone gets promoted to “MD,” but it means less and less. Some of this has happened, but it seems to have slowed down. I do have to code-switch when I talk to tech people; in finance, “VP” means “someone who joined the industry after college and is now in their late 20s or early 30s,” and in technology it means someone one step below C-level.
- High attrition: if you continuously raise the bar, not only do you get better people, but you lose more people. As long as yours is a prestigious business (see #1 and #3), you’ll still get new recruits, so calibrate your attrition based on how many very senior people you’ll need in ten to twenty years.
- Growth: Suppose you’re a very tracked company: First-year analyst to MD takes exactly fifteen years. Every MD has two VPs reporting to them, every VP has two associates, every associate has two analysts. (No org chart is quite this linear — or exponential.) If you retain each of those eight analysts for the entire fifteen-year period, you need 64 analysts in your incoming class that year. That works out to around 15% annual growth. You can start to see why Yuppies were so excited by these jobs in the 80s: it was entirely plausible that white collar knowledge industry companies could at least promise to grow that fast, which means a senior job is available to everyone who wants one enough to stick it out until the last promotion.
So, what do those eight (or sixty-four) junior analysts do all day? What do first-year lawyers do, or cub journalists, or staff accountants? They do the most repetitive, commoditized work. For whatever reason, the model many service firms have settled on is that you bill by the hour, with the hourly rate mostly reflecting the prestige of the firm and secondarily the value of the specific employee in question. Often the bulk of what clients pay for is stuff they could have done in-house for less, but it’s bundled with services from senior people (advice, connections) that they can’t get anywhere else. It may say something about the culture of corporate spending that “I asked a smart person for advice, and paid him a million dollars” doesn’t fly, but “We got advice and a bunch of grunt work for $5 million, which is $2 million more than the grunt work would have cost us if we’d done it ourselves” does. This is very strange, like finding out that Michelangelo made money by charging per pound of marble and selling the Florentines one statue and a zillion miles of countertop.
This model means that Tofu Field employers don’t have to optimize for every trait they care about when they hire. These companies need senior employees who are smart, hard-working, honest, personable, well-connected, cool under pressure, and willing to put in long hours when necessary. Some of these traits are intrinsically hard to measure, or even hard to have, at the entry level; what would a “well-connected” recent graduate even look like? Others are pretty trivial to track, since they’re what schools admit and grade on.
What happens is that employers satisfice at the entry level and maximize when promoting. Smart and conscientious employees will always be useful, even if they’re not senior management material. So it’s no big risk to hire them and expect them to eventually leave to work in-house with a client. But early on, junior employees can showcase the traits the firm needs at the top. The 23-year-old analyst who doesn’t let an associate steal credit for her work is exactly the sort of person who, as managing director, won’t let a competitor steal the big IPO mandate.
Automation changes that, because automation gets rid of the boring work these companies charge for and use to test their employees. This should radically change the way people get hired and promoted:
- Companies will need much smaller starting classes of junior employees. Building software carries an upfront cost, but scaling software is cheaper than scaling humans, so software will eat all of the tasks that untrained humans do in parallel.
- Sales and management skills matter from the beginning; instead of satisficing and then gradually maximizing, employers maximize from day one.
- Related to this, promotions are faster and more fluid; once you’re in, you can rise fast.
- As each company develops a thicker software stack, institutional knowledge (of coding styles, proprietary libraries, etc.) becomes more valuable.
- Companies will be less risk-averse for entry-level jobs. Since they’re hiring for some skills schools measure and other skills schools don’t, they may select more outliers with odd backgrounds.
- All of this conspires to make tenures longer for the people who do make it in.
Historically, white-collar service firms have employee attrition, but only when they don’t need it. During boom times, they’re losing employees to clients, competitors, and other ventures; during bad times, the only people not anxious to hunker down at Bigco are the ones hunkering down at B-school instead. If most of the junior work gets replaced by robots, expect a painful transition. Maybe the deeply-indebted late-20s business school student, rather than the deeply-indebted mid-20s art school student, will be the face of the next wave of student loan debates.
There will be an even more awkward middle stage, when the bots are up and running but hungry for data. In AI terms, the first step is “labeling the training set”: you show an algorithm 1,000 examples of spam email and 1,000 examples of legitimate email, and it learns what the signs of a spam email are. For an investment banking AI, you might show your AI 100 10-Ks and 100 pitchbooks, and have your AI sort out which financial datapoints matter from that. The sad thing is, structuring this data is exactly the kind of tedious, time-consuming work entry-level people have had to do since time immemorial. A whole new level of training your replacement.
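A minimal sketch of what the labeling buys you, using a toy hand-rolled naive Bayes classifier on made-up emails (everything here is illustrative, not a real spam filter):

```python
import math
from collections import Counter

# The hand-labeled training set -- the tedious part that humans supply.
labeled = [
    ("win a free cruise now", "spam"),
    ("cheap pills limited offer", "spam"),
    ("meeting moved to 3pm", "ham"),
    ("draft attached for review", "ham"),
]

# Count word frequencies per label; this is all the "learning" there is.
counts = {"spam": Counter(), "ham": Counter()}
for text, label in labeled:
    counts[label].update(text.split())

def classify(text):
    # Score each label by the smoothed log-likelihood of the words,
    # then pick the label with the higher score.
    def score(label):
        total = sum(counts[label].values())
        return sum(math.log((counts[label][w] + 1) / (total + 1))
                   for w in text.split())
    return max(counts, key=score)

print(classify("free cruise offer"))  # spam
```

The model only knows what “spam” means because someone sat down and tagged examples; swap in 10-Ks and pitchbooks for emails and the entry-level job description writes itself.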
Towards a More Nietzschean Antitrust Paradigm
Later in the debate, the conversation turned towards Big Data, Big Algorithm, and the pressing social need to ensure that all AI advances are ruled by the Department of X instead of X Inc. The idea is, roughly, this: the first good self-driving car algorithm will save countless lives. If it’s deployed on just one company’s fleet, it will save fewer lives than it could. And what if two self-driving algorithms happen to interact in a way that increases the risk of accidents? If mine says “when in doubt, swerve left,” and yours says “when in doubt swerve right,” then we’ll collide. In economic terms, it makes sense to take something with zero marginal cost, like software, and make it a public good, although as Noah Smith noted, it’s not a slam dunk since the algorithms are somewhat excludable (you have access to safer roads whether you pay or not, but have access to safe driving algorithms only if you buy).
I am somewhere between dubious and freaked out by the prospect of nationalizing data and algorithms once they’ve been demonstrated as valuable, and I elaborated a bit on why. My question, cleaned up a tad (l’esprit de l’escalier but for fast elevators):
The discussion of AI successes as a private good is a fairly pre-2010s approach. What we now know about large technology companies is that they create or capture a monopoly, throw off enormous consumer surpluses while providing their somewhat monopolistic service, relentlessly commoditize the complement, and lead to large accumulations of wealth — but that wealth doesn’t tend to get consumed, or redistributed to people with a high marginal propensity to consume. Instead it gets invested in pure research, mostly exploring space or curing all disease. So it seems that we already have all the benefits of government involvement, except that the people in charge are competent and they care about the outcomes. So shouldn’t we have a more Nietzschean attitude towards antitrust and regulation, where we promote rather than demote the success of these companies, instead?
The dreaded “more of a comment than a question.” Sorry.
But I can comment further.
The history of big tech can be told as a history of privatizing a commons, making it vastly more valuable, and then skimming off profits from that. Sufficiently transformative technology precludes Pareto efficiency because the set of goods and services available post-transformation doesn’t match the set available before, so the two states are incommensurable.
You can see this happen in tech history, over and over and over, at all sorts of levels:
- Ride sharing privatizes public transportation, massively expands it, then adds ancillary products that can free-ride on the existing point-to-point infrastructure.
- Search privatizes the web by replacing URL bars and hyperlinks with searches and shallower hyperlink journeys.
- Bill Gates briefly privatized Harvard’s very nice computer lab to write an early version of BASIC.
- Mark Zuckerberg privatized Harvard’s cachet instead.
You can even model technology businesses as a Coasian exploration process: given this new change in what’s possible in the world, how should property rights be rearranged? In that model, the tech company’s monopolistic market value comes from homesteading a previously-nonexistent form of territory. We should be very leery of discouraging this by threatening to expropriate the results; that’s a tax on innovation.
The pushback I got, from the panel’s excellent emcee Noah Feldman, was basically: people will still innovate. And anyway, venture capitalists already make a lot of money, and if there’s a chance they’ll lose some lucre, they can factor that into their discounted cash flow model and proceed accordingly. No big deal, really.
I don’t want to adopt an entirely Marxist view where someone’s class consciousness determines their economic outlook. But I’ll just note that Noah’s two day jobs, Professor at Harvard and Editor at Bloomberg Media, are two professions that are funded by the increasing financialization of the market, but don’t directly participate in the ups and downs thereof. So if there were anyone I’d expect to naturally view the returns of investors as a big source of funds whose size is fairly inelastic with respect to being redistributed to worthy causes such as, for example, scholarship and journalism, that’s who it would be.
But that’s not really what’s going on here, because my issue is not that venture capitalists would close up shop. “Well, boys, it’s going to take me three years to pay off this yacht rather than two. I’m calling it quits and opening up a food truck.” No, my concern is that we’re making the distribution of rewards to investors different from the distribution of rewards to society, and in the wrong direction. Already, we underinvest in some cool stuff because it’s so hard for companies to internalize the externalities they generate. (In fact, the life cycle of a mature tech company is that it gradually internalizes more externalities with respect to its core products, while finding new sources for step-function growth in value. So to some extent I could be wrong by way of being early; maybe the long-term story is that they do capture asymptotically close to 100% of the value they create.)
You can imagine the power-law curve of social value created as having a slightly longer tail than the curve of market value created. And yet venture capital is a bet on future market value; if the LPs are going to donate to charity, that donation comes out of VC fund’s IRR, not from the side effects of its investments. In that model, a potential tax/regulation/expropriation of transformative tech further shortens the market-value curve. And that doesn’t just mean less tech investment; on the margin, it means more boring tech investment — more VCs writing checks for coffee shops and grilled cheese chains, not for radical life extension and asteroid mining. More “Yo,” fewer flying cars and pseudo-sentient robot cats.
In short, more money spent optimizing the present instead of building the future.
You want a weird future, not a normal future. A normal future is just like the present, with a little less oil in the ground and a little more carbon in the air. Why bother?