The Rise of Single-Result Search Products

Plus! Mango Manipulation; The Other Emerging Labor Shortage; China's Labor Shortage; Unspinning; Bad News; Diff Jobs

The Rise of Single-Result Search Products

Back when Google first redefined search, "search" came to mean a list of links to webpages, ranked by their relevance to a string of text supplied by the user. People who work in SEO still sometimes wax nostalgic about "ten blue links" (in part because it used to be much easier to get a site to #1 on the list). That was never a natural end state for search, just a convenient way to split the difference between a) giving everyone the best possible result, b) giving users more choices (and, not unrelated, collecting data on which results they clicked to inform both that query's rankings and rankings overall), and c) having a format that eventually lent itself to ads.

The direction Google wants search to go in is away from "results" and towards "an answer," and other companies in the search business, the ones not bound to legacy decisions, often start there. A financial data product like Sentieo is partly trying to be an easily-navigated way to display lots of information from and about a company, and is trying to approach the asymptote where it can answer questions like "What have they historically said about seasonality (and does it line up with their excuse for missing numbers last quarter)?" A product like Uber or Lyft is also search—consider the space of cars in the network, narrow it down to the one that gets you, the customer, to your destination as quickly as possible. And you can even think of a financial exchange as a kind of search product, where you instantly find the most favorable price to buy or sell by putting in a market order, and then produce the revenue that pays for that search product when the order executes.

What newer search products have in common is that they're closer to giving exactly one result. And it's not just an answer, but an answer-and-transaction, because the question is "what's the best version of X to buy?" and the natural response to getting an answer is to buy X.

At the same time, search interfaces are changing in a way that approaches the one-result asymptote. Googling "Coffee Machine" on my desktop shows me six ads and four "organic" results (every one of which is a story from a news site; half are numbered lists of top coffee makers and half are reviews of specific coffee makers, i.e. I can click on six ads and four landing pages for ads). On mobile, I see three ads and no other results above the fold. Some of this is because the search results page is more crowded, with images rather than pure text. And the page gets more crowded still when Google extracts the actual content you're searching for; a search for "when was Microsoft founded" instantly gives the answer, with a link to more context.

And that's still a typing-based interface. Voice search with audio results makes it basically impossible to deliver a broad set of search results, and trivial to offer just one. (The interaction between this and small children has led to a steady stream of royalties for artists who have figured out what kids will yell at Alexa—clearly, SEO is not dead.) Audio interfaces for search are getting more common in part because they're a good way to keep people searching while they're busy driving (Americans spend roughly 55 minutes a day traveling), and those are monetizable minutes!

One-result search works a bit differently from returning multiple results.

One-result search gets even more interesting when it collides with other emerging technologies. Jon Stokes made this point very well a few weeks ago: Every input you give an AI is really a search query. Instead of searching the space of existing content, you're searching the space of possible content—a sufficiently good function for mapping queries onto results can describe those results well enough that they can be generated when they don't exist yet. So you can ask GPT-3 something like "What joke did Lenin make in response to the 2016 election?" and get a serviceable answer ("I've been dead for almost a hundred years, and even I can see that things are going downhill.").
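To make the "searching the space of possible content" framing concrete, here is a minimal sketch contrasting retrieval (rank what already exists) with generation (propose candidates, then rank those). Everything in it, the corpus, the scorer, and the stand-in "generator," is invented for illustration rather than taken from any real system.

```python
# Toy contrast between retrieval search and generative "search".
# The corpus, the scorer, and the "generator" are stand-ins for a real
# index, ranker, and language model.

CORPUS = [
    "Lenin led the October Revolution in 1917.",
    "GPT-3 is a large language model released in 2020.",
]

def score(query: str, text: str) -> float:
    """Crude relevance score: fraction of query words present in the text."""
    words = query.lower().split()
    return sum(w in text.lower() for w in words) / len(words)

def retrieve(query: str) -> str:
    """Classic search: rank *existing* documents and return the best one."""
    return max(CORPUS, key=lambda doc: score(query, doc))

def generate_candidates(query: str) -> list[str]:
    """Stand-in for a language model: propose answers that may not exist
    anywhere in the corpus. A real system would sample from a model here."""
    return [
        f"A plausible answer to '{query}' composed on the fly.",
        f"Another candidate answer to '{query}'.",
    ]

def generative_search(query: str) -> str:
    """'Search' the space of possible content: generate, then pick the best."""
    return max(generate_candidates(query), key=lambda c: score(query, c))

if __name__ == "__main__":
    q = "What joke did Lenin make about the 2016 election?"
    print("Retrieved:", retrieve(q))
    print("Generated:", generative_search(q))
```

A real implementation would swap in a learned ranker and an actual model, but the interface is the same either way: query in, one best result out.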

The nice thing about AI search results is that they solve a subset of the quality problem: sometimes people are looking for something that doesn't exist yet. AI results don't always work here; if you're Googling song lyrics you misheard, it probably isn't helpful for Google to invent a catchy new tune using your mistaken lyrics (though Weird-Al-As-A-Service might find some takers). But for searches where the answer could exist in principle and nobody has specifically written it down, AI can produce the right answer ("Best restaurant near me for a lunch for six people, with both vegan and paleo options on the menu, with an available reservation tomorrow at 12:30pm" is the kind of query that composes data in lots of different places, but it's data that mostly exists online right now—or, at least, enough of it exists to produce a decent result if not the best one.)
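That restaurant query is really a composition of filters over data that already exists somewhere online; a minimal sketch of the idea, with invented fields and records, might look like this:

```python
# Minimal sketch: the "best restaurant" query as a composition of filters over
# structured data that already exists. All records and fields are made up.

restaurants = [
    {"name": "Green Table", "vegan": True, "paleo": True,
     "distance_km": 1.2, "rating": 4.6, "open_slots": ["12:30", "13:00"]},
    {"name": "Grill House", "vegan": False, "paleo": True,
     "distance_km": 0.8, "rating": 4.8, "open_slots": ["12:30"]},
    {"name": "Leaf & Bean", "vegan": True, "paleo": True,
     "distance_km": 3.5, "rating": 4.9, "open_slots": ["19:00"]},
]

def best_match(places, slot="12:30", max_km=2.0):
    """Apply each constraint in turn, then rank whatever survives."""
    candidates = [
        p for p in places
        if p["vegan"] and p["paleo"]
        and p["distance_km"] <= max_km
        and slot in p["open_slots"]
    ]
    # One-result search: return only the top-ranked survivor (or None).
    return max(candidates, key=lambda p: p["rating"], default=None)

print(best_match(restaurants))  # only "Green Table" satisfies every filter
```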

And why restrict search results to the world of the digital? A search for "iPhone case illustrated with Van Gogh's 'Sunflowers' covered in soup" could just return a design, but could also spin up a 3D printer to produce the product itself. It's possible to imagine other software-hybrid outputs, too; a middle ground between building an app yourself and hiring a team might involve using GPT-3-assisted product managers, GitHub Copilot- or Mutable-enhanced developers, and running partly automated QA with a human checking the results. Since the total human input to this would be smaller, it makes sense to plug this kind of request into a more liquid labor market instead of hiring a full-time team. This process might be noisy at first, but it's not crazy to imagine that at some point, you'll be able to input a search—by voice, text, a photo, or Neuralink—and get, as your search result, a functioning car.2

While most search engines tend to approach one-result search, the economics aren't clear-cut. There are a few obvious direct benefits: low-friction results mean more frequent searches, and a de facto ~100% clickthrough rate on the first result means that more of those searches can eventually be commercialized.

But one of the drivers of search economics is that there are multiple paid results for the highest-monetizing queries, and that means more abundant information, both for advertisers and for the search engine, on the economics of those searches.3 With less information, search engines will have more limited ways to capture the profit from high-value searches.
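Footnote 3 goes into the distribution point; as a rough sketch with made-up bids, two queries with the same top advertiser bid can be worth very different amounts per click depending on how deep the bidding is:

```python
# Rough sketch of why the *distribution* of advertiser bids matters. Bids are
# hypothetical, and the auction is simplified to "the winner pays roughly the
# runner-up's bid," which is how a second-price auction behaves.

def revenue_per_click(bids):
    """Simplified second-price auction: winner pays the second-highest bid."""
    ranked = sorted(bids, reverse=True)
    return ranked[1] if len(ranked) > 1 else 0.0

# Many advertisers bidding close together (a "credit card"-style query):
competitive = [9.8, 9.5, 9.3, 9.1, 8.9]

# Same top bid, but the product is close to a local monopoly:
monopoly = [9.8, 0.5, 0.4]

print(f"competitive query: top bid {max(competitive):.2f}, "
      f"revenue/click {revenue_per_click(competitive):.2f}")
print(f"monopoly query:    top bid {max(monopoly):.2f}, "
      f"revenue/click {revenue_per_click(monopoly):.2f}")
```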

There is a solution: search engines can go deeper in the verticals where single-result search is popular. Spotify did this backwards, by monetizing listening and then building voice search later. Amazon goes very far down the stack monetizing product searches, and Google has made progress monetizing informational searches, but both will probably aim to control more of the high-value results themselves over time. So single-result search ends up being a bigger market than traditional search, but also one with higher costs and operational risks.

Beyond the economics of search, it's worth thinking about the social impact. My kids will not remember a world where search isn't ubiquitous, where you can't just Google song lyrics when you don't know the title of the song or use Google Books to track down an obscure quote in minutes. There's cross-generational mutual incomprehension sometimes: I'm impressed when Google can track down a movie they've heard about secondhand with only the vaguest of clues, and my kids are gobsmacked when we can't find something they're looking for. Perhaps by the time they're adults, the experience of looking for something digital and being unable to find it will be foreign, the way kids who grew up with smartphones don't know what it means to be lost.

But that also means they won't have the experience of poking through search results to get multiple views on the same topic; if they don't know how to ask for an alternative, they won't be able to find it. This can happen already in cases where an event gets covered widely and Google weights recency heavily—if a company gets fined and you're interested in other times when they've been fined, you sometimes have to know details of the previous event to get it to show up in search. And in cases where the topic is controversial to some people but uncontroversial to the search engine operator, results will skew strongly towards one view.

What starts to happen over time is that the knowledge graph gets partitioned into a bunch of knowledge line segments, where there's a strong connection between a question and a single answer, but no connection at all between that question and alternative answers. For topics where there is one good answer, that's fine, but in other cases it's hobbling. And it gets even worse if search engines pick up bad data. Siri once told me that dinosaurs first evolved "thousands of years ago," for example (though I haven't been able to replicate this query since).
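One way to picture the "line segments" idea: if each question keeps only its single strongest edge, the alternative answers aren't ranked lower, they're unreachable. A toy sketch with invented weights:

```python
# Toy illustration of the knowledge graph collapsing into "line segments":
# each question keeps only its single strongest edge, so lower-weighted
# alternatives stop being reachable at all. Weights are invented.

graph = {
    "why was the company fined?": [
        ("2022 antitrust fine", 0.9),
        ("2019 privacy fine", 0.4),
        ("2015 tax settlement", 0.2),
    ],
}

def ten_blue_links(query):
    """Multi-result search: every connected answer, ranked."""
    return sorted(graph[query], key=lambda edge: edge[1], reverse=True)

def one_result(query):
    """Single-result search: only the heaviest edge survives."""
    return max(graph[query], key=lambda edge: edge[1])

print(ten_blue_links("why was the company fined?"))  # all three events
print(one_result("why was the company fined?"))      # only the recent one
```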

Search engines are continuously getting smarter, and that's made them much more useful. But the question with any sufficiently smart tool is always whether or not it makes users dumber, and, if so, what to do about it.


Disclosure: I’m an investor in two companies mentioned above—Amazon (public) and Mutable (private).

A Word From Our Sponsors

Tegus is the first port of call for M&A professionals and institutional investors ramping up on an industry or company.

Get access to a database of 35,000+ expert call transcripts, spanning 5+ years, or schedule expert calls through the platform for a fraction of the usual cost.

When thousands of research analysts are pooling their expert calls into an on-demand database, using Tegus is table stakes. It's the leading platform for due diligence and primary research.

See the power of a Tegus subscription, and get up to data parity with your competitors, with a two-week free trial through the Diff.

Elsewhere

Mango Manipulation

Last week's Diff covered the Mango caper, where a trader took advantage of an automated lending protocol and a thinly-traded market to a) inflate the value of a token, b) borrow against the inflated value, and c) keep the borrowed money after the price collapsed. A Substack post identified the perpetrator soon after, and over the weekend, the trader issued a public statement. It begins with "I was involved with a team that operated a highly profitable trading strategy last week," the kind of line that can be read in a tone of either contrition or extreme glee, with the upshot that he believes the trades were legal, that he gave back enough money to make depositors whole, and that he didn't give back all of it.
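For a stylized version of the mechanics (all numbers hypothetical, and ignoring the perp-market details of the actual trade), the core of the exploit is that a lending protocol marks collateral at a spot price the borrower can move:

```python
# Stylized arithmetic for an inflate-and-borrow exploit against a lending
# protocol that marks collateral at spot. All numbers are hypothetical and
# this ignores the perp-market mechanics of the actual Mango trade.

tokens_held = 5_000_000          # attacker's position in the thin token
fair_price = 0.04                # pre-manipulation price, in dollars
pumped_price = 0.50              # price after buying up a thin order book
loan_to_value = 0.8              # fraction of collateral value the protocol lends

collateral_at_fair = tokens_held * fair_price      # what the tokens are worth
collateral_at_pump = tokens_held * pumped_price    # what the protocol thinks they're worth

borrowed = collateral_at_pump * loan_to_value      # withdrawn against pumped marks

# When the price collapses back toward fair value, the collateral backing the
# loan is worth far less than what was borrowed; the protocol eats the gap.
shortfall = borrowed - collateral_at_fair

print(f"borrowed against pumped collateral: ${borrowed:,.0f}")
print(f"protocol shortfall after the collapse: ${shortfall:,.0f}")
```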

This raises some good questions about how laws and norms evolve in different markets. Every market needs to exist in a space where two things are true:

  1. Participants do not believe that their counterparties are actively defrauding them, but

  2. Participants also believe that they can get something right that their counterparties get wrong.4

Many profitable strategies have arisen from someone misunderstanding the nature of a product they're trading. And there are plenty of traditional market analogies to this case, where the mistake was in modeling second-order consequences of market behavior: the traders who recognized that far out-of-the-money put options were mispriced prior to 1987 (i.e. who realized that something causing stocks to drop 20% suddenly would increase volatility a lot along the way, and that it would be hard to dynamically hedge an options position in the conditions where those options were in-the-money) did well, and since then options prices have reflected this insight. But when someone makes a similarly advantageous trade against an unsophisticated and politically sympathetic counterparty, like a retail investor or a local government ($, WSJ), fines and bad PR can claw back all the profits.
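To put rough numbers on the pre-1987 point (a hedged sketch with illustrative inputs, not a reconstruction of how any desk actually priced things): a Black-Scholes put struck well below spot looks nearly worthless at a single flat volatility, but repricing the same option at the higher volatility that accompanies a crash multiplies its value many times over.

```python
# Hedged sketch: price a deep out-of-the-money put under Black-Scholes with a
# flat volatility, then at the elevated volatility that tends to accompany a
# crash. Inputs are illustrative, not a reconstruction of 1987 pricing.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(spot, strike, vol, rate, years):
    """Black-Scholes price of a European put."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * years) / (vol * sqrt(years))
    d2 = d1 - vol * sqrt(years)
    return strike * exp(-rate * years) * norm_cdf(-d2) - spot * norm_cdf(-d1)

spot, strike, rate, years = 100.0, 85.0, 0.02, 0.25   # 3-month put struck 15% below spot

flat_vol_price = bs_put(spot, strike, vol=0.15, rate=rate, years=years)
crash_vol_price = bs_put(spot, strike, vol=0.40, rate=rate, years=years)

print(f"put at 15% vol: {flat_vol_price:.3f}")
print(f"put at 40% vol: {crash_vol_price:.3f}")
print(f"ratio: {crash_vol_price / flat_vol_price:.0f}x")
```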

The sticking point will be market manipulation. It's usually hard to catch and punish manipulation because it requires proving intent, and people can argue that they had some intent other than to manipulate the price when they made particular trades. In this case, there are Discord posts specifically saying things like "You take a long position... And then you make numba go up... And then you withdraw all protocol TVL."

But the question is: should niche crypto products come with an expectation that markets won't be manipulated? Some markets are legally manipulated, as when banks stabilize prices after an IPO. In most markets, manipulation is not accepted. But crypto is a tricky one, because it's global and pseudonymous—if someone is manipulating US stocks, they're working through a broker that's regulated by the US government in some way, and that broker will not want to be party to illegal behavior. But in a decentralized-by-design market, there's no way to exclude bad actors. So cryptoassets that can be profitably manipulated will be, by someone, and banning particular market actors from doing this won't make the problem go away. (If anything, it will make the problem worse, because the manipulators will specialize more, and get better at it, whereas manipulation done opportunistically by people in another line of business tends to stay a sideline—Turney Duff's memoir talks about pumping up stocks on the last trading day of the year to make his fund's annual numbers look better, but that was a small part of what he did.)

The Other Emerging Labor Shortage

One persistent source of alpha comes from counterparties being slow to update quotes after the arrival of new information. If a stock is trading at $20.10, and a trader has a limit order to buy at $20.01 in order to get a better price, and there's suddenly new information that moves the stock to $15, that limit order is free money for whichever trader is able to hit it first. Usually, these opportunities are small, fleeting, and the result of sloppiness. But sometimes they're bigger, and driven by regulation.
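The arithmetic in that example, spelled out:

```python
# The stale-quote arithmetic from the paragraph above: a resting limit order to
# buy at $20.01 doesn't move when news reprices the stock to $15, so whoever
# sells into it first captures the gap.

stale_bid = 20.01        # resting limit order, placed when the stock was $20.10
new_fair_value = 15.00   # value after the news arrives

def edge_from_stale_quote(bid: float, fair_value: float) -> float:
    """Per-share profit from selling to a bid above fair value."""
    return max(bid - fair_value, 0.0)

print(f"${edge_from_stale_quote(stale_bid, new_fair_value):.2f} per share")  # $5.01
```

The bigger, regulation-driven versions of this work the same way, just on a longer timescale.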

For example: some pension plans allow people to cash out with either a fixed payment or a lump sum. The value of that lump sum is determined by an IRS calculation, and the result of that calculation is announced well in advance of when pension recipients can elect to retire and get a lump sum. So some experienced executives are retiring this year to take advantage of 2021 interest rates ($, WSJ).
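The mechanics are just present value: the lump sum approximates the discounted value of the fixed payments, so electing it while the calculation still reflects 2021's lower rates makes the same pension worth more. A rough sketch with made-up figures (the real IRS calculation uses segment rates and mortality tables):

```python
# Rough sketch of why the lump-sum election is rate-sensitive. The real IRS
# calculation uses segment rates and mortality tables; this just discounts a
# fixed annuity at two hypothetical rates.

def lump_sum(annual_payment: float, years: int, rate: float) -> float:
    """Present value of a level annual payment for a fixed number of years."""
    return sum(annual_payment / (1 + rate) ** t for t in range(1, years + 1))

payment, years = 60_000, 25      # hypothetical pension: $60k/year for 25 years

at_low_rates = lump_sum(payment, years, rate=0.025)    # 2021-style discount rate
at_high_rates = lump_sum(payment, years, rate=0.050)   # 2022-style discount rate

print(f"lump sum at 2.5%: ${at_low_rates:,.0f}")
print(f"lump sum at 5.0%: ${at_high_rates:,.0f}")
print(f"difference: ${at_low_rates - at_high_rates:,.0f}")
```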

A lot of the labor shortage narrative this year has been driven by high turnover in entry-level jobs, which certainly leads to economic frictions. But turnover among experienced people, especially if they're all leaving at once, means that companies will be deprived of tacit knowledge and accumulated professional networks. Per lost worker, that's a bigger problem.

China's Labor Shortage

Sudden retirements aren't just a policy mistake; sometimes they're a policy tool. One component of the US's new restrictions on chip exports to China restricts US nationals from working in the Chinese chip sector ($, WSJ). The first-order impact is that a lot of them presumably plan to quit and move back to the US, once again taking tacit knowledge and networks with them. (And as various high-dollar failures to keep up with TSMC have demonstrated, chip fabrication is very tacit knowledge-intensive.5) But the second-order impact is that China's policymakers can only pursue their current goals if they hoard talent as well as capital, which changes their incentives for a long time.

For an earlier Diff look at how sanctions are increasingly all-or-nothing, see this piece from March ($).

Unspinning

News Corp and Fox Corp (the remnant of 21st Century Fox after the Disney deal), which split apart in 2013, are considering merging back together again. The media industry seems unusually fertile ground for this kind of transaction; Paramount's corporate history includes Paramount owning half of CBS in the 1920s, CBS spinning off Viacom in 1971, Viacom acquiring Paramount in 1994, the combined Viacom/Paramount buying CBS in 2000, Viacom spinning CBS off in 2006, and then re-merging in 2019. Value capture in media shifts between content and distribution; that sometimes makes it optimal to spin off a unit in order to get a higher multiple for it, and sometimes means companies want to merge back together to have enough scale to compete and to control more of their own destiny.

Incidentally, this is a good example of a topic where Google's emphasis on timely results gets annoying: a search for "news corp spins off fox" has seven basically identical stories about the proposed merger, followed by a link to the Wikipedia article on News Corp, the text of which gives details on the original spinoff.

Bad News

Meta is ending support for the Instant Articles format, replacing it with standard links to publishers' mobile sites. When Instant Articles started, it was a way to reduce the friction of opening articles through Facebook; the Facebook app was zippy, but the median news organization didn't invest the same resources in responsive design, and Facebook didn't want users switching out of their app because of someone else's slow code. But now, it's better for Facebook-the-product to increase the friction of reading news and decrease the relative friction of things like sharing family photos. So it's still a story about Meta retrenching in its core apps, but maybe 1% of it is about reducing the engineering resources used to maintain a feature while 99% of it is about controlling user attrition from the app.

Diff Jobs

Companies in the Diff network are actively seeking talent! If you're interested in exploring growth opportunities at unique companies, get in touch!

Are you hiring and looking for access to a unique pool of passive candidates? Please reach out if so!


  1. Mostly. A weird edge case is Spotify, which has lots of recordings of music and associated lyrics; i.e., they have a giant corpus of speech that's matched to text and has all sorts of distortions and background noises to remove.

  2. One can imagine darker versions of this: "novel pathogen, r-nought of 10, 30% CFR, trending on STATnews."

  3. A search engine mostly cares about the average incremental profit per click on a given search, but also cares about the distribution. If there's a query with a very smooth distribution—if you wanted to fill 100 ad spots for "credit card," you could—then the search engine can use autocomplete to nudge searchers toward better-monetizing long-tail searches based on the same term, and can encourage more searches for the general topic. If there's a steep dropoff, where the product is a local monopoly, then the search engine operator knows that additional search volume won't produce much profit for them. But in some cases, they may find that the distribution of searches is amenable to change! If e-commerce profits are partly a function of the ability to cost-effectively offer 2-day delivery, for example, a savvy search engine will want to commoditize delivery and increase the number of competitive merchants. That's such a big task that you might not even think of that search company as being in the search business, mistaking it for a retailer, a cloud computing company, a streaming video platform, a bookstore, or something.

  4. This holds true even if the counterparty is a pure liquidity provider: the bet they're making is that they can sell liquidity at a favorable price. If they didn't think so, they'd be providing liquidity in some other market instead.

  5. The fun examples of this all date back to earlier in the industry, when chips were bigger and the phenomena affecting them were things that non-physicists were aware of. Early Intel found that pollen levels could hurt yields, and that they could statistically determine which employees didn't wash their hands regularly. Now, the potential errors are smaller in scale and subtler, which also means they're harder to spot. If you say "why are yields so low today?" and someone sneezes, you might suddenly realize you got the answer, but if it's some complex interaction between half a dozen automated and human-driven processes, the intuitions are only available to people with experience.