Who was First to Situational Awareness?

In this issue:

  • Who was First to Situational Awareness?—In retrospect, Google was building a full-stack AI company all along.
  • The Short-Selling Cycle—Short sellers have had a tough decade and a tougher year, but their business has a cycle of its own.
  • Maturity-Matching—If a fund can get investors to commit for up to half a decade, they can expand into strategies that take advantage of that kind of stable funding.
  • Fundamentals Manipulation—You really don't have to worry about manipulation in "mention markets."
  • Tariffs and Meta Mean-Reversion—A market-based approach to achieving left-populist redistribution by way of right-populist policies.
  • FDEs—If you can add another coefficient to your growth model, it's a big deal.
The Diff, November 3rd, 2025

Who was First to Situational Awareness?

In the Sam Altman biography The Optimist, Altman recalls playing around with his first computer as a kid: "I just remember thinking that someday the computer was going to learn to think." Before Facebook, Mark Zuckerberg and Adam D’Angelo worked on Synapse, a music recommendation service. They chose the domain “synapseai.com” to host it.[1] A profile of Demis Hassabis quotes him saying "I wrote AI opponents for Othello, as chess was too complicated for the machine to run, and it beat my younger brother." And in a 1993 interview, when Bill Gates was asked what was coming in the future, his answer was: “Well, if you look out far enough the computer will eventually learn to reason in somewhat the same way that humans do, so-called ‘artificial intelligence.’”

So, for many of the people and companies involved in AI, it’s a bit of a homecoming: they were interested in computers at a young age, were impressed with what computers could do, wondered what the outer limit of that was, and decided that whatever that limit was, it would some day pass through a state where it was human-level.

And then they started social networks, wrote video games, shipped a BASIC interpreter that could run on the Altair, etc. AI would have to wait, though once it was more viable they and the organizations they founded went increasingly all-in.

As we discussed in our piece on AGI, the nature of the AGI bet, and the likely value creation mechanism: AGI is not just about Nobel Prize winners in a datacenter; that’s a cool side effect of building a digital twin for the entire economy—the economy itself being a distributed computing system that is admirably effective at matching wants and needs to the best available solution despite the drawback of being made entirely of fallible, imperfectly-informed human beings. Improvements in this superintelligent system, in the form of physical capital formation and productivity growth, allow for better, more complete solutions to a larger number of more complex problems over time. To the extent that this process is automated, it’s a more measurable, improvable one.[2]

Our argument in the piece is that with the app-use/router evolution OpenAI announced a few weeks back, the aggregation and matching mechanism that will make the plurality of problems and solutions in the economy legible to computation has been created. The core world-changing insight has been achieved! Now comes the hard part. That router is only as good as the consumer intent data it can capture and the legibility of available product offerings. You need some way to get people to explicitly express intent or implicitly hint at it, a comprehensive way to access the economic endpoints that can meet those needs, and the compute necessary to match them.
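
A minimal sketch of what that matching step could look like, with everything invented for illustration (the endpoint names, the embeddings, the scoring rule): express intent and available endpoints in a shared vector space, then route each intent to the nearest endpoint.

```python
# Toy sketch of the router idea (all data invented): score a user's expressed
# intent against a catalog of economic "endpoints" and route to the best match.
import numpy as np

rng = np.random.default_rng(2)

endpoints = ["book_restaurant", "refill_prescription", "hire_plumber"]
endpoint_vecs = rng.normal(size=(3, 8))   # stand-ins for learned embeddings

# A noisy "date night" intent: close to the restaurant endpoint, but not exact.
intent_vec = endpoint_vecs[0] + 0.3 * rng.normal(size=8)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(intent_vec, v) for v in endpoint_vecs]
print(endpoints[int(np.argmax(scores))])  # routes to book_restaurant
```

The real system's difficulty is in the inputs, not the argmax: capturing intent at all, and making enough endpoints legible to score.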

The router thesis is that in the end, the social dividend of the AI buildout will show up increasingly in more efficient commerce; LLMs that can write code and emails are a nice proof-of-concept, but not the final product. (This is an echo of the early development of the computer, where initial use cases were more self-contained—running payroll or selling the seats on a plane flight were already information-processing tasks that could be neatly digitized, but the bigger gains came later, once less self-contained processes were rebuilt around what computers could do.)

What that router argument didn’t discuss is what will lead to maximal value capture. We know that AGI, if possible, will create lots of value in the form of lowering transaction costs across the economy, but which company currently has the highest likelihood of capturing the most value from AGI, or cheap intelligence? In our view, the winner will be the company that most effectively deploys capital and talent to execute comprehensively across three axes:

  1. Building the highest quantity, quality, and variety of intelligent sensors that produce valuable information about, and legibility into, the supply and demand of everything. Not just the easy stuff, like figuring out the market price for a ton of nickel or an ounce of silver, but the dark matter of identifying the right exchange rate between on-the-job satisfaction and the approval of peers, or the point at which the wealth-maximizing option is to call in sick and spend the day hiking. When two parties transact, the terms on which they do so create a low-dimensional projection of infinitely complex inputs—OpenTable sees a dip in your second-week-of-February clickthrough rate for emails about snagging a Valentine’s Day reservation, receiving a few lossily compressed bits of information about heartbreak. The decomposition of these higher-order representations into specific actions is not easily priced or understood by computers, and sometimes not even by the actual people involved.[3] This economic dark matter must be made legible across all scales—individuals, institutions, the entire economy, etc. Running that principal component analysis on all human behavior is not a trivial problem (the toy sketch after this list makes the point). Even tiny corners of it get complicated! YouTube has produced an immense amount of consumer surplus in the form of entertainment and the transmission of tacit knowledge, and it can increasingly—first with recommender models, now with video models and LLMs—understand and decompose that dark matter (tacit knowledge) and make it useful to a computable coordination system. On the enterprise software side, intelligent sensors are more primitive still. B2B software companies (Salesforce, for example) produce a lot of information about economic dark matter—email tone, response time, call length, etc.—but weren’t originally built from first principles around intelligent models that can really understand that dark matter and put it to work. This is changing fast, of course, both through incumbents and through AI-native challengers.

  2. Building the general intelligence system that has access to, processes, and makes useful all the information produced by these intelligent sensors of economic dark matter—a system that powers and incentivizes the creation of new sensors to collect more, that most completely aggregates and computes this increasingly legible economic dark matter, and that coordinates it in more and more effective ways.[4] Every step of this Great Legibilizing is both a step towards AGI and an incremental improvement in total factor productivity.

  3. Access to, and ideally ownership of, the largest, most performant, and most reliable computers, both to power the intelligent sensors and the general intelligence system and to improve their underlying capabilities. As computer intelligence gets integrated into more products, both in the production process and in the final output, it becomes a more fungible good, like energy.
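
To make the "low-dimensional projection" and PCA language above concrete, here's a toy sketch (all numbers and the latent-factor story are invented for illustration): when many observed behavioral signals are secretly driven by a few latent factors, a principal component analysis recovers the compact structure.

```python
# Toy sketch, not a real pipeline: 50 noisy behavioral signals
# (clickthroughs, dwell times, reservation patterns) that are secretly
# driven by 3 hypothetical latent factors (mood, budget, free time).
import numpy as np

rng = np.random.default_rng(0)
n_users, n_signals, n_latent = 1_000, 50, 3

latent = rng.normal(size=(n_users, n_latent))      # hidden drivers
mixing = rng.normal(size=(n_latent, n_signals))    # how drivers surface in behavior
observed = latent @ mixing + 0.1 * rng.normal(size=(n_users, n_signals))

# PCA via SVD on centered data: variance explained per component.
centered = observed - observed.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()
print(explained[:5].round(3))  # first 3 components dominate; the rest is noise
```

The hard part of the thesis isn't the linear algebra; it's that the real "matrix" of human behavior is enormous, nonstationary, and mostly unobserved. The sketch only illustrates why owning more sensors (more columns) makes the latent structure easier to recover.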

The meta-axis is vertically integrating/owning as many as possible of the discrete pieces/inputs involved in the first three. We can call this vertical integration axis 4. It is the most important because there are increasing returns to scale to owning each of these axes: whoever has the best data can pay for the most compute, and whoever has the best distribution can monetize the other two even better. The more roles the company plays in its ecosystem, the more it can also subsidize the creation of whatever the limiting factor is—maybe building custom hardware in part to suit its own needs and in part to have pricing power when it’s buying from third parties, perhaps strategically under-monetizing some of its businesses to lubricate a data flywheel. There will be more and more complementarity, efficiency, and utility generated for every component it owns. This is what will allow it to execute across these axes of AGI without relying on elements outside its control—and that certainty means it can put a lower discount rate on investments that further entrench it. The alternative is to be an economic captive of whoever controls whatever the critical complementary product happens to be.

In thinking about these dimensions of AGI competition, one company comes to mind as the one that’s trying to lock all of them down—OpenAI has a chatbot that users share their darkest secrets with, in addition to lots of more prosaic questions that sketch out both the topics they care about and the limits of their own reasoning, and it’s trying to expand into video, too. They have good-to-best-in-class models. And they’ve locked down unprecedented amounts of capacity, or at least gotten other companies to promise they’ll provide it in exchange for which OpenAI promises to raise or earn (in that order!) enough to pay for it.

But in a sense, all of that is just playing catch-up to Google. On the three axes of knowing what people are looking for, having models to interpret it, and having the computing power necessary to solve for these answers, Google is better-positioned than anyone else. And on the meta-axis of vertical integration across these, they are the only player at all. That’s because it’s what Google has been building all along.

Larry and Sergey were the first to situational awareness: Google was always a bet on cheap, abundant intelligence. A decade ago, they tried to rebrand for investors as “Alphabet,” a holding company that did search among many other things. But many of those businesses are fundamentally closely related. The Alphabet structure was a way to separate the accounting for the core businesses from the miscellaneous science projects, and thus a way to signal to investors that they were going to be disciplined about capital and return it to shareholders rather than frittering it away. This synthesis of Montessori and Morgan Stanley assumed that Google’s various businesses would have different priorities over time, but it turned out that one moonshot, artificial intelligence, would be integrated into everything else, and that another winner, autonomous vehicles, was also closely (and not coincidentally) complementary.

Larry Page said in 2000 that artificial intelligence would be the ultimate version of Google: the computer that understands what you need and gives you exactly that when you need it. Search itself is a proto-AI product. A search is either “find what I’m thinking of” or “find what I would be thinking of if I thought a lot harder.” So it’s replicating either human memory or human cognition, and the better it gets, the closer it comes to superhuman skill at both.

Over the past three years, shrewd investors came to believe that Google had fallen behind on models (at least models they were willing to publicly release) after the GPT-3 API launched; was falling behind on distribution after ChatGPT launched and scaled; and, after the announcement of OpenAI’s partnership with Microsoft, the Stargate initiative, and the last few months’ blitz of deals, was falling behind on infrastructure as well.

The story goes: OpenAI was the first company to truly scale up and productize LLMs, and in doing so it was able to create, power, and own the best new intelligent sensors, as well as the general intelligence system that could understand previously illegible information and make it useful in a profoundly new way. With every research breakthrough and product release, OpenAI accelerated improvements in the intelligence system and increased the quantity and quality of the sensors its system powered, owned, or had access to, and of the information they produced. On the underlying intelligence side, they scaled the models up further and further, invented and implemented reasoning, and pioneered tool use and routing. They also made more and more economic dark matter legible to the system. Individuals began interacting with it, and more and more of them continue to do so at an increasing rate and in more contexts. Other firms began powering their sensors with OpenAI’s API. With the introduction of reasoning and external tool use, intelligent machines (some with access to lots of information and sensors of their own, like search and code interpreters) began interacting with it as well; most recently, with app-use/routing, entire firms and systems are making their products, their end-to-end services, and all the intelligent sensors and dark matter they understand accessible to OpenAI.

They were then able to parlay that lead into capital, and turn that capital into infrastructure spending. There is a qualitative difference in spending as the numbers get bigger: at first, it’s OpenAI paying up to get capacity, but after a while, it’s OpenAI locking down enough capacity that there simply isn’t more to go around. It was increasingly easy to extrapolate to a world where OpenAI was the default share-taker; that made them the default partner for everyone else in the value chain, and that aggregation made them unbeatable.

All of that is fair, and they do have incredible momentum, but investors have also repriced Google. What changed? Since April 2025, Google’s stock is up 100%, significantly outperforming peers and potentially even outperforming OpenAI’s valuation increase over the same period. Investors have started to come around to the idea that Google may not actually be behind at all. In fact, sentiment after the Q3 print is starting to resemble something very different: the idea that Google is ahead in the areas that matter most, is increasing its rate of execution, and that this will be a marathon, one that requires balanced progress across all the axes of AGI over a very long period to finish and win.

OpenAI started an all-out sprint on models and distribution, but is still relatively early to infrastructure and especially to vertical integration. Google, on the other hand, has been running the marathon since 1998, and is showing signs not just that it is ahead, but that all the running over the last 27 years has produced a nice corporate VO2 max. They just have a lot of operating cash flow, compounding technical advantages, and talent to throw at the problem of winning the AGI race. It’s basically a high-Sharpe-ratio portfolio of AI bets, one that risks less from disintermediation than anyone else’s. If models are a commodity and customer count wins, then having a portfolio of billion-user products is great. If models are the limiting factor, Gemini is a contender. If it’s about compute, Google is very efficient at turning dollars into tokens, relies on very few external parties to do so, and will only get more efficient as it amortizes chip-design R&D over more spending. They’re making all the economic bets at once.

One quick pushback people have is on talent. Google may have the best existing portfolio of bets, but does it have “founders” who can effectively and intensely steward those bets for maximum value creation and capture in the future? Herein lies a misunderstanding. Google’s bets on AGI (per our axes: models, distribution, infrastructure) almost all retain their founders. And unlike other companies, Google has the odd feature of doing a really great job of attributing particular inventions and initiatives to people outside the “true” founding team; this was also part of the goal of the Alphabet restructuring. Jeff Dean and Greg Corrado created the first distributed infrastructure system for training and running deep learning models, DistBelief, in 2011. Four years earlier, in 2007, Dean re-architected Franz Och’s original, DARPA-winning machine translation algorithm to run in parallel on Google’s distributed infrastructure. This materially sped up machine translation, allowing it to be productized as Google Translate and marking the first “large” n-gram language model used in production at Google. Dean and Corrado also founded Google Brain, which, with the legendary “cat paper,” TensorFlow, and later “Attention is All You Need,” catalyzed the commercial deep learning, recommendation algorithm, and LLM revolutions. Techniques invented at Brain and first productionized by Google still make up a large percentage of monetizable deep learning workloads in production today (think TikTok, Reels, ChatGPT, etc.). Demis has led more breakthroughs in reinforcement learning than perhaps any other researcher. Sundar led Chrome, which has probably done more than any other Google product to increase the company’s information-capture surface area. Norm Jouppi was the technical mastermind behind the TPU. These people are all leading Google today: Sundar as CEO, Norm as technical lead of the TPU unit, Noam Shazeer and Jeff Dean as Gemini co-leads, Demis as CEO of the entire AI unit, and even Sergey Brin working closely on Gemini (in the office every day!).

What really makes this work is that, aside from their fairly straightforward hardware and vertical integration moats, Google is having a surprisingly easy time handling the strategic disruption inherent in integrating LLMs into search, a business plenty of people have speculated LLMs will kill. One of the risks, for example, is that AI search results would answer the questions that used to be answered by the paid ad a user clicked on. Google has long (though not always) been an implicit bet that organic search quality is a complement to ad clicks.[5] If AI-generated search results are worth the cost for Google, they’re getting an incredible return on investment, because all that activity means that Google is collecting more training data. If things that would be worth doing at a loss for strategic reasons are turning a profit instead, that’s a strong economic signal that they’re surviving the transition quite nicely. But in the end, that might be the default explanation—how would you handicap a race where one participant has a quarter-century head start?


Disclosure: Long GOOGL.


  1. He also benefited from the fact that really smart people have found the idea of AI compelling long before it was practical; he took an intro AI class at Harvard, and later hired the class’s TA, Andrew Bosworth. ↩︎

  2. It’s hard to argue that we’ll ever solve economics, because improvements in information technology tend to create more complex wants faster than they optimize solving for existing ones. As always, if sufficient hardware is the limiting factor on a planned economy, you’ll see it in Amazon’s 1P/3P mix. And if you don’t see it, it’s not there. ↩︎

  3. And it’s not as if one of those two parties has a strict advantage in every instance! One of the bracingly unpleasant features of algorithmic feeds is finding out, to your dismay, what kinds of things you’re objectively most likely to keep coming back to. ↩︎

  4. This happens economy-wide all the time, as unspoken understandings slowly get digitized and optimized. Status as “a regular” used to be binary, and existed in people’s heads, but if your favorite restaurant uses Toast, SevenRooms, or OpenTable it has a much more precise, rigorous, and contribution profit-maximizing notion of how loyal a customer you are. ↩︎

  5. In a way, this is an incredibly optimistic argument. Specifically, it’s true to the extent that regulations stop bad behavior (e.g. selling products that don’t work) and that companies with a high customer lifetime value can raise money to go out and capture it. In that efficient market, winning bidders deserve to win, and search is just the mediator. That reference to customer lifetime value implies that in this world, search profits are at least partly a one-time windfall as everyone gets sorted into all the recurring revenue relationships they’re ever going to need. But that’s also a world with enormous financial incentives to create the next alternative—one where the profitable existence of a Blockbuster creates a niche for a Netflix. It gives Google an oddly public-spirited set of incentives if good government and efficient markets directly benefit their business this way. ↩︎

Diff Jobs

Companies in the Diff network are actively looking for talent. See a sampling of current open roles below:

  • YC-backed, ex-prop trader founder building the travel-agent for frequent-flyers that actually works is looking for a senior engineer to join as CTO. If you have shipped real, working applications and are passionate about using LLMs to solve for the nuanced, idiosyncratic travel preferences that current search tools can't handle, please reach out. (SF)
  • A hyper-growth startup that’s turning the fastest growing unicorns’ sales and marketing data into revenue (driven $XXXM incremental customer revenue the last year alone) is looking for a senior/staff-level software engineer with a track record of building large, performant distributed systems and owning customer delivery at high velocity. Experience with AI agents, orchestration frameworks, and contributing to open source AI a plus. (NYC)
  • Well funded, Ex-Stripe founders are building the agentic back-office automation platform that turns business processes into self-directed, self-improving workflows which know when to ask humans for input. They are initially focused on making ERP workflows (invoice management, accounting, financial close, etc.) in the enterprise more accurate/complete and are looking for FDEs and Platform Engineers. If you enjoy working with the C-suite at some of the largest enterprises to drive operational efficiency with AI and have 3+ YOE as a SWE, this is for you. (Remote)
  • Ex-Bridgewater, Worldcoin founders using LLMs to generate investment signals, systematize fundamental analysis, and power the superintelligence for investing are looking for machine learning and full-stack software engineers (Typescript/React + Python) who want to build highly-scalable infrastructure that enables previously impossible machine learning results. Experience with large scale data pipelines, applied machine learning, etc. preferred. If you’re a sharp generalist with strong technical skills, please reach out.
  • Fast-growing, General Catalyst backed startup building the platform and primitives that power business transformation, starting with an AI-native ERP, is looking for expert generalists to identify critical directives, parachute into the part of the business that needs help and drive results with scalable processes. If you have exceptional judgement across contexts, a taste for high leverage problems and people, and the agency to drive solutions to completion, this is for you. (SF)

Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up.

If you’re at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.

Elsewhere

The Short-Selling Cycle

Trading strategies are sometimes cyclical, because capital flowing into a strategy will benefit the positions of whoever's already implementing it, and this works in reverse, too. So it's an interesting indicator that profiles of shorting-focused investors make them sound so beleaguered right now. The absolute level of retail investor participation in the market is part of what fuels short sellers' returns, which is one reason the shorting business got tougher as markets got more institutional. But an increase in retail participation also tends to be bad for short sellers in the near term, since retail investors tend to be less valuation-sensitive than institutions (or, at the other end of the spectrum, more sensitive to GAAP earnings and dividends and less to free cash flow and buybacks), and it can lead to short squeezes. If both of those traits are true, then retail investors are making negative-EV bets with the occasional positive payoff, i.e. they're taking exactly the same return profile a casino offers. So one bull case for short selling right now is that there are more casino-like options than before, and they'll inevitably cut into equities' market share.
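
To put rough numbers on the casino analogy (the odds here are invented for illustration), a bet can feel attractive, because it occasionally pays off big, while still losing money in expectation:

```python
# Toy sketch of a negative-EV, positive-skew bet: rare big wins, steady small losses.
import numpy as np

rng = np.random.default_rng(1)
p_win, payout, stake = 0.01, 80.0, 1.0   # hypothetical: 1% chance of an 80x payout

print(f"EV per $1 staked: {p_win * payout - stake:+.2f}")  # -0.20: negative EV

# Simulated bettors: most lose the stake, a few hit the jackpot.
results = np.where(rng.random(100_000) < p_win, payout - stake, -stake)
print(f"realized mean: {results.mean():+.3f}, winning bets: {(results > 0).mean():.1%}")
```

The house collects roughly 20 cents per dollar on average; the occasional 80x winner is what keeps bettors coming back.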

Maturity-Matching

Millennium is launching a fund that focuses on less liquid assets, like "corporate and asset-backed debt, real estate and low-correlation strategies." The Millennium model tends to focus on liquid strategies that can be unwound quickly, but over time they've extended how long LPs need to keep capital in the fund. There's a cost to providing more liquidity than your portfolio can support, because when there are enough withdrawals it leads to a run on the bank. But there's an opportunity cost to locking up capital over long periods and not taking advantage of that time horizon.

Fundamentals Manipulation

Ahead of Coinbase's Q3 earnings call, there was a Polymarket market on whether or not their CEO would use certain words or phrases. And, at the end of the call, he rattled off some of the words. The usual pattern here would be that if someone with a large audience talks about an asset in a way that moves its price, there's a decent chance they were manipulating the market. But in this case, it was manipulating the fundamentals. This particular category of market is obviously prone to information asymmetries, but those only matter to the extent that the markets have meaningful liquidity (this was a five-figure market, and the CEO in question is worth about $15bn). For rigged mention markets to be an important problem, there would need to be a cohort of public figures who are famous enough to justify a market, not so rich that five-figure stakes are beneath them, and money-motivated enough to bother. For everyone else, it's a fun minor prank.

Tariffs and Meta Mean-Reversion

The WSJ notes that tariffs haven't had a big impact on inflation, in part because they have more exemptions than expected, but also because companies are absorbing the cost ($, WSJ). And one of their explanations for that is that post-pandemic, margins went up for just about everyone. Back in early 2023, The Diff asked how permanent the post-pandemic margin reset was, and noted that one of the forces pushing it down is politics: if revenue is rising faster than costs, and labor is the biggest component of costs, then workers will complain that they aren't participating in economic growth. That would normally lead to a more left-populist swing among voters, but a right-populist got elected, made a policy decision whose economic cost could be borne by either companies or consumers, and let those companies make the call that it was politically safer for them to lose a point or two of margin than to be part of the political narrative.

FDEs

AI labs are hiring more forward-deployed engineers to actually get their products into the hands of businesses ($, FT). The FDE model of having an embedded engineer/consultant who finds every possible way to use a product is nicely complementary to a usage-based pricing model. In an ideal case, the FDE finds some way to, for example, make salespeople 10% more productive and capture 10% of that through more token use. Then the company they're working with will add sales staff, and as the new salespeople get better at figuring out what to outsource to AI, their individual token consumption can grow. If the usual enterprise formula is that you combine positive net dollar retention with new logos to achieve growth that's the product of those two trendlines, AI labs get a third coefficient to multiply because they can also aim for higher revenue per user from the same set of products. With all that upside, it makes sense to take a temporary economic hit by paying some very expensive engineers to patiently dissect customers' business processes and figure out which ones could use a little speeding up.
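
As a back-of-the-envelope sketch (all numbers invented), the difference between two growth coefficients and three compounds quickly:

```python
# Hypothetical growth decomposition for a usage-priced AI product.
# Classic enterprise SaaS: growth ~ net dollar retention x new-logo growth.
# The FDE/usage model adds a third factor: revenue-per-user expansion.

ndr = 1.15        # assumed net dollar retention on existing accounts
new_logos = 1.20  # assumed growth from newly signed customers
per_user = 1.25   # assumed per-user token-consumption growth (the FDE's job)

print(f"two coefficients:   {ndr * new_logos:.2f}x / yr")              # ~1.38x
print(f"three coefficients: {ndr * new_logos * per_user:.2f}x / yr")   # ~1.72x
```

Even a modest third coefficient, compounded over a few years, pays for a lot of expensive forward-deployed engineering.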