Longreads + Open Thread

Longreads

  • A Letter a Day has Gordon Moore's first shareholder letter from Intel, in 1975. It talks a bit about the company's early strategy of using memory to scale production, and then expanding into logic. And some things never change: the letter talks about how chip buyers were temporarily overstocked, but had mostly worked off their inventory, and that demand would improve soon. One notable thing: for the first few decades, buying and holding Intel while waiting out the cycle worked well for shareholders—but only because Intel itself didn't wait out the cycle passively; it actively managed its own timelines and costs to maintain growth.
  • Tanner Greer writes in Palladium about America's cultural inheritance from before the managerial revolution, when institutions were more synonymous with the individuals who made them up. One striking claim: "For this generation of state-builders, the distinction between “state” and “society” so central to the modern conception of the public sphere felt awfully thin. Little wonder that nineteenth-century Americans resorted to vigilante violence so readily: there was little outward distinction between turning a criminal over to their community’s police forces, judicial officials, and jurymen on the one hand and turning a criminal over to their neighbors on the other—the two groups of people were exactly the same."
  • Richard Hanania argues, correctly, that the rise of generative AI will make established media stronger since we can no longer assume that photographs and videos are genuine. The technological solution is to have some kind of cryptographic chain of custody, but only a tiny demographic will say things like "I'm just going to run a couple shell scripts before I retweet this," and anything disproportionately appealing to that audience is disproportionately unappealing to a much larger audience. Instead, we'll have the social chain of custody: did an institution most people trust decide that this is real? It's a pessimistic outlook, since reputable media outlets have been hoaxed before. On the other hand, this lower-trust environment only affects people who make up their minds about important issues based on photos and video clips. As Hanania points out, we've had "deepfakes for text" for a long time—you can type whatever wild claim you want.
  • Jon Stokes writes a highly entertaining piece on the ethnography of AI safety. One fun and significant fact about AI is that opinions on it don't neatly collapse into a red/blue tribe divide. In fact, there's fractal tribalism all the way down. It's impossible to write such a piece without one's own biases showing through—none of us are above the fray!—but with that in mind it's a fair look at the many, many sides to the debate. I suspect that AI debates, like Covid, will start out with a confused partisan valence and then end up with some kind of ideological flavor after a while, but it's hard to tell who will be on which side. Or, if you're optimistic and/or catastrophist enough, you might argue that AI will be the only issue, and that all other concerns will be subsidiary to it.
  • This piece from Stripe on fraud detection is worth reading both as an overview of how big ML projects get built and rebuilt, and as a look at what software development will increasingly become. The goal is not just to minimize fraud and false positives, but also to have a model that can explain its mistakes. The more elaborate an ML model gets, the less likely it is that any single feature has a human-understandable explanation. They ran into this even with model improvements: "[W]e introduced a Boolean feature capturing whether the business was currently under a distributed fraud attack. This feature didn’t improve our model’s performance as much as we’d anticipated. As it turned out, our ML was already incorporating these patterns, even though we never expected it to."
  • In this week’s Capital Gains, we look at factors like momentum and value, and how investors can think about them even if they aren’t quants.
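The "couple shell scripts" provenance check mentioned above can be sketched in a few lines. This is an illustrative toy, not any real provenance standard: the filenames and keys are invented, and in practice the publisher-side steps would happen once inside a camera or newsroom pipeline, leaving only the final verification step for the reader.

```shell
# Toy cryptographic chain of custody (illustrative only; filenames invented).
# Publisher side: generate a signing keypair and sign the image once.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out publisher.key 2>/dev/null
openssl pkey -in publisher.key -pubout -out publisher.pub
printf 'stand-in for image bytes' > photo.jpg
openssl dgst -sha256 -sign publisher.key -out photo.sig photo.jpg

# Reader side: the "run a couple shell scripts before I retweet this" step.
# Prints "Verified OK" if the image matches the publisher's signature.
openssl dgst -sha256 -verify publisher.pub -signature photo.sig photo.jpg
```

Real-world efforts along these lines (such as C2PA) embed signatures and edit history in the file's metadata, but the core trust question (whose public key do you believe?) remains social rather than technical, which is the point of the piece.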

Books

  • The Confusion: Neal Stephenson's Baroque Cycle is up there with The Years of Lyndon Johnson in the pantheon of hefty multi-volume works that explore a storyline but really constitute a long meditation on more abstract historical forces. In the case of the Cycle, the big themes are the importance of the Enlightenment and the nature of money. The book toggles back and forth between a picaresque novel about piracy and an exploration of how finance and banking worked in the early 18th century. All of which is surprisingly modern—an important plot point revolves around what happens to the economy when a major bank's obligations are no longer seen as money-good. The book is technically historical fiction, and it does have a plot, but one of the main purposes of the fictional characters is to create an internally-consistent story that involves as many actual historical figures as possible from the time period.

A Word From Our Sponsors

Tegus helps investors keep a pulse on investments, source new opportunities, and map markets—all at a fraction of the cost of other research tools. In fact, Tegus is the only investment research platform that gives you…

  • Access to custom-sourced experts at an average of $300 per call, compared to legacy networks’ exorbitant $1000+
  • The ability to flex your call length—30 min, 60 min, or more—based on your needs
  • Deep private company data from seed to large and mature companies
  • Robust benchmarking, charting, and comps

Sign up for a free Tegus trial and get the insights you need today.

Open Thread

  • Drop in any links or comments of interest to Diff readers.
  • Which jobs are good candidates for being eliminated by AI, and which are better candidates for seeing significant productivity improvements instead? It's certainly easier to find a job as a programmer since the advent of high-level languages, even though they've replaced a large fraction of what used to be the programmer's job.
  • The Diff AngelList syndicate, Inflections & Co., just wrapped up its latest investment. (If you're an accredited investor, you can join here.) I'm interested in talking to other startups in the Diff network, especially anyone working on something weird and ambitious that touches the world of atoms. Please hit reply if you'd like to chat.

Diff Jobs

Companies in the Diff network are actively looking for talent. A sampling of current open roles:

  • A company building ML-powered tools to accelerate developer productivity is looking for software engineers. (Washington DC area)
  • A VC firm using data science and ML to source and evaluate opportunities is looking for a software engineer to lead their data engineering efforts. (Menlo Park, CA or NYC)
  • A VC-backed company reimagining retirement wealth and building a 401(k) alternative is looking for a founding CTO. (NYC)
  • A company building zero-knowledge proof-based tools to enable novel financial arrangements is looking for a senior engineer with a research bent. Ideal experience includes demonstrations of extraordinary coding and/or math ability. (NYC or San Diego preferred, remote also a possibility.)
  • An early-stage startup aiming to reduce labor costs by over 80% in a $100bn+ industry is looking for a part-time technical advisor with robotics experience; this has the potential to evolve into a full-time role. (NYC)

Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up.

If you’re at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.