- Jon Stokes has a more fleshed-out look at the analogy between H.P. Lovecraft and AI. There's definitely an uncanny valley effect when interacting with generative AI art or text: it gets some things exactly right and other things very wrong, in a way that's sometimes human-level but inhuman. It's convenient that fiction offers such good models for understanding new developments, though it would be ideal if that fiction didn't revolve around often-apocalyptic horror.
- This is an older piece, and a good look at how far we've come since 2015: the unreasonable effectiveness of recurrent neural networks. It might be a good signpost for when our ability to do interesting things with AI, at least given enough input data, started to seriously outstrip our ability to reason about it. We do have effective rules of thumb, but they're more derived from observations than from an underlying theory. Which is not disqualifying; sometimes practice runs ahead of theory for a while, and then generates enough data to improve the theory instead.
- Forbes has a profile of Scale AI CEO Alexandr Wang. Scale specializes in one of the more annoying parts of the AI business: finding human beings who can annotate data to feed into algorithms. This, of course, does not scale especially well, and it's a business that's fundamentally driven by price. But Scale is collecting some data for meta-models; when labor is cheap, one form of QA is to have a given annotation performed by two people and to double-check when they disagree, and predicting the optimal number of labelers can be partly automated. And there's a lot of value in finding a business that competes on cost and creating a structural cost advantage.
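The double-labeling QA loop described above is simple to sketch. This is a minimal, hypothetical illustration (the annotator functions and labels are invented for the example, not Scale's actual pipeline):

```python
# Hypothetical double-annotation QA: each item is labeled by two annotators;
# items where they disagree are escalated to an adjudicator for review.

def qa_annotations(items, label_a, label_b, adjudicate):
    """Return final labels plus a count of disagreements escalated."""
    final = {}
    disagreements = 0
    for item in items:
        a, b = label_a(item), label_b(item)
        if a == b:
            final[item] = a  # agreement: accept the shared label
        else:
            disagreements += 1
            final[item] = adjudicate(item)  # disagreement: double-check
    return final, disagreements

# Toy example: two annotators classifying strings as "cat" / "dog".
items = ["tabby", "beagle", "sphynx"]
ann_a = {"tabby": "cat", "beagle": "dog", "sphynx": "cat"}.get
ann_b = {"tabby": "cat", "beagle": "dog", "sphynx": "dog"}.get
labels, n_flagged = qa_annotations(items, ann_a, ann_b,
                                   adjudicate=lambda item: "cat")
```

The observed disagreement rate is the raw material for the meta-model: task types where `n_flagged` stays near zero can drop to a single labeler, while noisier ones justify two or more.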
- Gizmodo interviews Victor Wong, senior director of product management for Google's Privacy Sandbox. It's a fun interview because Privacy Sandbox is 1) a way to improve consumer privacy, and 2) a way to increase Google's relative and absolute data advantage in order to sell more ads at higher margins. Both parties to the interview know this, and they also know which parts can't be said out loud. Fundamentally, the problem with online privacy is that the vast majority of people love everything about getting their privacy violated—the customization, the ad-subsidized services, continuous improvements in accessibility, spam detection, etc.—except for the part where it entails companies collecting data on everything about them all the time. (A minority of people think very seriously about privacy and have well-considered reasons to dislike all of this stuff, and I will admit that I'm skeptical of a lot of it—I'd be better-off if my feed was what I wanted to want, not what I empirically want—but this is not the usual experience.) This creates lots of potential energy for aggressive media cross-examinations, and the results are informative.
- One of the most fun genres of profiles is the one where a writer spends a long time assembling a comprehensive dossier on a total fraud. Economic considerations being what they are, the sub-subgenre that gets subsidized the most is identifying perpetrators of accounting fraud at public companies, but this profile of a crime expert who faked his expertise, yet still got to help put people in prison for long periods, is a worthwhile read. Much of the information was available in public records and through Google searches. Which raises the question: what was the base rate for this kind of behavior before it was this easy to catch?
- In this week's Capital Gains, we look at price discrimination, and how it subsidizes fixed-cost industries like airlines and ride-sharing by extracting every last penny from the people who are willing to pay, or have no choice but to.
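The mechanism has simple arithmetic behind it. A stylized sketch, with invented numbers (not from the linked piece): a fixed cost that no single posted price can cover becomes coverable once different customers pay different prices.

```python
# Toy arithmetic (all numbers invented): a single price fails to cover a
# fixed cost that a segmented price schedule covers comfortably.

FIXED_COST = 1000
# (number of customers, willingness to pay) for three customer segments
segments = [(100, 5), (50, 12), (10, 40)]

def revenue_at_single_price(price):
    # Everyone whose willingness to pay meets the price buys at that price.
    return price * sum(n for n, wtp in segments if wtp >= price)

# The best a single posted price can do (try each segment's willingness to pay):
best_single = max(revenue_at_single_price(wtp) for _, wtp in segments)

# Perfect discrimination: charge each segment exactly what it will bear.
discriminated = sum(n * wtp for n, wtp in segments)

# best_single comes to 800, below the 1000 fixed cost;
# discriminated comes to 1500, well above it.
```

Airlines' fare classes and ride-sharing's surge pricing are coarser, real-world versions of the same idea: the high-willingness-to-pay customers end up funding the fixed costs that make service at the low end possible at all.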
- Full Faith and Credit is a nice Washington memoir from William Seidman, who chaired the FDIC during the 1980s S&L crisis. Seidman has a great sense of humor about how the system works (as in many other cases, you'll get an honest look at some games people play from someone who didn't especially enjoy playing them). The book is a good reminder that bureaucracies work the way they do because they're subject to so many constraints. Sometimes the obviously correct policy doesn't get implemented because there are non-obvious institutional forces pushing against it.
- Drop in any links or comments of interest to Diff readers.
The share of Diff content that directly or indirectly references AI has been rising fast. Extrapolate a few months, and 200% of the newsletter will cover the latest advances in artificial intelligence. If there's a trend like this that affects many people in many different ways, what's the optimal approach to allocating time and attention? Spend more and more of your time on AI, so you stay on the bleeding edge? Or try to focus as much as possible on things that AI will have a smaller effect on, since many improvements will be winner-take-all but will be complements to less automatable work? (Readers may recall this post on engineering effectiveness, which can now be applied to non-engineers.)
A Word From Our Sponsors
Elevate your investment research with Tegus
Designed to keep you ahead of the curve, our research platform empowers you to source new opportunities, map markets, and pressure test your investment theses at a fraction of the cost of other tools.
What sets us apart?
With custom-sourced expert calls at an average of $300 per call and the flexibility to customize call lengths, deep private company data that spans seed stage to mature companies, and robust benchmarking, charting, and comps, Tegus delivers the competitive edge you need to make informed decisions with confidence.
Trial Tegus today for free and experience the power of streamlined investment research.
Companies in the Diff network are actively looking for talent. A sampling of current open roles:
- A company using Web3 to decentralize customer loyalty programs is looking for a founding senior engineer with Solidity experience and an interest in brands and the arts. (Brooklyn)
- A firm using NLP and other ML tools to give retail and institutional investors access to custom-tailored portfolios is looking for a data engineer. (NYC)
- A profitable AI startup is looking for a product designer for its new services that help small companies accelerate their growth. (SF)
- A VC firm using data science and ML to source and evaluate opportunities is looking for a software engineer to lead their data engineering efforts. (Menlo Park, CA or NYC)
- A company building ML-powered tools to accelerate developer productivity is looking for software engineers. (Washington DC area)
Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up.
If you’re at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.