Internal Software as a Competitive Advantage
In 2019, the company then known as Facebook had one of its many privacy-related PR blowups: they had created an app with a built-in VPN that allowed Facebook to monitor all the other sites and apps users accessed, and they were paying users, including teens, up to $20 monthly to run it. This violated the App Store's rules, but they'd come up with a workaround: the app was installed through Apple's Enterprise Developer Program, which was meant for internal company apps, so it wouldn't be reviewed by the App Store.
Apple, naturally, was deeply disappointed in this violation of user privacy rights (or deeply excited by a chance to tweak a competitor while getting plaudits from media and regulators—who knows?), and responded by revoking Facebook's developer certificate.
Chaos ensued: this shut down Facebook's internal beta tests of new apps. It also broke the internal apps company employees used for things like ordering lunch and getting rides on the company shuttle. All of these apps were tied to Facebook's enterprise developer certificate, so all of them broke at once.
This is a classic case where something's importance only becomes visible when it breaks. "Software companies" produce plenty of visible, user-facing software, and that's generally how to think about them. But they also have lots of internal software, which is rarely used outside of the company. If the user-facing products set the current revenue run-rate potential—i.e. there's a market for the product and getting it out there is a matter of, well, marketing—internal software sets the speed limit on improvements.
There's an opportunity cost to building these internal tools; someone adding appetizers to the food-ordering app is not spending that time adding more purchasing options to an e-commerce feature on Instagram. Given tech companies' hiring priorities over the last few years, and their firing priorities over the past few months, it's clear that engineering talent is still a scarce resource.
A good model for how big companies think about this resource is to revisit what might be called the depth framework for managing growth. It goes like this:
- Shallow companies manage towards whatever the next reporting period is. That might be a month, a quarter, or a year, but regardless they're willing to sacrifice overall expected value if it means that revenue lands in the current period, or if it pushes expenses out to the next one. (If you're ever negotiating a big purchase with a big-enough-to-be-public company, it can be worthwhile to time negotiations so that they're finalized in the last few days of the quarter. If nothing else, salespeople are very responsive then.)
- Mid-depth companies try to target a growth rate over time, and execute a controlled descent towards growth at roughly the pace of GDP.
- The deepest growth companies are the ones that optimize in year N for being able to grow at a fast pace in, say, year N + 5. It's not just "what can we do to hit the numbers this quarter and this year?" but "What can we do today that will be accelerating our growth years from now, and accelerating it enough to overcome the natural course of entropy and of slower growth at scale?"
This is a strict criterion! It means not just modeling the future state of the world, but modeling your company's ability to profitably influence that state. And there are basically two things a company can do to optimize for this: it can constantly start new ventures, with the hope that as its core business levels off there will be other businesses right at the accelerating part of the S-curve that can absorb cash flow from mature parts of the company and reinvest it profitably. (This has been Amazon's model.) The other broad plan is to constantly look for ways to increase internal efficiency and leverage. There are downsides to a relentless focus on efficiency—it's exhausting to justify every kind of spending, for example, though sometimes necessary. But in a software business, or really any business with high fixed costs and low marginal costs, the real goal is to increase the leverage of the most productive employees.
And that leverage can be achieved in two ways: first, by giving them better tools. And second, by having them build better tools. Paul Graham once suggested that this was a promising way to partition work in a software company: the best developers would build tools, which the rest of the company would use to build products. In practice, it's hard to find a company where the builders of internal tools are considered the superstars while people who build things for external consumption envy them. On the other hand, the model does sort of exist in the sense that companies whose products are complementary to developers—Stripe, Twilio, AWS, etc.—have done well for themselves.
There are some interesting case studies of companies using internal software tools as a competitive advantage. Google has an internal code search tool—yes, a search engine is one of those products that gets so big it needs its own search engine. The company has built code review and version control software that's largely in-house. (This piece from Sourcegraph has some good background on how it came about: as with good customer-facing products, the origin was an engineer dissatisfied with somebody else's solution.) There's Cider, their web-based development environment. There's even an internal meme generator, whose content has leaked once or twice and which tends to be a Schelling Point for harsh employee feedback.
They also have internal no-code tools for non-technical employees who need to build dashboards and automate processes, internal tools for querying databases, and an internal CRM for their sales team.
Some of this looks excessive. Surely there's an external tool that can offer the same features without burning lots of engineering hours. On the other hand, at sufficient scale the math reverses somewhat: suppose a fifth of Google's 190k+ employees work in some kind of sales and customer support capacity. A CRM product priced at around the per-seat cost of Salesforce's Enterprise Sales Cloud would run Google about $34m per year. So if we assume Google's CRM scales with the size of its business, then we can think of that internal tool as a CRM company with $34m in ARR and a 20% growth rate. Quality SaaS companies with those characteristics tend to trade at a mid- to high-single digit multiple of revenue, so this is plausibly an asset worth around a third of a billion dollars to the company.
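As a sanity check on that math, here's the back-of-envelope calculation. The seat count follows the text; the per-seat price and the revenue multiple are rough assumptions, not disclosed figures:

```python
# Back-of-envelope value of Google's internal CRM, treated as if it were
# a standalone SaaS business. Seat count follows the text; per-seat pricing
# and the revenue multiple are illustrative assumptions.

employees = 190_000
seats = int(employees * 0.20)          # ~a fifth in sales/support roles

price_per_seat_per_year = 900          # assumed ~$75/seat/month
arr = seats * price_per_seat_per_year  # implied annual recurring revenue

revenue_multiple = 9                   # high-single-digit SaaS revenue multiple
implied_value = arr * revenue_multiple

print(f"Seats: {seats:,}")                          # 38,000
print(f"Implied ARR: ${arr / 1e6:.1f}m")            # $34.2m
print(f"Implied value: ${implied_value / 1e6:.0f}m")  # ~$308m, i.e. roughly a third of a billion
```

The point of parametrizing it this way is that the conclusion is robust: any plausible seat count and enterprise-tier price lands in the tens of millions of ARR, and any plausible SaaS multiple turns that into a nine-figure asset.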
And there are strategic reasons to build such things in-house. There's always the possibility of turning them into independent products; it's not a coincidence that an email-driven company like Google ended up creating the world's most popular email service: dogfooding is a powerful force! It also means that Google can tightly integrate internal data with internal tools, and that it's not paying for things it doesn't use.
Most internal products, though, won’t be released outside the company and won’t be widely-used if they are. Customer-facing products are more prestigious than internal ones, which tends to attract superstar hires. But this, too, can be beneficial for the company building internal ones: a company that can only hire superstars has to be risk-averse in its hiring, which makes it very hard to scale. If the company has room for merely very good programmers as well as truly excellent ones, though, it can more sensibly take risks on early-career hires who will initially be tweaking an internal CRM or messing with an employee review system, but who might end up building something more revenue-accretive later on.
But maybe using a software company as a case study in the advantages of internal software is cheating. Of course they're good at building internal software! It's like pointing to a PR company with really good PR and saying that this is proof that PR matters. So let's look at another example:
SpaceX is well-regarded for hurling heavy objects into space at surprisingly affordable prices. They've gotten pretty good at this, and space is livelier for it. Part of the SpaceX model is to push for extreme vertical integration while being willing to use external vendors if there's a general solution. These are not as contradictory as they sound: the goal is to own every aspect of the process, and to be able to do things in-house in principle, but not to gratuitously reinvent things that have been perfected in other contexts already. One prosaic consequence of this is that SpaceX is well-positioned in vendor negotiations, since they start from the perspective that the things they buy externally are somewhat commoditized, and since they have the capability of rejecting overpriced parts and building their own. But the more interesting result is that it's a tool for information flow: a problem that shows up only in launches may stem from manufacturing, and the more these are part of the same gigantic process—ideally with a thorough audit trail—the more solvable they are.
There turn out to be echoes of this in earlier eras of spaceflight: NASA switched from testing individual components to testing everything all at once when they discovered that many failures were due to surprising interactions between things that worked on their own ~100% of the time, but worked together a bit less reliably.
Having these systems set up also enables a good permissions system. At a hardware business that's subjecting parts to hard-to-model stresses, this matters in both directions: there's potential downside from variance in quality among workers, and potential upside to testing something new. A top-down system will tend to forbid lots of experimentation by default, but a more fine-grained one can allow it.
(An interesting detail here is that SpaceX does not use very much machine learning. ML is great in cases where there's a large sample size and high tolerance for error; if you're doing spam filtering, the cost of mistakes is pretty minimal and there are plenty of messages to look at. But with rockets, it's better to insist that the system is deterministic, since determinism is an asymptote you can approach given enough sensors and a good enough model of the physics involved. And you don't want to run many A/B tests when the cost of a mistake can be in the hundreds of millions ($, WSJ). They do, however, use NLP to look at documentation and notes to find language that indicates uncertainty. When the stakes are high enough and the sample size is small enough, the best place to use statistical techniques is in spotting cases where people aren't being deterministic enough.)
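That uncertainty-spotting idea is easy to sketch. What follows is purely illustrative—a toy keyword filter, not SpaceX's actual NLP pipeline, and the hedge-word list is invented—but it shows the shape of the technique: scan free-text notes for hedging language and surface the most uncertain-sounding passages for review.

```python
import re

# Toy sketch of flagging uncertain language in engineering notes.
# The hedge list and scoring are invented for illustration; a production
# system would use a trained model rather than keyword matching.
HEDGES = re.compile(
    r"\b(should|probably|might|maybe|assume[sd]?|roughly|"
    r"approximately|we think|not sure|TBD)\b",
    re.IGNORECASE,
)

def flag_uncertain(notes: list[str]) -> list[tuple[str, int]]:
    """Return (note, hedge_count) for notes containing hedging language,
    most uncertain-sounding first."""
    scored = [(note, len(HEDGES.findall(note))) for note in notes]
    flagged = [(note, count) for note, count in scored if count > 0]
    return sorted(flagged, key=lambda pair: -pair[1])

notes = [
    "Torque spec verified against drawing rev C.",
    "Seal should probably hold at flight pressure; we assume nominal temps.",
    "Valve timing TBD, might need to revisit after static fire.",
]

for note, count in flag_uncertain(notes):
    print(f"[{count} hedge(s)] {note}")
```

The verified note never appears in the output; the two hedged notes do, ranked by how much hedging they contain—which is exactly the inversion the paragraph describes: statistics applied not to the rockets, but to the people writing about them.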
Caring about internal software clearly differentiates companies in terms of what they can do. But it also has a subtle effect on what employees can't do: the more distinctive a company's stack is, and the more useful it is, the harder it is for people to go somewhere else and be equally productive. There's a U-shaped curve to this: entry-level people can still move around, because they haven't gotten locked into a particular technology stack and don't have the sunk-cost problem associated with it. And more senior people are generally solving more abstract problems; they benefit from the productivity upside of internal software, but don't directly depend on it. But in the middle there are people who have gotten very, very good at doing things a particular way, and who will be most productive when they can do things exactly that way somewhere else. And when company growth slows, as it inevitably does, the size of this middle cohort grows: the incoming class is relatively smaller, and there are fewer promotion opportunities. This was almost certainly not the master plan when someone forked an open-source project a decade ago and started building their company's customized in-house version. But it is an added benefit: like lavish employee perks, good tooling makes it harder for people to imagine working anywhere else. And the longer this dynamic is in play, the stronger it gets. Though this, too, isn't a perfect win. Internal tools that make it hard for people to quit also make them harder to replace; tenure becomes a form of job security. And an expanding middle is a symptom of corporate as well as human middle-age. So the retention gain is really a retention tradeoff: more efficient than it otherwise would be, but less ambitious than it could be.
Disclosure: Long META, MSFT, AMZN. No position in SpaceX yet.
The TechCrunch headline is a good example of Scott Alexander's The Media Rarely Lies thesis: the headline says "Facebook pays teens," and the article notes, 29 paragraphs in, that fewer than 5% of the users who opted in were teens, and that these younger users had to have signed parental consent forms. It's technically true that there were teens in the panel, just as it's technically true that TechCrunch is a blog where coverage has been available by paying a bribe to a writer. It would be misleading to characterize the blog that way without additional context, but it wouldn't technically be false. ↩︎
On the other hand, one quick taxonomy divides Google's non-search products into 1) things that drive more searches, or that are nice to have as infrastructure for a data-intensive business like search, or 2) failures. This is a two-sided testament to how great a business search is. First, it means that some products are more valuable specifically because Google makes them; email, for example, meant that more Google users were always logged in, which was great for data collection. And second, it means that even a well-considered new product has little hope of being a material contributor compared to search; it's easier to make search 1% bigger than to build something new that's 1% as big as search. ↩︎
Vertical integration is a good idea in new industries, both because the supplier and retailer ecosystem is smaller and less dependable and because it's never clear where the value will eventually accrue. Consider Henry Ford's River Rouge factory, which ingested iron ore and rubber at one end and spat out ready-to-drive Model As at the other. This was partly a big bet on the scale gains from having one giant factory, but it was also a hedge—what if the way the industry shook out was that fifty different companies could make equivalently-priced cars but one central parts company cleaned up making uniquely good brakes or transmissions? "Just do all of it in-house" is also a comparatively cheap option in a young and unspecialized industry, when "all of it" isn't all that much. ↩︎
There's an imprinting process that happens early in some companies' existence, when they hire their first fairly-senior technical person from a company that has solidified its processes, and that person brings along a few of their smartest colleagues. One result of this is that the interview process at these companies is often a close relative of the interview process at the company from which they recruited a big batch of talent at some particular time, but often with a slower pace of change because they don't have time to evolve their processes. It's a bit like the way modern American accents are apparently more similar to Shakespearean accents than modern English accents are. ↩︎
A Word From Our Sponsors
If time is money, why are you wasting it?
Maximize investor research and diligence returns with an end-to-end platform that solves inefficiencies. Tegus streamlines the information investors need to move quickly, build conviction and make better decisions to outperform the market.
Right now, The Diff readers can trial the Tegus platform for free at http://www.tegus.com/thediff.
The very short bull case for Meta Platforms is that they're not committed to being the first company to offer a given feature, just to being the last one standing. They're quite open about this, and it makes sense at their scale: they have the highest opportunity cost of any social platform, so the impact threshold for new features is higher, and that means it's almost always more rational for a smaller player to test disappearing content, short-form videos, status updates, etc. and for Meta to reimplement them once it's clear that they're one of the standard ways people socialize online.
This now extends to some of the experiments running on Twitter right now: Meta is offering paid verification services similar to Twitter's, albeit at a higher price point. (Like many companies, they launched this in New Zealand and Australia: testing a feature in a small English-speaking market is another way to de-risk it, and anyone keeping a close eye on big platforms would do well to keep an extra close eye on new features in those markets.) Social networks are generally reluctant to charge power users, because the incremental content they generate creates more ad revenue than a subscription can. But one way to look at this is that it's a deposit, a way for some users to precommit to using the platform heavily in exchange for getting an engagement boost. In that model, the net impact of the subscription is still incremental ad revenue, and the actual subscription dollars are just a nice bonus.
A few weeks ago, The Diff wrote about the impact of Electron, which made it easier to write independent desktop apps but also meant that running lots of apps required a ridiculous amount of RAM ($). There's a tragedy of the commons here, where it's generally in the interest of the software business to ensure that hardware is not the constraint on running lots of applications, but it's in the interest of every app company to quickly ship something their customers can alt-tab into. That performance cost may be adding up: Microsoft is close to shipping a new version of Teams built on Edge rather than Electron, with less battery usage and better performance.
Microsoft may be one of the companies best able to internalize these externalities; their customers are typically running multiple Microsoft products at once, so when a Microsoft product consumes an excessive amount of memory, it's eating up memory that's needed for other Microsoft products. Monopolistic companies have many unhealthy incentives, but one of their more positive ones is the incentive to internalize externalities—because if they capture enough of the total value being created, they'll capture what would otherwise be more diffuse improvements.
The WSJ highlights the trend of companies that went public in the 2020-21 cycle going private again in the last year ($, WSJ). One way to think about the cycle is to think of the market's demand for companies relative to how soon they ought to go public. For example, the market in 1999 was happy to back companies that really should have incubated for another half-decade or so to figure out what their business model really was, but the market in the post-crisis period was only willing to touch companies that really should have been public years before. (The shorthand for this is that Microsoft went public when it was worth ~$750m, and Facebook went public worth $100bn.) If the market shifts from wanting companies public two years before they really ought to IPO to wanting them public two years after, the biggest swing in valuation will be for companies within that range; they'll be suddenly-good IPO candidates on the way up and suddenly-great LBO candidates on the way down, and many of them will revisit public markets later on at healthier valuations.
There's another story here, about the fixed cost of research: when there's a burst of IPOs, investment managers run into a problem: they can extrapolate the trend and add headcount, or they can just ask analysts to cover more names and hope that some stocks will fall off the tracking list before everyone's overwhelmed. When the cycle reverses before everyone can fully staff up in response to it, the result is that many companies end up under-covered. And that tends to create inefficiencies. Just as plenty of companies IPOed (or, more likely, de-SPACed) at a multiple of their true worth, some of the stocks that dropped 95% and aren't being tracked by anyone would have dropped by 85% instead if they'd gotten a little more attention.
Back to the Office
On Friday, Amazon announced that it's bringing most remote employees back to the office three days a week. The memo outlining this has a long list of the benefits of in-person work—more structured discussion in meetings, more room to riff on ideas in front of whiteboards after meetings, easier spontaneous interruptions, and better team bonding. Most of these are marginal, but there's a cumulative impact. Turning to the "deep growth" point from today's main piece, Amazon is a great example of the model of launching as many small ideas as possible so there will always be something new approaching peak growth. For them, the difference between growing revenue in the teens five years from now and growing it in the mid-single digits may come down to literally one post-meeting whiteboarding session.
Why Private Equity?
One theory of high CEO compensation is that it's a classic case of concentrated versus dispersed interests: individual shareholders don't pay a very high cost if their CEO makes an extra million dollars, but the CEO gets, well, an extra million dollars. One challenge to this model is that private equity radically concentrates shareholder interests, but pays executives even better ($, FT). As it turns out, PE looks at CEO hiring very differently from public markets, preferring to hire from the outside rather than promote from within, and tilting compensation even more to equity and bonus payments rather than fixed salaries. And this means that private equity performs an important function in the talent marketplace: good companies will tend to accumulate good management talent, but they can't always get the maximum leverage from it because very independent senior management will end up clashing with CEOs. Buyouts provide an escape valve, where the COO of one company whose CEO isn't stepping down any time soon can run their own show for a while. And they serve another talent-management function, of providing a graceful and lucrative way for an underperforming CEO to retire on a high note. Executive talent is one of the longest lead-time products out there, and PE is one of the ways to keep it in balance.
Companies in the Diff network are actively looking for talent. A sampling of current open roles:
- A company building tools to enable zero-knowledge proofs is looking for multiple roles, including a fullstack engineer. (Remote)
- A VC backed company reimagining retirement wealth and building a 401k alternative is looking for product/GTM/bizops generalists. (NYC)
- A profitable startup is looking for SDRs to market its AI-based services that help small companies accelerate their growth. (SF)
- A well funded early stage startup founded by two SpaceX engineers is looking for a frontend engineer to help build the software stack for hardware companies. (Los Angeles)
- A hedge fund is looking for an experienced alternative data analyst who can help incorporate novel datasets into systematic strategies. (NYC)
Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up. We're always onboarding new companies, so the available roles change frequently.
If you’re at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.