The End of Localhost

All the Cloud's A Staging Env, and All the Laptops Merely Clients

Dev environments should be cattle, not pets. It looks likely that in the future, most development will not be done on localhost, the most precious pet of all.

See reactions on Hacker News and Twitter.

Aug 2022 update: I did an extended interview on InfoQ with Daniel Bryant!

Sep 2022 update: I did an interview with Richard MacManus of The New Stack!


Make the ultimate developer experience wishlist for the average rich-country developer in 2030:

  • Fast gigabit internet is cheap and everywhere (5G or mesh wifi)
  • Dev machines (laptops, tablets, VR) are cheap and have multiday battery life
  • Your apps build in a second regardless of scale, with tests and staging environments/deploy previews ~instantly live after every keystroke
  • Your personal dev environment travels with you no matter which device you use
  • Any app's environmental dependencies - everything from an HTTPS cert to a sanitized, sandboxed fork of the production database - are immediately available to any teammate ramping up to contribute any feature. No docs, no runbook.
  • You can go from idea to first customer in a weekend, using a combination of low-code builders and backends-as-a-service
  • You can scale up from MVP to unicorn in weeks, using one of the serverless or "new Heroku" platforms, with auth/payments/database/communication handled by world-class SaaS teams

You will notice that most of these items enable (even require) you to run things "live" on the cloud, not localhost.

Perhaps most importantly, the time wasted fixing bugs between dev and prod environments goes from 1-4 hours a week down to 0, if you can simply eliminate the discrepancy between dev and prod.

Aside: Realistically, you will always have some discrepancy between "staging" and production environments, but the distance between them should be much smaller than between dev and prod. For example, developing against HTTPS is always a pain on localhost but a requirement in prod.
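
To make the aside concrete, here is a minimal sketch of what HTTPS on localhost demands (assuming Node with TypeScript; the .pem filenames are hypothetical and would come from a local certificate tool such as mkcert). On a cloud preview environment, a trusted cert simply comes with the URL.

```ts
// Minimal sketch: serving HTTPS on localhost means minting and trusting
// your own certificate. The .pem paths are hypothetical - generated
// locally with a tool like mkcert, then trusted by your OS/browser.
import https from "node:https";
import { readFileSync } from "node:fs";

const server = https.createServer(
  {
    key: readFileSync("localhost-key.pem"), // hypothetical local key
    cert: readFileSync("localhost.pem"), // hypothetical local cert
  },
  (req, res) => res.end("hello over https"),
);

server.listen(8443, () => {
  // Browsers will warn on this URL unless the local CA is trusted.
  console.log("https://localhost:8443");
});
```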

To paraphrase Bob Metcalfe, if the browser reduced operating systems to "a poorly debugged set of device drivers", then the cloud is reducing the dev machine to a poorly maintained set of environment mocks.

That's it, that's the blogpost. The rest of this article is working out subpoints, examples, trends, and anecdata.

But I Need To Code on a Plane?

Maybe stop flying so much. Or get a good audiobook and rest your eyes. Maybe even talk to your neighbor! (if they seem social)

The "Future is Just Not Evenly Distributed" Argument

Many Bigcos who have invested in their developer productivity already work entirely in the cloud. This will be news to some of you, and old hat to others, so I didn't know how much emphasis to place on this.

But to my knowledge, this is the first time anyone has collected public info about Bigco dev environments in one place:

  • Cider is Google's web IDE that "mounts the enormous Piper file system and provides a super tight integration with internal review, build and testing tools" without downloading any source code to the local machine (source, source)
  • FB On-Demand is Facebook's "on-demand" environment provisioning that creates a live feature preview with more fidelity than local dev. At FB, "local development did not exist". (FB alum tweet, YC startup clone)
  • Etsy has all development happening in cloud VMs. "I don’t have the repo even checked out outside of that." (tweet)
  • Tesla moved from local to cloud for their vehicle OS development (source)
  • Palantir moved to remote ephemeral workspaces (source, thanks Ben Potter)
  • Shopify is "moving the majority of our developers into our cloud development environment, called Spin." (blogpost on Spin, source, thanks David Stosik)
  • Slack moved to remote environments and reported saving "12 minutes of bootstrap time required for a new development environment every time an engineer reserves an environment for work", while still respecting editor choice and terminal configuration. "For new hires, getting started with webapp development was a breeze. Remote development environments eliminated much of the prerequisites for webapp development, bringing down the setup time from about an hour to mere minutes."
    • "[Tobi Lutke said] that the team’s role was to create abstractions that permitted developers to defer their understanding of development environment construction until they were curious about it. For example, no developer is required to deeply understand the ruby interpreter in order to write Rails applications. The same should be true about development environments."
  • GitHub "left our macOS model behind and moved to Codespaces for the majority of GitHub.com development". (source, podcast)

The standard response to bringing up a bunch of Bigcos is "sure, but does it work for my small team?"

The answer is invariably "maybe yes, but also probably not in this current form", because most of these moves were done after sizable internal investment and take advantage of lots of proprietary infrastructure. However, as this tech commoditizes, we'll see more and more of it spread out as we find commonalities among audience subsets.

The obvious first part of commoditizing preview environments has already been done - one of Netlify's early innovations was making deploy previews for frontend projects ubiquitous. Virtually every docs site now uses some form of deploy preview system, and Jamstack apps can also use them for their release process (example). With database branching becoming increasingly common, this workflow will make its way further and further up the stack. Let's look at this stack next...

Jobs to Be Done of Localhost

In my original tweet calling out this trend I actually conflated three different usages of local development (as Anil Dash observed):

  • Editing code in a local IDE
  • Running code cloned to a local file directory
  • Spinning up a local database instance/cluster of services to code against

Conveniently, they are all under attack (did I miss any? Please let me know).

So no matter what you're doing on localhost, there's probably a well-funded startup or Amazon/Microsoft tool that does it better in the cloud.

The Inevitability Argument

One of the ironic tensions of humanity is that we say we want free will, privacy, self-sufficiency and decentralization, but our actions tend toward the hive mind, convenience, interdependence and central infrastructure. My theory for this is that social psychology, economics and technology are very powerful centralizing forces.

  • There is a long list of critical life essentials on which we are not self-sufficient. Many historians mark agriculture as a starting point of civilization - meaning that centralizing our food source helped us move past subsistence farming. Water supply and sanitation centralized in the 1700s. Electricity has basically been centralized from the start.
  • Closer to modern times, we're also seeing everything we use move to the cloud: movies/TV (from huge VHS and DVD libraries to a monthly Netflix/Disney/HBO subscription), games (from boxed games to free-to-play MOBAs and MMORPGs, Google Stadia, xCloud and PS Now), and knowledge (from encyclopedias to Wikipedia)
  • Even in the B2B domain:
    • Salesforce's "no software" move to SaaS was just the first in a long history of moving every imaginable application to the cloud
    • Box and Dropbox moved file storage to the cloud
    • Docusign/Hellosign moved legal contracts to the cloud
    • GAE/AWS/Azure moved the datacenter to the cloud
    • Most recently Suhail Doshi's Mighty app is moving even the browser to the cloud

To argue against localhost eventually going the way of the dodo is the developer equivalent of asserting that most people want to run their own generators or grow their own food.

The Outer vs Inner Loop Argument

You might argue that developers take so much pride in their tools that they will go out of their way to be self-sufficient in them. And yet:

  • Every Slack and GitHub outage is basically celebrated as a Developer "Snow Day" (unscheduled holiday due to acts of god)
  • Most companies run separate Build/CI/CD infrastructure anyway - in other words most apps don't get deployed without first going through some cloud infra as part of the critical path

I will grant that there's a difference between "We use CircleCI" and "Let's kill localhost". The term of art the industry has adopted to describe this difference in dev tooling is the Dev "Outer Loop" vs "Inner Loop" - the Outer Loop taking the git commit as the atomic unit of developer productivity, and the Inner Loop being all the developer activity that happens between commits.

[Diagram: the Dev Outer Loop vs Inner Loop]

So, to use those terms - we're fine with the cloud taking the slow Outer Loop, but people are concerned about network latency affecting the much faster Inner Loop.

  • With Sourcegraph, developers are effectively saying a remote service can do a better job of searching their code than they can locally
  • With GitHub Copilot, even autocomplete is being made better by being cloud-enabled

The degree to which the cloud can eat the Inner Loop is probably a question of reliability and latency - we are more than happy to hand over slow activities that take minutes, but nobody will tolerate 300ms latencies to see the result of a keystroke.
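
As a back-of-envelope sketch (all distances are assumptions for illustration, not measurements): even the speed of light in fiber puts a hard floor on round-trip time, which is why only edge proximity can plausibly serve keystroke-level feedback.

```ts
// Back-of-envelope: the theoretical minimum round-trip time (RTT) imposed
// by distance alone, ignoring routing, queuing, and server processing.
// Light in fiber travels roughly 200 km per millisecond (~2/3 of c).
const FIBER_KM_PER_MS = 200;

const minRttMs = (distanceKm: number): number =>
  (2 * distanceKm) / FIBER_KM_PER_MS;

console.log(minRttMs(50)); // nearby edge PoP:     ~0.5 ms
console.log(minRttMs(2000)); // regional datacenter: ~20 ms
console.log(minRttMs(9000)); // cross-ocean region:  ~90 ms
```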

Aside: It's probably worth a future essay breaking down the various components of the Inner Loop, as there are orders of magnitude differences in the latency of various activities we undertake and so different ideal solutions for each.

The Potential of Edge Compute

Ultralow latency is the domain of edge compute, and likely the final frontier of how the cloud can eat that "last mile" of the developer Inner Loop.

Anil Dash, CEO of Glitch, put it best:

I think it’s more likely the rise of tech like CRDTs & edge compute will blur the lines of what we actually think of as “local”.

Cloudflare folks working on Cloudflare Workers also agree:

"wrangler dev" actually runs the worker on the edge, and we use localhost as a proxy. it means you have access to resources/secrets from your real environment, and we implement copy-on-write on stuff like durable objects that gets discarded once your shut down your session

While not as low-latency, serverless folks like Tim Wagner (creator of AWS Lambda), Emrah Samdan (PM @ Serverless.com), and Tudor Golubenco (CTO Xata) also have a lot of sympathy for this view because of how easy it is to provision/scale serverless resources.

Pushback: It's Still Not Good Enough

Don't get overexcited here. My caution about judging this movement by today's technology is that, for this to succeed, provisioning must feel so cheap as to be "throwaway" - even a latency of 10 seconds to spin up a preview environment is too long for me, though your mileage may vary.

It takes a second to deploy a frontend preview with Netlify Drop and ~10 seconds with the Netlify CLI, but I still habitually use localhost for development because my iteration cycle is in milliseconds. I can and have moved part of that workflow to remote tools like Codesandbox, Gitpod and Stackblitz, but none of them are fully capable of replicating the full set of dependencies that I need for fullstack development. In fact, after one particularly bad livestream, I resolved to always use Netlify Dev (the Netlify local dev solution I used to work on) because the iteration loop of git-push-and-wait-for-deploy was so agonizingly slow (I had the same pain with AWS Amplify).

Other similar sentiments:

Localhost has been attempted to be killed for eons. Until the network is as fast as my disk, and the "remoteness" of it can disappear entirely, localhost is here to stay. (tweet)

Currently we have local development with impossible physics: assets that load immediately, APIs that respond in under a millisecond. So if dev goes to the cloud because the latency is acceptable, then we are finding a middle point that is acceptable for both devs and real users. (tweet)

But surely you can see that the latency question is just a matter of letting the Moore's Law of cloud infrastructure commoditization take its course. If it's not good enough today, then wait 5 years and check back again.

Other Notable Responses

People have very extreme:

  • positive views
    • "This is already the case in many big companies and killing local dev is going to be a huge win for developers." (Roopak from Bolt)
    • "My work on Airflow has made it clear how much supporting local dev increases the code surface area, when that code has almost no value in production." - (James Timmins, Airflow maintainer)
    • "Since joining GitHub, I had no reasons trying to set up a local environment. It’s trivial to develop on other team’s repos via Codespaces." (Jaana Dogan)
    • "We don't believe that local development will exist in the future" (Sam Lambert - the inciting quote for this blogpost!)
    • "In the long run, I expect most/all of developers doing things locally will go away. Developing, testing, building, running, deploying, etc. When developers need to run things locally today it’s a sign that cloud tools aren’t there yet imo." - Erik Bernhardsson
  • and negative views
    • Literally all of Hacker News hates it (and Reddit too but edgily)
    • "You pry localhost from my cold dead hands!" (tweet)
    • "Out of my cold dead hands. This is the final step in the road to the inescapable surveillance dystopia." (tweet)
    • "Nobody wondering whether it's a good idea to hand over what small power we have left as devs to a few private platforms." (tweet)
    • "General purpose computation on your own machine is probably going to be illegal in 20 years. It will be our greatest accomplishment if we can liberate even 1% of humanity from this soul-stifling metaverse. We increasingly are moving from stone age to bronze age computing. We need a bronze age collapse and the beginning of iron-age computing. In particular, we need computing that escapes the massive centralized palace economy model, even if only for 1% of humanity." (tweet)
    • Podcast discussion criticizing this blogpost in favor of Elixir

Kelsey Hightower tries to explain it:

  • "Seems the process of writing software has become so complex that 10 cores and 64 GB of RAM isn't enough. Or maybe this has more to do with the growing number of external dependencies and the related configuration required to manage it all."
  • "I thought the ability to configuration an application to use remote services would offer the best of both worlds. Keep the inner loop local while still leveraging managed services remotely."
  • "I got a feeling working around red tape is the number one reason remote dev environments are taking off."

Dan Abramov predicted this will happen in 5 years, not 10.

Simon Willison points out another benefit:

The killer feature of remote dev environments is when you mess something up in your environment and you can click a button and wait a few seconds and get a brand new environment that works

Paul Biggar sees a few drivers:

  • production is harder and harder to replicate locally
  • it's cheaper to pay for cloud dev machines than expensive laptops each year
  • services with high scale (e.g. Spanner) don't behave the same on localhost anyway - you just have an emulator

Pretty sure we're going to do fewer things locally and a lot more developing directly against the cloud in the future

Patrick McKenzie says:

It seems to me like all the bits needed to do this are already abundantly available and it’s waiting for a) one solid product team and, crucially, b) becoming The Right Way To Do It for one language/platform that rockets to mainstream success.

You’d want the “curl dockerinthecloudlets.go” to be the first line in all the tutorials and for that to be the last time users ever think about compute substrate or networking.

Further Reading