The Distance to Compute - Part 1
Layers of Abstraction and the Growth of Distance
2025-10-29

When the first general-purpose electronic computers emerged in the mid-twentieth century, the relationship between human and machine was intimate, direct, and technically unforgiving. In the case of the ENIAC, completed in 1945 at the University of Pennsylvania, programming was done by physically rearranging cables and setting switches. Each problem required a reconfiguration of the machine's wiring. This was computing without layers — the user stood at the very edge of the machine's logic, manipulating its state by hand.
In that world, the distance to compute — the conceptual and procedural gap between human thought and machine action — was short in one sense: there was nothing between you and the machine. But that closeness came at a cost: only a handful of specialists, mostly women with backgrounds in mathematics and engineering, could operate the machine at all. The overwhelming complexity excluded almost everyone else.
From Direct Wiring to Stored Programs
The first great increase in distance came with the stored-program concept, articulated by John von Neumann and realized in machines like the Manchester Baby (1948) and EDSAC (1949). Instead of rewiring hardware for each task, instructions could be loaded into memory as data. This was more abstract: the "program" became a manipulable artifact, not a physical configuration.
Ironically, this new flexibility moved the user further away from the physical workings of the machine. Writing instructions in binary or octal form — the early machine code — meant working with symbolic representations rather than electrical states. Fewer people now needed to understand the exact layout of circuits, but more needed to learn a specialized numeric language.
The payoff was enormous. Problems could be redefined in minutes rather than hours, programs could be stored and reused, and machines could perform sequences of operations far beyond human calculation speed. The distance had grown, but so had the pool of potential users — from dozens to hundreds worldwide — a growth made possible by the increasing compute capacity of postwar hardware.
Assembly Language and the First Abstraction Layer
In the early 1950s, the first assembly languages appeared, introducing a symbolic shorthand for machine instructions. Instead of 10011010, the programmer could write ADD. The assembler, itself a program, translated these symbols into binary.
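The translation step can be made concrete with a toy sketch. The mnemonics, opcodes, and 8-bit instruction format below are invented for illustration; no real instruction set worked exactly this way.

```python
# Toy assembler sketch: mnemonics and opcodes are invented, not from any
# real machine. Each instruction becomes an 8-bit word: a 4-bit opcode
# followed by a 4-bit operand.
OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011, "HALT": 0b1111}

def assemble(lines):
    """Translate symbolic instructions like 'ADD 5' into binary words."""
    words = []
    for line in lines:
        parts = line.split()
        mnemonic = parts[0]
        operand = int(parts[1]) if len(parts) > 1 else 0
        words.append((OPCODES[mnemonic] << 4) | operand)
    return words

program = ["LOAD 3", "ADD 5", "STORE 7", "HALT"]
binary = assemble(program)
print([f"{w:08b}" for w in binary])
```

The programmer writes the left-hand column; the assembler, itself a program consuming machine time, produces the right-hand one.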
This was the first explicit layer of abstraction between the user and the machine. The programmer no longer needed to remember numeric opcodes or manually convert addresses; the machine's available compute could now be spent on translation. This added computational overhead — the assembler had to run before the program could run — but hardware performance was improving fast enough to absorb it.
This pattern — using new compute capacity to insert a more user-friendly layer — would repeat for decades. Each time, the distance to the hardware grew, but so did the number of people who could now cross it.
High-Level Languages: From Machine to Human Logic
The late 1950s saw the introduction of high-level languages like Fortran (1957) and COBOL (1959). These allowed programmers to express algorithms in a form closer to mathematics or English. A Fortran statement like DO 100 I = 1, 10 could produce dozens of machine instructions once compiled.
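The one-to-many expansion can be sketched as follows. The "instructions" emitted here are symbolic pseudo-ops invented for illustration, not the actual output of any Fortran compiler, but they show how a single counted-loop statement fans out into initialization, body, increment, comparison, and branch steps.

```python
# Hedged sketch: expand a counted loop (like Fortran's DO 100 I = 1, 10)
# into a flat list of symbolic instruction-level steps. The pseudo-ops
# (SET, INC, CMP, BLE) are invented for illustration.
def expand_do_loop(var, start, stop):
    steps = [f"SET {var}, {start}"]            # initialize the counter
    steps += ["LABEL top"]                     # loop entry point
    steps += [f"... body of loop using {var} ..."]
    steps += [f"INC {var}",                    # increment the counter
              f"CMP {var}, {stop}",            # compare against the bound
              "BLE top"]                       # branch back if not done
    return steps

steps = expand_do_loop("I", 1, 10)
print(len(steps), "instruction-level steps for one source statement")
```

A real compiler emits far more than this toy does — address arithmetic, register allocation, and bounds handling all add instructions — which is where the "dozens" come from.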
This was a dramatic leap in distance. The programmer no longer worked in the machine's own logical structures, but in a conceptual framework native to human reasoning. The machine became an interpreter of human intention. The price of this shift was more compute — compilation itself became a significant workload — but the hardware of the day, powered by second-generation transistors and increasing clock speeds, could handle it.
As the distance grew, the circle widened: now engineers, scientists, and business analysts could write software without ever seeing a memory address or a register map. Each abstraction layer made computing "slower" in the short term — more steps between instruction and execution — but the abundance of compute power turned that overhead into an investment in accessibility.
Operating Systems: Mediating the Machine
By the 1960s, operating systems added yet another layer. Before their widespread adoption, programs had to handle input/output devices directly, manage their own memory, and coordinate with other processes. With an operating system like IBM's OS/360 (1964) or later Unix (1969), these concerns were delegated to the system software.
The user's conceptual distance from the hardware widened again. A programmer could now treat a disk drive as an abstracted file system, or a printer as an output stream, without caring about its physical mechanics. The OS consumed cycles to perform these translations, but again, hardware gains made it possible.
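That uniform abstraction is still visible in modern systems: a function written against a generic "stream" can drive an in-memory buffer, the terminal, or a file on disk without knowing which is behind it. The file name below is arbitrary, chosen only for this sketch.

```python
# Sketch of the OS's uniform I/O abstraction: the same write interface
# covers an in-memory buffer, the terminal, and a disk file, so the
# caller never touches device mechanics. File name is illustrative.
import io
import os
import sys
import tempfile

def report(stream):
    """Write to any stream-like object; the OS hides what is behind it."""
    stream.write("42 widgets processed\n")

buffer = io.StringIO()             # an in-memory stream
report(buffer)
report(sys.stdout)                 # the terminal, as an output stream

path = os.path.join(tempfile.gettempdir(), "report.txt")
with open(path, "w") as f:         # a file on disk
    report(f)

print(buffer.getvalue(), end="")
```

Each call costs extra cycles of translation inside the OS, but the caller's code is identical in all three cases.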
Time-sharing systems like MIT's CTSS (1961) and later Multics further expanded this model by enabling multiple users to interact with a single machine simultaneously, each believing they had exclusive access. The computational cost of such illusions was high, but transistor counts and processing speeds were rising steeply.
The GUI: Metaphor as Interface
The graphical user interface (GUI), pioneered at Xerox PARC in the 1970s and commercialized by the Apple Macintosh in 1984, represented a massive expansion in distance. The user no longer issued commands or wrote code; they manipulated icons, windows, and menus. The actions were metaphors — dragging a file to the trash did not actually "move" anything physically — but the system translated them into operations on the underlying data.
Rendering graphics, managing overlapping windows, and tracking pointer positions consumed vast amounts of compute relative to text-based systems. In 1984, the Macintosh's Motorola 68000 CPU and 128 KB of RAM were considered barely adequate for this new interface. Yet as processors doubled in speed every 18-24 months, the overhead became trivial, and the GUI became standard.
The result was an explosion in users: millions of people could now operate a computer without knowing any commands, file structures, or programming concepts. The distance had grown dramatically — the average user's mental model of the machine was now almost entirely metaphorical — but the base of participants expanded by orders of magnitude.
The Web and Search: Abstraction of Location and Structure
The World Wide Web in the early 1990s added another thick layer. The user no longer needed to know where information was stored or how it was organized. A hyperlink could jump across continents. A URL became a human-readable pointer to a resource, hiding IP addresses and routing protocols.
Search engines abstracted even the location of documents. Typing a few words into Google returned ranked results from billions of pages, without the user needing to understand file formats, directories, or indexing methods.
All of this required enormous compute resources — web servers generating pages dynamically, crawlers indexing content, ranking algorithms scoring relevance — but hardware improvements and distributed computing made it viable. The distance grew again: the average user's understanding of "where" their data was and "how" their request was fulfilled became minimal.
Mobile and the App Ecosystem: Context-Aware Abstraction
The smartphone era, beginning with the iPhone in 2007, turned the computer into an always-available personal assistant. The app ecosystem abstracted general-purpose computing into thousands of small, purpose-built interfaces. Sensors, GPS, and connectivity allowed apps to tailor their behavior to context without user intervention.
These layers demanded even more compute overhead — location services, background synchronization, UI animations — but the miniaturization and efficiency of mobile processors made them invisible to the user. The gap between human mental model and actual machine operations was now vast, yet billions could use the technology with minimal training.
AI as the Universal Interface
The arrival of consumer AI interfaces in the early 2020s — notably ChatGPT in 2022 — added the thickest abstraction layer yet. A large language model can sit atop decades of accumulated layers: languages, operating systems, network stacks, and distributed cloud infrastructure. It interprets open-ended human language and maps it to precise machine operations.
For the user, the distance is now almost total: they need not know which tools the AI will call, what algorithms it will run, or even what format the result will take. They issue a request, and the system handles the rest.
This is only possible because of staggering compute availability, underpinned by massive, energy-intensive infrastructure. Without the exponential growth of both processing power and the energy systems required to sustain it, the AI interface would be too slow, too costly, or too limited to act as a universal layer.
The Current Limit — Language as the Bottleneck
All this layering has shifted the nature of the limit in human-computer interaction. In the ENIAC era, the bottleneck was the machine's ability to accept and execute instructions at all. Today, the bottleneck is human language itself.
Speech averages about 150 words per minute, typing around 40, and Morse code around 20 for skilled operators. Even at the upper bound of speech, we are slow compared to the microsecond-scale operations of modern processors. An AI can generate a response in milliseconds but must wait for us to finish formulating the request.
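The scale of the mismatch is easy to estimate. The word length and CPU rate below are rough assumptions for a back-of-the-envelope calculation, not measurements.

```python
# Back-of-the-envelope comparison of human output rate vs. machine speed.
# Assumptions: ~5 characters per word plus a space, one byte per character,
# and a ~3 GHz processor retiring roughly one operation per cycle.
words_per_minute = 150             # fast conversational speech
chars_per_word = 5                 # rough average, plus 1 for the space
bits_per_char = 8                  # one byte per character

human_bits_per_second = words_per_minute * (chars_per_word + 1) * bits_per_char / 60
cpu_ops_per_second = 3e9           # assumed, for illustration

print(f"speech: ~{human_bits_per_second:.0f} bits/s")
print(f"CPU operations per bit of speech: {cpu_ops_per_second / human_bits_per_second:,.0f}")
```

On these assumptions, speech delivers on the order of a hundred bits per second while the processor executes billions of operations in the same interval — a gap of roughly seven orders of magnitude.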
The growing distance to compute has freed us from knowing the machine's inner workings, but it has concentrated all friction into the input/output channel. This is now the frontier. To expand further, we must develop interfaces that bypass the linguistic bottleneck — and, as in every previous leap, that expansion will depend not only on compute growth and network capacity but also on the energy systems capable of powering it.
Part 2 will explore potential trajectories of the Distance to Compute.