Future past: computers

Today, computers are ubiquitous. They’re so common that many people simply can’t function without them, and they’ve been around long enough that most can’t remember a time when they didn’t have them. (I straddle the boundary on this one. I can remember my early childhood, when I didn’t know about computers—except for game consoles, which don’t really count—but those days are very hazy.)

If the steam engine was the invention that began the Industrial Revolution, then the programmable, multi-purpose device I’m using to write this post started the Information Revolution. Because that’s really what it is. That’s the era we’re living in.

But did it have to turn out that way? Is there a way to have computers (of any sort) before the 1940s? Did we have to wait for Turing and the like? Or is there a way for an author to build a plausible timeline that gives us the defining invention of our day in a day long past? Let’s see what we can see.

Intro

Defining exactly what we mean by “computer” is a difficult task fraught with peril, so I’ll keep it simple. For the purposes of this post, a computer is an automated, programmable machine that can calculate, tabulate, or otherwise process arbitrary data. It doesn’t have to have a keyboard, a CPU, or an operating system. You just have to be able to tell it what to do and know that it will indeed do what you ask.

By that definition, of course, the first true computers came about around World War II. At first, they were mostly used for military and government purposes, later filtering down into education, commerce, and the public. Now, after a lifetime, we have them everywhere, to the point where some people think they have too much influence over our daily lives. That’s evolution, but the invention of the first computers was a revolution.

Theory

We think of computers as electronic, digital, binary. In a more abstract sense, though, a computer is nothing more than a machine. A very, very complex machine, to be sure, but a machine nonetheless. Its purpose is to execute a series of steps, in the manner of a mathematical algorithm, on a set of input data. The result is then output to the user, but the exact means is not important. Today, it’s 3D graphics and cutesy animations. Twenty years ago, it was more likely to be a string of text in a terminal window, while the generation before that might have settled for a printout or paper tape. In all these cases, the end result is the same: the computer operates on your input to give you output. That’s all there is to it.

The key to making computers, well, compute is their programmability. Without a way to give the machine a new set of instructions to follow, you have a single-purpose device. Those are nice, and they can be quite useful (think of, for example, an ASIC cryptocurrency miner: it can’t do anything else, but its one function can more than pay for itself), but they lack the necessary ingredient to take computing to the next level. They can’t expand to fill new roles, new niches.

How a computer gets its programs, how they’re created, and what operations are available are all implementation details, as they say. Old code might be written in Fortran, stored on ancient reel-to-reel tape. The newest JavaScript framework might exist only as bits stored in the nebulous “cloud”. But they, and everything in between, have one thing in common: the machines that run them are Turing complete. They can all perform the small set of actions proven to be the universal building blocks of computing. (You can even find simulated computers that offer only a single instruction, yet programs built from that one instruction can compute anything you can think of.)

Basically, the minimum requirements for Turing completeness are changing values in memory and branching. Obviously, these imply actually having memory (or other storage) and a means of diverting the flow of execution. Again, implementation details. As long as you can do those, you can do just about anything.
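
To make that concrete, here’s a toy sketch, in Python and purely for illustration, of one of those single-instruction machines: a “subleq” computer, whose only operation is “subtract, then branch if the result isn’t positive.” The halt convention and the little countdown program are choices made for this example, not anything historical; the point is just that one instruction which changes memory and branches is enough to build loops, and from loops, everything else.

    # A one-instruction ("subleq") machine. Memory holds both program and data.
    # Each instruction is three cells (a, b, c), meaning:
    #   mem[b] -= mem[a]; if the result is <= 0, jump to c, otherwise fall through.
    # That single operation covers both requirements: it changes memory and it branches.

    def run_subleq(mem, ip=0):
        while 0 <= ip < len(mem):
            a, b, c = mem[ip], mem[ip + 1], mem[ip + 2]
            if a < 0:                 # negative address used as a halt marker in this sketch
                break
            mem[b] -= mem[a]
            ip = c if mem[b] <= 0 else ip + 3

    # Example program: count cell 9 down to zero by repeatedly subtracting
    # the constant 1 stored in cell 10. Cells 0-8 are code, 9-11 are data.
    program = [
        10, 9, 6,     # 0: mem[9] -= mem[10]; once it reaches zero, jump to the halt at 6
        11, 11, 0,    # 3: mem[11] -= mem[11] is always <= 0, so this always jumps back to 0
        -1, 0, 0,     # 6: halt
        3,            # cell 9: the value to count down
        1,            # cell 10: the constant 1
        0,            # cell 11: scratch used for the unconditional jump
    ]
    run_subleq(program)
    print(program[9])    # prints 0

An interpreter like this is doing exactly what a roomful of gears or valves would have to do: keep track of where it is, change a value, and decide where to go next.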

Practice

It should come as no surprise that Alan Turing was the one who worked all that out. Quite a few others made their mark on computing as well. George Boole (1815-64) gave us the fundamentals of computer logic (which is why we call true/false values Boolean). Charles Babbage (1791-1871) designed the precursors to programmable computers, while Ada Lovelace (1815-52) used those designs to create what is considered the first program. The Jacquard loom, named after Joseph Marie Jacquard (1752-1834), was a practical display of programming that influenced the first computers. And the list goes on.

Earlier precursors aren’t hard to find. Jacquard’s loom was a refinement of older machines that attempted to automate weaving by feeding the loom a pattern that told it how to move the threads in a predetermined way. Pascal and Leibniz worked on mechanical calculators. Napier and Oughtred made what might be termed analog computing devices. The oldest object we can call a computer by even the loosest definition, however, dates back much farther, all the way to ancient Greece: the Antikythera mechanism.

So computers aren’t necessarily a product of the modern age. Maybe digital electronics are, because transistors and integrated circuits require serious precision and fine tooling. But you don’t need an ENIAC to change the world, much less a Mac. Something on the level of Babbage’s machines (had he ever finished them, which he was famously disinclined to do) could trigger an earlier Information Age. Even nothing more than a fast way to multiply, divide, and find square roots—the kind of thing a pocket calculator can do instantly—would advance mathematics, and thus most of the sciences.
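
As a taste of what even modest machinery buys you: Heron’s method for square roots, known since antiquity, is nothing but repeated averaging, exactly the sort of rote arithmetic a mechanical calculator could grind through far faster and more reliably than a person. A quick sketch, with the starting guess and the fixed number of passes chosen arbitrarily for the example:

    # Heron's (a.k.a. Newton's) method: repeatedly average a guess with n divided
    # by that guess. Each pass roughly doubles the number of correct digits, so a
    # machine that can add, divide, and halve gets a good square root in a few steps.

    def heron_sqrt(n, guess=1.0, passes=6):
        for _ in range(passes):
            guess = (guess + n / guess) / 2.0
        return guess

    print(heron_sqrt(10))    # about 3.16227766, the square root of 10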

But can it be done? Well, maybe. Programmable automatons date back about a thousand years. True computing machines probably need at least Renaissance-era tech, mostly for gearing and the like. To put it simply: if you can make a clock that keeps good time, you’ve got all you need to make a rudimentary computer. On the other hand, something like a “hydraulic” computer (using flowing water instead of electricity or clockwork) might be doable even earlier, assuming you can find a way to program it.

For something Turing complete, rather than a custom-built analog solver like the Antikythera mechanism, things get a bit harder. Not impossible, mind you, but very difficult. A linear set of steps is fairly easy, but when you start adding branches and loops (a loop is nothing more than a branch that goes back to an earlier location), you need memory, not to mention the infrastructure to manage it all, like an instruction pointer.
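
To see what “a branch that goes back to an earlier location” means in practice, here’s the same tiny computation written twice: once as a structured loop, and once as a numbered list of steps driven by an explicit instruction pointer. The three-operation “machine” is invented purely for this illustration.

    # A familiar structured loop: add up the numbers 1 through 5.
    total, i = 0, 1
    while i <= 5:
        total += i
        i += 1

    # The same computation as raw steps plus an instruction pointer (ip).
    # The "loop" is nothing more than step 2 sending the machine back to step 0.
    memory = {"total": 0, "i": 1}
    steps = [
        ("add", "total", "i"),        # 0: total += i
        ("inc", "i"),                 # 1: i += 1
        ("branch_if_le", "i", 5, 0),  # 2: if i <= 5, jump back to step 0
    ]
    ip = 0
    while ip < len(steps):
        op = steps[ip]
        if op[0] == "add":
            memory[op[1]] += memory[op[2]]
            ip += 1
        elif op[0] == "inc":
            memory[op[1]] += 1
            ip += 1
        elif op[0] == "branch_if_le":
            ip = op[3] if memory[op[1]] <= op[2] else ip + 1

    print(total, memory["total"])    # both print 15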

If you want digital computers, or anything that does any sort of work in parallel, then you’ll probably also need a clock source for synchronization. Thus, you may have another hard “gate” on the timeline, because water clocks and hourglasses probably won’t cut it. Again, gears are the bare minimum.

Output may be able to go on the same medium as input. If it can, great! You can do a lot more that way, since you can feed the result of one program into another, a bit like what functional programmers call composition. That’s also the way to bring about compilers and other programs whose output is itself a new set of instructions. Of course, this requires a medium that machines can both read and write with relative ease. Punched cards and paper tape are the early historical choices there, with magnetic tape, disks, and electronic memory all coming much later.
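
That “output of one program becomes input to the next” idea is just composition, and it’s easy to show in miniature. The two little “programs” below are stand-ins invented for the example; picture each as a deck of punched cards whose output deck is carried straight over to the next machine.

    # Two tiny "programs" that share a medium (a plain list of numbers), so the
    # output of one can be fed directly into the other.

    def tabulate_squares(numbers):
        return [n * n for n in numbers]

    def total_table(table):
        return sum(table)

    def compose(second, first):
        """Chain two programs: run the first, then hand its output to the second."""
        return lambda data: second(first(data))

    sum_of_squares = compose(total_table, tabulate_squares)
    print(sum_of_squares([1, 2, 3, 4]))    # prints 30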

Thus, creating the tools looks to be the hardest part about bringing computation into the past. And it really is. The leaps of logic that Turing and Boole made were not special, not miraculous. There’s nothing saying an earlier mathematician couldn’t discover the same foundations of computer science. They’d have to have the need, that’s all. Well, the need and the framework. Algebra is a necessity, for instance, and you’d also want number theory, set theory, and a few others.

All in all, computers are a modern invention, but they’re a modern invention with enough precursors that we could plausibly shift their creation back in time a couple of centuries without stretching believability. You won’t get an iPhone in the Enlightenment, but the most basic tasks of computation are just barely possible in 1800. Or, for that matter, 1400. Even if using a computer for fun takes until our day, the more serious efforts it speeds up might be worth the comparatively massive cost in engineering and research.

But only if they had a reason to make the things in the first place. We had World War II. An alt-history could do the same with, say, the Thirty Years’ War or the American Revolution. Necessity is the mother of invention, or so it’s said; what could make someone need a computer? That’s a question best left to the creator of a setting, which is you.
