Lands of the lost

Recently, I finished reading Fingerprints of the Gods. I picked it up because I found the premise interesting, and because the mainstream media made such a big deal about author Graham Hancock getting a Netflix miniseries to showcase his unorthodox theories. I went into the book hoping there would be something tangible about those theories. Unfortunately, there isn’t.

Time of ice

The basic outline of the book is this: What if an advanced civilization existed before all known historical ones, and imparted some of its wisdom to those later civilizations as a way of outliving its own demise?

Put like that, it’s an intriguing proposition, one that has cropped up in many places over the past three decades. The Stargate franchise—one of my favorites, I must admit—is based largely on Hancock’s ideas, along with those of noted crackpots like Erich von Daniken. Chrono Trigger, widely regarded as one of the greatest video games of all time, uses the concept as a major plot point. Plenty of novels, especially in fantasy genres, suppose an ancient "builder" race or culture whose fingerprints are left within the world in some fashion.

It was this last point that piqued my interest, because my Otherworld series revolves around exactly this. I even unknowingly used some of Hancock’s hypotheses for it. The timing of my ancients leaving Earth for their second world matches that of his ancients’ final collapse. Archaeoastronomy as a path to their knowledge shows up in my books as well. Even using the prehistoric Mesoamericans as the catalyst wasn’t an original idea of mine; in my case, however, I did it so I wouldn’t have to deal with the logistics of the characters traveling to another continent.

Some of the questions Hancock asks are ones that need to be asked. It’s clear that ancient historical cultures the world over have some common themes which arise in their earliest mythology. Note, though, that these aren’t the specific ones he lists. The flood of Noah and Gilgamesh is entirely different from those of cultures beyond the Fertile Crescent and Asia Minor, for example, because it most likely stems from oral traditions of the breaking of the Bosporus, which led to a massive expansion of the Black Sea. Celts, to take one instance, would instead have a flood myth pointing to the flooding of what is now the Dogger Bank; peoples of New Guinea might have one relating to the inundation of the Sunda region; American Indian myths may have preserved echoes of the flooding of Beringia; and so on.

While the details Hancock tries to use don’t always work, the broad strokes of his supposition have merit. There are definitely astronomical alignments in many prehistoric structures, and some of them are downright anachronistic. Too many indigenous American cultures have myths about people who most definitely are not Amerind. (And now I’m wondering if Kennewick Man was a half-breed. I may need to incorporate that into a book…)

The possibility can’t yet be ruled out that cultures with technology more advanced than their direct successors did exist in the past. We know that Dark Ages happen, after all. We have historical records of two in the West (the familiar medieval Dark Age beginning c. 500 AD and the Greek Dark Age that started c. 1200 BC), and we’re very likely on the threshold of what might one day be termed the Progressive Dark Age.

With the cataclysmic end of the Ice Age and the catastrophic Younger Dryas cold snap, which now seems likely to have been caused by at least one asteroid impact, there’s a very good impetus for the "breaking the chain" effect that leads to a Dark Age, one that would erase most traces of such an advanced civilization.

Habeas corpus

Of course, the biggest problem with such a theory is the lack of evidence. Even worse, Hancock, like most unorthodox scholars, argues from an "absence of evidence is not evidence of absence" line of thought. Which is fine, but it’s not science. Science is about making testable and falsifiable predictions about the world. It’s not simply throwing out what-ifs and demanding that the establishment debunk them.

The onus is on those who make alternative theories, and this is where Hancock fails miserably. Rarely in the book does he offer any hard evidence in favor of his conjecture. Instead, he most often uses the "beyond the scope of this book" cop-out (to give him credit, that does make him exactly like any orthodox academic) or takes a disputed data point as proof that, since the establishment can’t explain it, that must mean he’s right. It’s traditional crackpottery, and that’s unfortunate. I would’ve liked a better accounting of the actual evidence.

Probably the most disturbing aspect of the book is the author’s insistence on taking myths at face value. We know that mythology is absolutely false—the Greek gods don’t exist, for example—but that it can often hide clues to historical facts.

To me, one of the most interesting examples of this is also one of the most recent: the finding in 2020 of evidence pointing to an impact or airburst event near the shore of the Dead Sea sometime around 1600 BC. This event apparently not only destroyed a town so violently that human flesh was vaporized, but also scattered salt from the sea over such a wide region that it literally salted the earth. And the only reference, oral or written, to this disaster is as a metaphor, in the Jewish fable of Sodom and Gomorrah.

Myths, then, can be useful to historians and archaeologists, but they’re certainly not a primary source. The nameless town on the shore of the Dead Sea wasn’t wiped out by a capricious deity’s skewed sense of justice, but by a natural, if rare, disaster. Similarly, references in Egyptian texts to gods who ruled as kings don’t literally mean that those gods existed. Because they didn’t.

In the same vein, Hancock focuses too much on numerological coincidences, assuming that they must have some deeper meaning. But the simple fact is that many cultures could independently hit upon the idea of dividing the sky into 360 degrees. It’s a highly composite number, after all, and close enough to the number of days in the year that it wouldn’t be a huge leap. That the timeworn faces of the Giza pyramids currently stand in certain geometric ratios doesn’t mean that they always did, that they were intended to, or that they were meant as a message from ten thousand years ago.

Again, the burden of proof falls on the one making the more outlandish claims. Most importantly, if there did exist an ancient civilization with enough scientific and technological advancement to pose as gods around the world, there should be evidence of their existence. Direct, physical evidence. An industrial civilization puts out tons upon tons of waste. It requires natural resources, including some that are finite. The more people who supposedly lived in this Quaternary Atlantis, the more likely it is that we would have stumbled upon their remains by now.

Even more than this, the scope of Hancock’s conjecture is absurdly small. He draws most of his conclusions from three data points: Egypt, Peru, and Central America. Really, that’s more like two and a half, because there were prehistoric connections between the two halves of the Americas—potatoes and corn had to travel somehow. Rarely does he point to India, where Dravidians mangled the myths of the Yamnaya into the Vedas. China, which became literate around the same time as Egypt, is almost never mentioned. Did the ancients just not bother with them? What about Stonehenge, which is at least as impressive, in terms of the necessary engineering, as the Pyramids?

Conclusion

I liked the book. Don’t get me wrong on that. It had some thought-provoking moments, and it makes for good novel fodder. I’ll definitely have to make a mention of Viracocha in an Otherworld story at some point.

As a scientific endeavor, or even as an introduction to an unorthodox theory, it’s almost useless. There are too many questions, too few answers, and too much moralizing. There’s also a strain of romanticism, which is common to a lot of people who study archaeological findings but aren’t themselves archaeologists. At many points, Hancock denigrates modern society while upholding his supposed lost civilization as a Golden Age of humanity. You know, exactly like Plato and Francis Bacon did.

That said, it’s worth a read if only to see what not to do. In a time when real science is under attack, and pseudoscience is given preferential treatment in society, government, and media, it’s important to know that asking questions is the first step. Finding evidence to support your assertions is the next, and it’s necessary if you want to advance our collective knowledge.

The last Dark Age

In the title of this post, “last” means “previous” rather than “final”, for I truly believe we are on the precipice of a new Dark Age. With that in mind, it’s not that bad an idea to look back at the one that came before.

Defining the moment

A lot of modern academics don’t even like talking about the Dark Ages. They prefer the bland descriptor “Early Middle Ages” instead. But that line of thinking is faulty in multiple respects.

First, the usual reasoning for calling the Dark Ages something else is that the “darkness” of the times was a localized concept. Outside of Europe, it wasn’t all that dark. Islam, for instance, had a bit of a renaissance around the same time, and China barely noticed the troubles of the West at all.

However, this same logic should dictate that the Middle Ages are no less localized. After all, the term comes from post-medieval sources who placed that time between their modern era and the classical period of the Greeks and Romans. Similarly, is referring to the Iron Age (which began around the time of the Greek Dark Ages, starting in 1177 BC) any less patronizing? Iron tools were never developed by natives in the Americas or Australia; what was the Iron Age in Anatolia would have been nothing more than the later Stone Age in Mesoamerica. The Middle Ages aren’t “middle” at all, except through the same lens that gives us the Dark Ages.

The second reason it’s an error to conflate the Dark Ages with the Middle Ages is a matter of character, and it’s the subject of this post.

Beginning and ending

Before we can get to that, though, we need to define the limits of the period. The beginning is fairly easy, because Europe’s decline can be traced directly to the fall of Rome in 476 AD. This event was the culmination of decades of barbarian activity, with the entire empire facing threats from waves of migrant Vandals, Goths, Huns, and others. Those peoples slowly encroached upon Roman territory, nipping away at the borders, until they were able to reach the heart of the empire itself. The last western emperor, Romulus Augustulus, was deposed and fled into exile. Or was sent there. Conflicting tales exist, but the gist is clear: Europe no longer bowed to Rome.

Things didn’t change overnight, of course. The barbarian kings often paid homage to the Byzantine emperor who continued to style himself Roman all the way to the 15th century. For a time, they considered themselves successors to the western throne, or at least to the provinces it had once controlled.

No, the Dark Ages only truly began once continuity was lost. That was a slow breakdown over years, decades, generations. The barbarian hordes lacked Roman culture. Without an imperial presence in Europe, that culture began to disappear, fading into memory as those who continued to consider themselves Roman aged and died. Later in the post, we’ll look at what that entailed.

As for when the Dark Ages ended, that’s a tougher question. Some might point to the coronation of Charlemagne as Holy Roman Emperor in 800. Indeed, this did rejuvenate Europe for a time, bringing about the Carolingian Renaissance, and the 9th century gave us a few technological advances; Cathedral, Forge, and Waterwheel, by Joseph and Frances Gies, details some of these, including the three in the book’s title.

Another date might be 927, marking the defeat of the Vikings by Æthelstan, first King of England. This was significant from both a political and religious standpoint, as England became a unified Christian kingdom for the first time in its history; Spain, for instance, wouldn’t manage that for nearly 600 years. And Æthelstan’s victory over the Danes did begin to bring about the changes that define the Middle Ages, such as the feudal system.

Still others would argue that the Dark Ages didn’t really end until William the Conqueror was crowned in 1066. By this point, all the pieces of the Middle Ages were in place, from the manorial society to the schism of Catholic and Orthodox. The Reconquista had begun in Spain, Turks were overrunning Byzantine lands, and the Crusades were about to begin. Clearly, the world had moved on from the Fall of Rome.

Continuum

Personally, I think that’s too late, while the Charlemagne date of 800 seems a bit too early. But it may be that there is no single date we can point to and say, “The Dark Ages ended here.” Rather, there’s a continuum. The period ended at different times in different places throughout Europe, as connections to the past were rediscovered, and connections among those in the present were strengthened.

When the period began, the results were devastating. As Roman rule fell, so too did Roman institutions. The roads, so famous that we enshrine them in aphorisms, began to succumb to the ravages of time. Likewise for the bath, the forum, the legal framework, and the educational system.

The replacements weren’t always up to par, either. One of the reasons the Dark Ages are, well, dark is because of the relative lack of written works from the time. We have tons of Roman-era books: Caesar’s commentaries on the Gallic Wars, Ovid’s masterpieces, the Stoic philosophy of Marcus Aurelius, and even the New Testament of the Bible all come from the Roman world. By contrast, the best-known writings to come from the period 476-1066 are histories like the Anglo-Saxon Chronicle, religious texts such as those by Bede, and Beowulf.

That’s not to say that people in the Dark Ages were stupid. Far from it. Instead, they had different priorities. They lived in a different world, one that didn’t have much opportunity for philosophy. Even when it did, that was almost exclusively the domain of the Church, one of the few institutions that retained some measure of continuity with the previous age.

With the breakdown of Roman society came a change in the way people saw themselves. While the barbarians did become civilized, they didn’t become Romanized. Gone were the trappings of the republic and the scholastic zeal we associate with Late Antiquity. Dark Age society focused more on tribal identity, family honor, and individual heroism. The world, in a sense, shrank for the average person. Some of those changes came from the pagan background of the Gauls, Goths, and others, and those peoples retained the old ways even after converting to Christianity.

The unifying power of the Church may have helped usher in the end of the Dark Ages, in that it created the backdrop for the centralization of secular power, turning petty kingdoms into nation-states. Seven English kingdoms became a single England. Vast swathes of Europe fell under the rule of the emperor in Aachen. And this could be seen as lifting the continent out of the mire. A powerful nation can build bigger than a small tribe; the grand cathedrals begun in the ninth and tenth centuries are evidence of that.

But that didn’t change the fact that so much had been lost. In some places, particularly rural Britain, standards of living (which weren’t all that high in Roman times, to be fair) dropped to a level not seen since the Bronze Age, some 2000 years before. With Roman construction and sanitation forgotten, life expectancies fell, as did urban population. This was the Dark Ages in a nutshell. When Hobbes describes early man’s life as “nasty, brutish, and short,” he’s also talking about post-Roman, pre-medieval Europe. A life without even the most basic trappings of civilization, with little hope for advancement except through heroic deeds, with the specter of death lurking around every corner…that’s not much of a life at all.

Light returns

The Dark Ages did, however, come to an end. As I said above, the ninth century brought about the Carolingian Renaissance, a small uplifting. Much later came the 12th-century version, which brought about the High Middle Ages. Bits of darkness lingered all the way to 1453, when the last vestige of ancient Rome fell to the Ottoman Empire.

Odoacer’s overthrow of the western empire in 476 brought about, in a sense, the end of the world. When Mehmed II took the other Roman capital, Constantinople, a millennium later, the effect was quite different. Instead of a new Dark Age, the end of the Byzantines fanned the flames of the Renaissance. The true Renaissance, the one which deserves the name. By then, so much of classical times had been forgotten by Europe at large, but it was now rediscovered, the bonds reforged.

Dark Ages end when light shines through. Or when enough people decide that they are destined for greater things. In Europe, the three centuries after 476 were a period of stasis, even regression. What little of our modern media touches on this period tends to focus on heroes real or invented: Vikings, The Last Kingdom, and so on. That’s understandable, as the life of the ordinary Saxon in Winchester, the Frank in Paris, or the Lombard in Pavia is relatively dull and uninspiring. The ones whose names we remember are those who rose above that. Heroes exist in every age, no matter what the society around them looks like.

Darkness, in this sense, can be defeated. This is a darkness of ignorance, of barbarism, of tribal infighting. Knowledge is the light that washes it away. To this day, we still can’t recreate some of the progress of Antiquity: we don’t know precisely how the Romans made their concrete, the composition of Greek fire, or the purpose of the Antikythera Mechanism.

Those secrets were lost because continuity was lost. The passing of culture from one generation to the next stopped, breaking a chain that had endured for centuries. With our interconnected world of today, it’s easy to think that can’t happen anymore. After all, we can call up an entire library on our phones. But what happens when that chain is sabotaged? What happens when culture and history are intentionally altered or buried? The result would be a new Dark Age.

Culture and history forgotten. Waves of migrants. Cities sacked. The loss of classical education and scholasticism. Sounds awfully familiar, doesn’t it?

The price of protest

Tin soldiers and Nixon coming…four dead in Ohio

I have written a lot in the past few years to commemorate the 50th anniversary of various spaceflight milestones: the Apollo 8 lunar orbit, Apollo 11’s landing in 1969, and so on. I do that because I love the American space program, of course, but also because I believe its accomplishments rank among the greatest in human history. They are certainly shining lights in the 20th century.

But we must also remember the darker days, lest, to paraphrase Santayana, we be doomed to repeat their mistakes.

This day 50 years ago, on May 4, 1970, four students at Kent State University were shot and killed by National Guard soldiers during a protest against the Vietnam War. Nine others were injured, a college campus became a battlefield, and the entire nation lost whatever vestiges of innocence it still had after years of needless death in the jungles of Southeast Asia.

I was not alive for these events. They were 13 years before I was born; those who lost their lives were over a decade older than my parents! Yet I have seen the documentaries. I’ve read the stories. That is how history survives, through the telling and retelling of events beyond our own experience. In the modern era, we have photographs, television recordings, and other resources beyond mere—and fallible—human memory.

For Kent State, I’ve watched the videos from the tragedy itself, and few things have ever left me more disgusted, more saddened, and more…angry. It boggles my mind how anyone, even soldiers trained in the art of war and encouraged to look at their enemy as less than human, could think this was a good thing, a just thing. Yet they did not hold their fire. If they stopped to think, “These are young Americans, people just like me, and they’re doing what’s right,” then it never showed in their actions.

Worse, however, is the public perception that followed. In the wake of the massacre, polls showed that a vast majority of people in this country supported the soldiers. Yes. About two-thirds of those surveyed said they felt it was justified to use lethal force against peaceful protestors who were defending themselves.

Let’s break that down, shall we? First, protests are a right. The “right of the people peaceably to assemble” is guaranteed in the First Amendment; it doesn’t get the attention of speech, religion, and the press, but it’s right there alongside them. And remember that the Bill of Rights, as I’ve repeatedly stated in my writings, is not a list of those rights the government has granted its citizenry. Rather, it’s an incomplete enumeration of rights we are born with—“endowed by our Creator”, in Jefferson’s terms—that cannot be taken away by a government without resorting to tyranny.

Some may argue that the Kent State protests were not peaceful. After all, the iconic video is of a student throwing a canister of tear gas at the police officers called in to maintain order, right? But that argument falls flat when you see that the tear gas came from those same cops. It was fired to disperse the crowd. The protestors didn’t like that, so they risked physical danger (not only the chance of getting shot, but even just burns from the canisters themselves) to clear the space they had claimed as their own.

And finally, the notion that killing students was the only way to end the protest would be laughable if it weren’t so sad. They were unarmed. Deescalation should always be the first option. Whatever you think about the protest itself, whether you feel it was wholly justified or dangerously un-American, you cannot convince me that shooting live rounds into a crowd is an acceptable answer. The only way, in my opinion, you could convince yourself is if you accept the premise that these students were enemy collaborators, and the National Guard’s response was legitimate under the rules of engagement.

But that presumes a dangerous proposition: that American citizens opposing a government action they feel is morally wrong constitutes a threat to the nation. And here we see that those lessons learned in Kent State 50 years ago have been forgotten since.


Today, we don’t have the Vietnam War looming over us. The eternal morass of Iraq and Afghanistan, despite taking twice as much time (and counting), has long since lost the furious reactions it once inspired. Trump’s presidency was worth a few marches, the Occupy and Tea Party movements were quashed or commandeered, and even the Great Recession didn’t prompt much in the way of social unrest.

But a virus did.

Rather, the government response to the Wuhan virus, whether on the federal, state, or local level, has, in some places, been enough to motivate protests. The draconian lockdown orders in Michigan, California, North Carolina, and elsewhere, unfounded in science and blatantly unconstitutional, have lit a fire in those most at risk from the continued economic and social devastation. Thousands marching, cars causing gridlock for miles, and beaches flooded with people who don’t want to hurt anyone, but just yearn to breathe free. It’s a stirring sight, a true show of patriotism and bravery.

Yet too many people see it as something else. They believe the protests dangerous. The governors know what’s best for us, they argue. They have experts backing them up. Stay at home, they say. It’s safe there. Never mind that it isn’t. As we now know through numerous scientific studies, the Wuhan virus spreads most easily in isolated environments and close quarters. It’s most deadly for the elderly, and some two out of every three deaths (even overcounting per federal guidelines) come from nursing homes and similar places. For the vast majority of people under the age of 60, it is, as the CDC stated on May 1, barely more of a risk than “a recent severe flu season” such as 2017-18. Compared to earlier pandemic flu seasons (e.g., 1957, 1969), it’s not that bad, especially to children.

Of course, people of all sorts are dying from it. That much is true, and my heart cries out for every last one of them. Stopping our lives, ending our livelihoods, is not the answer. People, otherwise healthy people who aren’t senior citizens, die from the flu every year. My cousin did in 2014, and he was 35. That’s the main reason I feared for my life when I was sick back in December; looking back, the symptoms my brother and I showed match better with the Wuhan virus than with the flu, and each week brings new evidence pointing to the conclusion that it was in the US far earlier than we were told. If that is what we had, it didn’t kill us, just like it won’t kill the overwhelming majority of people infected.

Epidemiology isn’t my goal here, however. I merely wanted to remind anyone reading this that the virus, while indeed a serious threat, is not the apocalypse hyped by the media. Common sense, good hygiene, and early medical treatment will help in most cases, and that’s no different from the flu, or the pneumonia that almost put me in the ICU in 2000, or even the common cold.

Now that all indications are showing us on the downslope of the curve, I’d rather look to the coming recovery effort, and the people—the patriots—who have started that conversation in the most public fashion. The Reopen America protestors are doing exactly what Americans should do when they perceive the threat of government tyranny: take to the streets and let your voice be heard. Civil disobedience is alive and well, and that is a good thing. It’s an American thing.

The movement is unpopular, alas. Reopen protestors are mocked and derided. Those who report on them in a favorable light are called out. A quick perusal of Twitter, for instance, will turn up some truly awful behavior. Suggestions that anyone protesting should be required to waive any right to medical treatment. Naked threats of calling Child Protective Services on parents who let their kids play outside. Worst of all, the holier-than-thou smugness of those who would willingly lock themselves away for months, if not years, over something with a 99.8% survival rate, solely on the basis of an appeal to authority.

A past generation would call such people Tories; in modern parlance, they are Karens. I call them cowards. Not because they fear the virus—I did until I learned more about it, and I accept that some people probably do need to be quarantined, and that some commonsense mitigation measures are necessary for a short time.

No, these people are cowards because they have sacrificed their autonomy, their rationality, and their liberty on an altar of fear, offerings to their only god: government. It’s one thing to be risk-averse. We beat worse odds than 500-to-1 all the time, but there’s always a chance. To live your life paralyzed by fear, unable to enjoy it without worrying about all the things that might kill you, that’s a terrible way to live. I know. I’ve been there. But never in my darkest moments did I consider extending my misery to the 320 million other people in this country. That is true cowardice, to be so afraid of the future that you would take it from everyone else.

Protest is a powerful weapon. The Vietnam War proved that beyond a shadow of a doubt. Fifty years ago today, four Ohio students paid the ultimate price for wielding that weapon. But they died believing what they did was right. They died free, because they died in a public expression of the freedom each of us is gifted the day we’re born.

Better that than dying alone in your safe space.

Future past: computers

Today, computers are ubiquitous. They’re so common that many people simply can’t function without them, and they’ve been around long enough that most can’t remember a time when they didn’t have them. (I straddle the boundary on this one. I can remember my early childhood, when I didn’t know about computers—except for game consoles, which don’t really count—but those days are very hazy.)

If the steam engine was the invention that began the Industrial Revolution, then the programmable, multi-purpose device I’m using to write this post started the Information Revolution. Because that’s really what it is. That’s the era we’re living in.

But did it have to turn out that way? Is there a way to have computers (of any sort) before the 1940s? Did we have to wait for Turing and the like? Or is there a way for an author to build a plausible timeline that gives us the defining invention of our day in a day long past? Let’s see what we can see.

Intro

Defining exactly what we mean by “computer” is a difficult task fraught with peril, so I’ll keep it simple. For the purposes of this post, a computer is an automated, programmable machine that can calculate, tabulate, or otherwise process arbitrary data. It doesn’t have to have a keyboard, a CPU, or an operating system. You just have to be able to tell it what to do and know that it will indeed do what you ask.

By that definition, of course, the first true computers came about around World War II. At first, they were mostly used for military and government purposes, later filtering down into education, commerce, and the public. Now, after a lifetime, we have them everywhere, to the point where some people think they have too much influence over our daily lives. That’s evolution, but the invention of the first computers was a revolution.

Theory

We think of computers as electronic, digital, binary. In a more abstract sense, though, a computer is nothing more than a machine. A very, very complex machine, to be sure, but a machine nonetheless. Its purpose is to execute a series of steps, in the manner of a mathematical algorithm, on a set of input data. The result is then output to the user, but the exact means is not important. Today, it’s 3D graphics and cutesy animations. Twenty years ago, it was more likely to be a string of text in a terminal window, while the generation before that might have settled for a printout or paper tape. In all these cases, the end result is the same: the computer operates on your input to give you output. That’s all there is to it.

The key to making computers, well, compute is their programmability. Without a way to give the machine a new set of instructions to follow, you have a single-purpose device. Those are nice, and they can be quite useful (think of, for example, an ASIC cryptocurrency miner: it can’t do anything else, but its one function can more than pay for itself), but they lack the necessary ingredient to take computing to the next level. They can’t expand to fill new roles, new niches.

How a computer gets its programs, how they’re created, and what operations are available are all implementation details, as they say. Old code might be written in Fortran, stored on ancient reel-to-reel tape. The newest JavaScript framework might exist only as bits stored in the nebulous “cloud”. But they, as well as everything in between, have one thing in common: they’re Turing complete. They can all perform a specific set of actions proven to be the universal building blocks of computing. (You can find simulated computers that have only a single available instruction, but that instruction can construct anything you can think of.)

Basically, the minimum requirements for Turing completeness are changing values in memory and branching. Obviously, these imply actually having memory (or other storage) and a means of diverting the flow of execution. Again, implementation details. As long as you can do those, you can do just about anything.
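To make that concrete, here’s a minimal sketch of the kind of one-instruction machine mentioned above, written in Python purely for illustration. The memory layout, the halt convention, and the little addition program are my own assumptions, not any standard design; the point is just that a memory write plus a conditional branch are enough to compute with.

```python
# A toy "subleq" machine: one instruction (subtract, then branch if the result
# is zero or less), which is enough for Turing completeness given memory and
# a branch. The layout and halt rule here are my own illustrative choices.

def run_subleq(mem, pc=0):
    """Run three-operand subleq instructions until a branch goes negative."""
    while pc >= 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]                     # change a value in memory...
        pc = c if mem[b] <= 0 else pc + 3    # ...then branch on the result
    return mem

# Program: add cell 10 into cell 11, using cell 12 as scratch space.
# Each row is one instruction (a, b, c); branching to -1 halts the machine.
program = [
    10, 12, 3,    # scratch -= cell 10   (scratch is now -cell10)
    12, 11, 6,    # cell 11 -= scratch   (i.e., cell 11 += cell 10)
    12, 12, -1,   # zero the scratch cell, then halt
]
memory = program + [0] + [7, 5, 0]    # cells 10, 11, 12 start as 7, 5, 0
print(run_subleq(memory)[11])         # prints 12
```

Loops come along for free in a scheme like this: any instruction whose branch target points backward is one.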

Practice

It should come as no surprise that Alan Turing was the one who worked all that out. Quite a few others made their mark on computing, as well. George Boole (1815-64) gave us the fundamentals of computer logic (hence why we refer to true/false values as boolean). Charles Babbage (1791-1871) designed the precursors to programmable computers, while Ada Lovelace (1815-52) used those designs to create what is considered to be the first program. The Jacquard loom, named after Joseph Marie Jacquard (1752-1834), was a practical display of programming that influenced the first computers. And the list goes on.

Earlier precursors aren’t hard to find. Jacquard’s loom was a refinement of older machines that attempted to automate weaving by feeding a pattern into the loom that would allow it to move the threads in a predetermined way. Pascal and Leibniz worked on calculators. Napier and Oughtred made what might be termed analog computing devices. The oldest object that we can call a computer by even the loosest definition, however, dates back much farther, all the way to classical Greece: the Antikythera mechanism.

So computers aren’t necessarily a product of the modern age. Maybe digital electronics are, because transistors and integrated circuits require serious precision and fine tooling. But you don’t need an ENIAC to change the world, much less a Mac. Something on the level of Babbage’s machines (had he ever finished them, which he rarely cared to do) could trigger an earlier Information Age. Even nothing more than a fast way to multiply, divide, and find square roots—the kind of thing a pocket calculator can do instantly—would advance mathematics, and thus most of the sciences.

But can it be done? Well, maybe. Programmable automatons date back about a thousand years. True computing machines probably need at least Renaissance-era tech, mostly for gearing and the like. To put it simply: if you can make a clock that keeps good time, you’ve got all you need to make a rudimentary computer. On the other hand, something like a “hydraulic” computer (using water instead of electricity or mechanical power) might be doable even earlier, assuming you can find a way to program it.

For something Turing complete, rather than a custom-built analog solver like the Antikythera mechanism, things get a bit harder. Not impossible, mind you, but very difficult. A linear set of steps is fairly easy, but when you start adding in branches and loops (a loop is nothing more than a branch that goes back to an earlier location), you need to add in memory, not to mention all the infrastructure for it, like an instruction pointer.

If you want digital computers, or anything that does any sort of work in parallel, then you’ll probably also need a clock source for synchronization. Thus, you may have another hard “gate” on the timeline, because water clocks and hourglasses probably won’t cut it. Again, gears are the bare minimum.

Output may be able to go on the same medium as input. If it can, great! You can do a lot more that way, since you’d be able to feed the result of one program into another, a bit like what functional programmers call composition. That’s also the way to bring about compilers and other programs whose results are their own set of instructions. Of course, this requires a medium that can be both read and written with relative ease by machines. Punched cards and paper tape are the historical early choices there, with disks, memory, and magnetic tape all coming much later.

Thus, creating the tools looks to be the hardest part about bringing computation into the past. And it really is. The leaps of logic that Turing and Boole made were not special, not miraculous. There’s nothing saying an earlier mathematician couldn’t discover the same foundations of computer science. They’d have to have the need, that’s all. Well, the need and the framework. Algebra is a necessity, for instance, and you’d also want number theory, set theory, and a few others.

All in all, computers are a modern invention, but they’re a modern invention with enough precursors that we could plausibly shift their creation back in time a couple of centuries without stretching believability. You won’t get an iPhone in the Enlightenment, but the most basic tasks of computation are just barely possible in 1800. Or, for that matter, 1400. Even if using a computer for fun takes until our day, the more serious efforts it speeds up might be worth the comparatively massive cost in engineering and research.

But only if they had a reason to make the things in the first place. We had World War II. An alt-history could do the same with, say, the Thirty Years’ War or the American Revolution. Necessity is the mother of invention, so it’s said, so what could make someone need a computer? That’s a question best left to the creator of a setting, which is you.

Future past: steam

Let’s talk about steam. I don’t mean the malware installed on most gamers’ computers, but the real thing: hot, evaporated water. You may see it as just something given off by boiling stew or dying cars, but it’s so much more than that. For steam was the fluid that carried us into the Industrial Revolution.

And whenever we talk of the Industrial Revolution, it’s only natural to think about its timing. Did steam power really have to wait until the 18th century? Is there a way to push back its development by a hundred, or even a thousand, years? We can’t know for sure, but maybe we can make an educated guess or two.

Intro

Obviously, knowledge of steam itself dates back to the first time anybody ever cooked a pot of stew or boiled their day’s catch. Probably earlier than that, if you consider natural hot springs. However you take it, they didn’t have to wait around for a Renaissance and an Enlightenment. Steam itself is embarrassingly easy to make.

Steam is a gas; it’s the gaseous form of water, in the same way that ice is its solid form. Now, ice forms naturally if the temperature gets below 0°C (32°F), so quite a lot of places on Earth can find some way of getting to it. Steam, on the other hand, requires us to take water to its boiling point of 100°C (212°F) at sea level, slightly lower at altitude. Even the hottest parts of the world never get temperatures that high, so steam is, with a few exceptions like that hot spring I mentioned, purely artificial.

Cooking is the main way we come into contact with steam, now and in ages past. Modern times have added others, like radiators, but the general principle holds: steam is what we get when we boil water. Liquid turns to gas, and that’s where the fun begins.

Theory

The ideal gas law tells us how an ideal gas behaves. Now, that’s not entirely appropriate for gases in the real world, but it’s a good enough approximation most of the time. In algebraic form, it’s PV = nRT, and it’s the key to seeing why steam is so useful, so world-changing. Ignore R, because it’s a constant that doesn’t concern us here; the other four variables are where we get our interesting effects. In order: P is the pressure of a gas, V is its volume, n is how much of it there is (in moles), and T is its temperature.

You don’t need to know how to measure moles to see what happens. When we turn water into steam, we do so by raising its temperature. By the ideal gas law, increasing T must be balanced out by a proportional increase on the other side of the equation. We’ve got two choices there, and you’ve no doubt seen them both in action.

First, gases have a natural tendency to expand to fill their containers. That’s why smoke dissipates outdoors, and it’s why that steam rising from the pot gets everywhere. Thus, increasing V is the first choice in reaction to higher temperatures. But what if that’s not possible? What if the gas is trapped inside a solid vessel, one that won’t let it expand? Then it’s the backup option: pressure.

A trapped gas that is heated increases in pressure, and that is the power of steam. Think of a pressure cooker or a kettle, either of them placed on a hot stove. With nowhere to go, the steam builds and builds, until it finds relief one way or another. (With some gases, this can come in the more dramatic form of a rupture, but household appliances rarely get that far.)
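To put a rough number on it, here’s a quick back-of-the-envelope sketch (Python, purely illustrative) using the constant-volume case of the ideal gas law: P2 = P1 × T2/T1, with temperatures in kelvins. Treating the trapped steam as an ideal gas in a sealed, rigid vessel is an assumption on my part; a real kettle with liquid water still inside builds pressure much faster, since the liquid keeps boiling.

```python
# A rough estimate: treat trapped steam as an ideal gas in a sealed, rigid
# vessel (constant V and n), so P2 = P1 * (T2 / T1) with temperatures in
# kelvins. Real steam over boiling water climbs much faster than this, so
# take the result as a lower bound.

def sealed_pressure(p1_kpa, t1_c, t2_c):
    """Pressure after heating a fixed amount of gas in a rigid container."""
    t1_k, t2_k = t1_c + 273.15, t2_c + 273.15
    return p1_kpa * t2_k / t1_k

# Steam sealed at atmospheric pressure (about 101 kPa) and 100 C,
# then heated to 200 C:
print(round(sealed_pressure(101.3, 100.0, 200.0), 1))  # roughly 128 kPa
```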

As pressure is force per unit of area, and there’s not a lot of area in the spout of a teapot, the rising temperatures can cause a lot of force. Enough to scald, enough to push. Enough to…move?

Practice

That is the basis for steam power and, by extension, many of the methods of power generation we still use today. A lot of steam funneled through a small area produces a great amount of force. That force is then able to run a pump, a turbine, or whatever is needed, from boats to trains. (And even cars: some of the first automobiles were steam-powered.)

Steam made the Industrial Revolution possible. It made most of what came after possible, as well. And it gave birth to the retro fad of steampunk, because many people find the elaborate contraptions needed to haul superheated water vapor around to be aesthetically pleasing. Yet there is a problem. We’ve found steam-powered automata (e.g., toys, “magic” temple doors) from the Roman era, so what happened? Why did we need over 1,500 years to get from bot to Watt?

Unlike electricity, where there’s no obvious technological roadblock standing between Antiquity and advancement, steam power might legitimately be beyond classical civilizations. Generation of steam is easy—as I’ve said, that was done with the first cooking pot at the latest. And you don’t need an ideal gas law to observe the steam in your teapot shooting a cork out of the spout. From there, it’s not too far a leap to see how else that rather violent power can be utilized.

No, generating small amounts of steam is easy, and it’s clear that the Romans (and probably the Greeks, Chinese, and others) could do it. They could even use it, as the toys and temples show. So why didn’t they take that next giant leap?

The answer here may be a combination of factors. First is fuel. Large steam installations require metaphorical and literal tons of fuel. The Victorian era thrived on coal, as we know, but coal is a comparatively recent discovery. The Romans didn’t have it available. They could get by with charcoal, but you need a lot of that, and they had much better uses for it. It wouldn’t do to cut down a few acres of forest just to run a chariot down to Ravenna, even for an emperor. Nowadays, we can make steam by many different methods, including renewable variations like solar boilers, but that wasn’t an option back then. Without a massive fuel source, steam—pardon the pun—couldn’t get off the ground.

Second, and equally important, is the quality of the materials that were available. A boiler, in addition to eating fuel at a frantic pace, also has some pretty exacting specifications. It has to be built strong enough to withstand the intense pressures that steam can create (remember our ideal gas law); ruptures were a deadly fixture of the 19th century, and that was with steel. Imagine trying to do it all with brass, bronze, and iron! On top of that, all your valves, tubes, and other machinery must be built to the same high standard. A leak doesn’t just lose steam; it loses efficiency.

The ancients couldn’t pull that off. Not for lack of trying, mind you, but they weren’t really equipped for the rigors of steam power. Steel was unknown, except in a few special cases. Rubber was an ocean away, on a continent they didn’t know existed. Welding (a requirement for sealing two metal pipes together so air can’t escape) probably wasn’t happening.

Thus, steam power may be too far into the future to plausibly fit into a distant “retro-tech” setting. It really needs improvements in a lot of different areas. That’s not to say that steam itself can’t fit—we know it can—but you’re not getting Roman railroads. On a small scale, using steam is entirely possible, but you can’t build a classical civilization around it. Probably not even a medieval one, at that.

No, it seems that steam as a major power source must wait until the rest of technology catches up. You need a fuel source, whether coal or something else. You absolutely must have ways of creating airtight seals. And you’ll need a way to create strong pressure vessels, which implies some more advanced metallurgy. On the other hand, the science isn’t entirely necessary; if your people don’t know the ideal gas law yet, they’ll probably figure it out pretty soon after the first steam engine starts up. And as for finding uses, well, they’d get to that part without much help, because that’s just what we do.

Future past: Electricity

Electricity is vital to our modern world. Without it, I couldn’t write this post, and you couldn’t read it. That alone should show you just how important it is, but if not, then how about anything from this list: air conditioning, TVs, computers, phones, music players. And that’s just what I can see in the room around me! So electricity seems like a good start for this series. It’s something we can’t live without, but its discovery was relatively recent, as eras go.

Intro

The knowledge of electricity, in some form, goes back thousands of years. The phenomenon itself, of course, began in the first second of the universe, but humans didn’t really get to looking into it until they started looking into just about everything else.

First came static electricity. That’s the kind we’re most familiar with, at least when it comes to directly feeling it. It gives you a shock in the wintertime, it makes your clothes stick together when you pull them out of the dryer, and it’s what causes lightning. At its source, static electricity is nothing more than an imbalance of electrons righting itself. Sometimes, that’s visible, whether as a spark or a bolt, and it certainly doesn’t take modern convenience to produce such a thing.

The root electro-, source of electricity and probably a thousand derivatives, originally comes from Greek. There, it referred to amber, that familiar resin that occasionally has bugs embedded in it. Besides that curious property, amber also has a knack for picking up a static charge, much like wool and rubber. It doesn’t take Ben Franklin to figure that much out.

Static electricity, however, is one-and-done. Once the charge imbalance is fixed, it’s over. That can’t really power a modern machine, much less an era, so the other half of the equation is electric current. That’s the kind that runs the world today, and it’s where we have volts and ohms and all those other terms. It’s what runs through the wires in your house, your computer, your everything.

Theory

The study of current, unlike static electricity, came about comparatively late (or maybe it didn’t; see below). It wasn’t until the 18th century that it really got going, and most of the biggest discoveries had to wait until the 19th. The voltaic pile—which later evolved into the battery—electric generators, and so many more pieces that make up the whole of this electronic age, all of them were invented within the last 250 years. But did they have to be? We’ll see in a moment, but let’s take a look at the real world first.

Although static electricity is indeed interesting, and not just for demonstrations, current makes electricity useful, and there are two ways to get it: make it yourself, or extract it from existing materials. The latter is far easier, as you might expect. Most metals are good conductors of electricity, and there are a number of chemical reactions which can cause a bit of voltage. That’s the essence of the battery: two different metals, immersed in an acidic solution, will react in different ways, creating a potential. Volta figured this much out, so we measure the potential in volts. (Ohm worked out how voltage and current are related by resistance, so resistance is measured in ohms. And so on, through essentially every scientist of that age.)

Using wires, we can even take this cell and connect it to another, increasing the amount of voltage and power available at any one time. Making the cells themselves larger (greater cross-section, more solution) creates a greater reserve of electricity. Put the two together, and you’ve got a way to store as much as you want, then extract it however you need.

But batteries eventually run dry. What the modern age needed was a generator. To make that, you need to understand that electricity is but one part of a greater force: electromagnetism. The other half, as you might expect, is magnetism, and that’s the key to generating power. A moving magnetic field induces an electrical potential, which drives a current. And one of the easiest ways to do that is by rotating a magnet inside a coil of wire. (As an experiment, I’ve seen this done with one of those hand-cranked pencil sharpeners, so it can’t be that hard to construct.)

One problem is that the electricity this sort of generator makes isn’t constant. Its potential, assuming you’ve got a circular setup, follows a sine-wave pattern from positive to negative. (Because you can have negative volts, remember.) That’s alternating current, or AC, while batteries give you direct current, DC. The difference between the two can be very important, and it was at the heart of one of science’s greatest feuds—Edison and Tesla—but it doesn’t mean too much for our purposes here. Both are electric.
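For the curious, that sine-wave behavior is easy to write down: the instantaneous voltage is v(t) = V_peak · sin(2πft). Here’s a tiny illustrative sketch in Python; the 170-volt peak and 60 Hz frequency are modern household figures I’m borrowing only as familiar examples, not anything an early generator would produce.

```python
# A small sketch of the alternating-current waveform described above:
# v(t) = V_peak * sin(2 * pi * f * t). The 170 V peak and 60 Hz figures are
# just familiar modern mains values used for illustration.
import math

def ac_voltage(t_seconds, v_peak=170.0, freq_hz=60.0):
    """Instantaneous voltage of an idealized AC generator at time t."""
    return v_peak * math.sin(2 * math.pi * freq_hz * t_seconds)

# Sample one full cycle: the value swings positive, back through zero,
# then negative -- the "positive to negative" pattern mentioned above.
for step in range(5):
    t = step / (4 * 60.0)          # quarter-cycle steps of a 60 Hz wave
    print(f"t = {t:.4f} s  v = {ac_voltage(t):+7.1f} V")
```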

Practice

What does it take to create electricity? Is there anything special about it that had to wait until 1800 or so?

As a matter of fact, not only was it possible to have something electrical before the Enlightenment, but it may have been done…depending on who you ask. The Baghdad battery is one of those curious artifacts that has multiple plausible explanations. Either it’s a common container for wine, vinegar, or something of that sort, or it’s a 2000-year-old voltaic cell. The simple fact that this second hypothesis isn’t immediately discarded answers one question: no, nothing about electricity requires advanced technology.

Building a rudimentary battery is so easy that it almost has to have been done before. Two coins (of different metals) stuck into a lemon can give you enough voltage to feel, especially if you touch the wires to your tongue, like some people do with a 9-volt. Potatoes work almost as well, but any fruit or vegetable whose interior is acidic can provide the necessary solution for the electrochemical reactions to take place. From there, it’s not too big a step to a small jar of vinegar. Metals known in ancient times can get you a volt or two from a single cell, and connecting them in series nets you even larger potentials. It won’t be pretty, but there’s absolutely nothing insurmountable about making a battery using only technology known to the Romans, Greeks, or even Egyptians.
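The series arithmetic is as simple as it sounds: identical cells wired in series just add their voltages. A quick sketch follows, with the caveat that the 0.9 volts per copper-zinc cell is only a ballpark figure I’m assuming for a crude acid-electrolyte cell; real numbers vary with the metals, the electrolyte, and how much current you draw.

```python
# A quick sketch of the series-cell arithmetic above. The ~0.9 V per
# copper-zinc cell is only a ballpark figure for a crude acid-electrolyte
# cell (lemon, vinegar jar, etc.); real values depend on the materials
# and the load.

def series_voltage(cells, volts_per_cell=0.9):
    """Open-circuit voltage of identical cells wired in series."""
    return cells * volts_per_cell

for n in (1, 6, 12):
    print(f"{n:2d} cells -> about {series_voltage(n):.1f} V")
```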

Generators are a bit harder. First off, you need magnets. Lodestones work; they’re naturally magnetized, possibly by lightning, and their curious properties were first noticed as early as 2500 years ago. But they’re rare and hard to work with, as well as probably being full of impurities. Still, it doesn’t take a genius (or an advanced civilization) to figure out that these can be used to turn other pieces of metal (specifically iron) into magnets of their own.

Really, then, creation of magnets needs iron working, so generators are beyond the Bronze Age by definition. But they aren’t beyond the Iron Age, so Roman-era AC power isn’t impossible. They may not understand how it works, but they have the means to make it. The pieces are there.

The hardest part after that would be wire, because shuffling current around needs that. Copper is a nice balance of cost and conductivity, which is why we use it so much today; gold is far more ductile, while silver offers better conduction properties, but both are too expensive to use for much even today. The latter two, however, have been seen in wire form since ancient times, which means that ages past knew the methods. (Drawn wire didn’t come about until the Middle Ages, but it’s not the only way to do it.) So, assuming that our distant ancestors could figure out why they needed copper wire, they could probably come up with a way to produce it. It might not have rubber or plastic insulation, but they’d find something.
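To see why copper wins that balance, compare resistances with R = ρL/A. The sketch below uses commonly quoted room-temperature resistivities; exact figures vary with purity, which would only have been worse for ancient metals.

```python
# A rough comparison of wire materials using R = resistivity * length / area.
# Resistivity values are commonly quoted room-temperature figures (ohm-meters);
# ancient, impure metal would fare worse than any of these.
RESISTIVITY = {
    "silver": 1.59e-8,
    "copper": 1.68e-8,
    "gold":   2.44e-8,
    "iron":   9.7e-8,
}

def wire_resistance(material, length_m, area_mm2):
    """Resistance in ohms of a wire of the given length and cross-section."""
    return RESISTIVITY[material] * length_m / (area_mm2 * 1e-6)

# Ten meters of 1 mm^2 wire in each metal:
for metal in RESISTIVITY:
    print(f"{metal:>6}: {wire_resistance(metal, 10, 1):.3f} ohms")
```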

In conclusion, then, even if the Baghdad battery is nothing but a jar with some leftover vinegar inside, that doesn’t mean electricity couldn’t be used by ancient peoples. Technology-wise, nothing at all prevents batteries from being created in the Bronze Age. Power generation might have to wait until the Iron Age, but you can do a lot with just a few cells. And all the pieces were certainly in place in medieval times. The biggest problem after making the things would be finding a use for them, but humans are ingenious creatures. They’d work something out.

Future past: Introduction

With the “Magic and Tech” series on hiatus right now (mostly because I can’t think of anything else to write in it), I had the idea of taking a look at a different type of “retro” technological development. In this case, I want to look at different technologies that we associate with our modern world, and see just how much—or how little—advancement they truly require. In other words, let’s see just what could be made by the ancients, or by medieval cultures, or in the Renaissance.

I’ve been fascinated by this subject for many years, ever since I read the excellent book Lost Discoveries. And it’s very much a worldbuilding pursuit, especially if you’re building a non-Earth human culture or an alternate history. (Or both, in the case of my Otherworld series.) As I’ve looked into this particular topic, I’ve found a few surprises, so this is my chance to share them with you, along with my thoughts on the matter.

The way it works

Like “Magic and Tech”, this series (“Future Past”; you get no points for guessing the reference) will consist of an open-ended set of posts, mostly coming out whenever I decide to write them. Each post will be centered on a specific invention, concept, or discovery, rather than the much broader subjects of “Magic and Tech”. For example, the first will be that favorite of alt-historians: electricity. Others will include the steam engine, various types of power generation, and so on. Maybe you can’t get computers in the Bronze Age—assuming you don’t count the Antikythera mechanism—but you won’t believe what you can get.

Every post in the series will be divided into three main parts. First will come an introduction, where I lay out the boundaries of the topic and throw in a few notes about what’s to come. Next is a “theory” section: a brief description of the technology as we know it. Last and longest is the “practice” part, where we’ll look at just how far we can turn back the clock on the invention in question.

Hopefully, this will be as fun to read as it is to write. And I will get back to “Magic and Tech” at some point, probably early next year, but that will have to wait until I’m more inspired on that front. For now, let’s forget the fantasy magic and turn our eyes to the magic of invention.

On fantasy stasis

In fantasy literature, the medieval era is the most common setting. Sure, you get the “flintlock fantasy” that moves things forward a bit, and then there’s the whole subgenre of urban fantasy, but most of the popular works of the past century center on the High Middle Ages.

It’s not hard to see why. That era has a lot going for it. It’s so far back that it’s well beyond living memory, so there’s nobody who can say, “It’s not really like that!” Records are spotty enough that there’s a lot of room for “hidden” discoveries and alternate histories. You get all the knights and chivalry and nobility as a built-in part of the setting, but you don’t have to worry about gunpowder weapons if you don’t want to, or oceanic exploration, or some of the more complex scientific matters discovered in the Renaissance.

For a fantasy world, of course, medieval times give you mostly the same advantages, but also a few more. There’s less you have to do, obviously, as you don’t have the explosion of technology and discovery starting circa 1500. Medieval times were simpler, in a way, and simple makes worldbuilding easy. Magic fits neatly in the gaps of medieval knowledge. The world map can have the blank spaces needed to hide a dragon or a wizard’s lair.

Times are (not) changing

But this presents a problem, because another thing fantasy authors really, really want is a long history, yet they don’t want the usual pattern of advancement that comes with those long ages. Just to take examples from some of my personal favorites, let’s see what we’ve got.

  • A Song of Ice and Fire, by George R. R. Martin. You’ll probably know this better as Game of Thrones, the TV show, but the books go into far greater depth concerning the world history. The Others (White Walkers, in the show, for reasons I’ve never clearly understood) last came around some 8,000 years ago. About the only thing that’s changed since is the introduction of iron weaponry.

  • Lord of the Rings, by J. R. R. Tolkien. Everybody knows this one, but how many know Middle-earth’s “internal” history? The Third Age lasts over 3,000 years with no notable technological progress, and that’s on top of the 3,500 years of the Second Age and a First Age (from The Silmarillion) that tacks on another 600 years or so. Indeed, most technology in Middle-earth comes from the great enemies, Sauron and Morgoth and Saruman. That’s certainly no coincidence.

  • Mistborn, by Brandon Sanderson. Here’s a case where technology actually regressed over the course of 1,000 years. The tyrannical Lord Ruler suppressed the knowledge of gunpowder (he preferred his ranged fighters to have skill) and turned society from seemingly generic fantasy feudalism into a brutal serfdom. (The newer trilogy, interestingly, upends this trope entirely; the world has gone from essentially zero—because of events at the end of Book 3—to Victorian Era in something like 500 years.)

  • Malazan Book of the Fallen, by Steven Erikson. This series already has more timeline errors than I can count, so many that fans have turned the whole thing into a meme, and even the author himself lampooned it in the story. But Erikson takes the “fantasy stasis” to a whole new level. The “old” races are over 100,000 years old, there was an ice age somewhere in there, and the best anyone’s done is oceangoing ships and magical explosives, both within the last century or so.

Back in time

It’s a conundrum. Let’s look at our own Western history to see why. A thousand years ago was the Middle Ages, the time when your average fantasy takes place. It’s the time of William the Conqueror, of the Holy Roman Empire and the Crusades and, later, the Black Death. Cathedrals were being built, the first universities founded, and so on. But it was nothing like today. It was truly a whole different world.

Add another thousand years, and you’re in Roman times. You’ve got Caesar, Pliny the Elder, Vesuvius, Jesus. Here, you’re in a world of antiquity, but you have to remember that it’s not really any further back from medieval times than they are from us. If we in 2017 stand at the destruction of the One Ring, then the founding of the Shire falls not long after all this, right around the fall of the Roman Empire.

Another millennium takes you to ancient Greece, to the Bronze Age. That’s “Bronze Age” as in “ironworking hasn’t been invented yet”, by the way. Well, it had been invented, but it was only used in limited circumstances. Three thousand years ago is about the time of the later Old Testament or Homer. Compared to us, it’s totally unrecognizable, but it’s about the same span of time as that between the first time the One Ring was worn by someone other than Sauron and the moment Frodo and Sam walked up to Mount Doom.

Let’s try 8,000, like in Westeros. Where does that put us in Earth history? Well, it would be 6000 BC, so before Egypt, Sumeria, Babylon, the Minoans…even the Chinese. The biggest city in the world might have a few thousand people in it—Jericho and Çatalhöyük are about that old. Domestication of animals and plants is still in its infancy at this point in time; you’re closer to the first crops than to the first computers. Bran the Builder would have to have magic to make the Wall. The technology sure wasn’t there yet.

Breaking the ice

And that’s really the problem with so many of these great epic fantasy sagas. Yes, we get to see the grand sweep of history in the background, but it’s only grand because it’s been stretched. In the real world, centuries of stasis simply don’t exist in the eras of these stories. Even the Dark Ages saw substantial progress in some areas, and that’s not counting the massive advancement happening in, say, the Islamic world.

To have this stasis and make it work (assuming it’s not just ancient tales recast in modern terms) requires something supernatural, something beyond what we know. That can be magic or otherworldly beings or even a “caretaker” ruler, but it has to be something. Left to their own devices, people will invent their way out of the Fantasy Dark Age.

Maybe magic replaces technology. That’s an interesting thought, and one that fits in with some of my other writings here. It’s certainly plausible that a high level of magical talent could retard technological development. Magic is often portrayed as far easier than invention, and far more immediately practical.

Supernatural beings can also put a damper on tech levels, but they may also have the opposite effect. If the mighty dragon kills everything that comes within 100 yards, then a gun that can shoot straight at twice that would be invaluable. Frodo’s quest would have been a piece of cake if he’d had even a World War I airplane, and you don’t even have to bring the Eagles into that one! Again, people are smart. They’ll figure these things out, given enough time. Thousands of years is definitely enough time.

Call this a rant if you like. Maybe that’s what it really is. Now, I’m not saying I hate stories that assume hundreds or thousands of years of stagnation. I don’t; some of my favorite books hinge on that very assumption. But worldbuilding can do better. That’s what I’m after. If that means I’ll never write a true work of epic fantasy, then so be it. There’s plenty of wonder out there.

Writing World War II

Today, there is no more popular war than World War II. No other war in history has been the focus of so much attention, attention that spans the gap between nonfiction and fiction. And for good reason, too. World War II gave us some of the most inspiring stories, some of the most epic battles (in the dramatic and FX senses), and an overarching narrative that perfectly fits so many of the common conflicts and tropes known to writers.

The list of WWII-related stories is far too long for this post to even scratch the surface, so I won’t try. Suffice it to say, in the 70 years since the war ended, thousands of works have been penned, ranging from the sappy (Pearl Harbor) to the gritty (Saving Private Ryan), from lighthearted romp (Red Tails) to cold drama (Schindler’s List). Oh, and those are only the movies. That’s not counting the excellent TV series (Band of Brothers, The Pacific) or the myriad books concerning this chapter of our history.

World War II, then, is practically a genre of its own, and it’s a very cluttered one. No matter the medium, a writer wishing to tackle this subject will have a harder time than usual. Most of the “good” stories have been done, and done well. In America, at least, many of the heroes are household names: Easy Company, the Tuskegee Airmen, the USS Arizona and the Enola Gay. The places are etched into our collective memory, as well, from Omaha Beach and Bastogne to Pearl Harbor, Iwo Jima, and Hiroshima. It’s a crowded field, to put it mildly.

Time is running out

But you’re a writer. You’re undaunted. You’ve got this great idea for a story set in WWII, and you want to tell it. Okay, that’s great. Just because something happened within the last century doesn’t get you out of doing your homework.

First and foremost, now is the last good chance to write a WWII story. By “now”, I mean within the next decade, and there’s a very good reason for that. This is 2016. The war ended right around 70 years ago. Since most of the soldiers were conscripts fresh out of high school or young volunteers, they were typically about 18 to 25 years old when they went into service. The youngest WWII veterans are at least in their late 80s, with most in their 90s. They won’t live forever. We’ve seen that in this decade, as the final World War I veterans passed on and an entire era left living memory.

Yes, there are uncountably many interviews, written or recorded, with WWII vets. The History Channel used to show nothing else. But nothing compares to a face-to-face conversation with someone who literally lived through history. One of the few good things to come out of my public education was the chance to meet one of the real Tuskegee Airmen, about twenty years ago. The next generation of schoolchildren likely won’t have that same opportunity.

Give it a shot

Whether through personal contact or the archives and annals of a generation, you’ll need research. Partly, that’s for the same reason: WWII is within living memory, so you have eyewitnesses who can serve as fact-checkers. (Holocaust deniers, for instance, will only get bolder once there’s no one left who can directly prove them wrong.) Also, WWII was probably the most documented war of all time. Whatever battle you can think of, there’s some record of it. Unlike previous conflicts, there’s not a lot of room to slip through the cracks.

On the face of it, that seems to limit the space available for historical fiction. But it’s not that bad. Yes, the battles were documented, as were many of the units, the aircraft, and even the strategies. However, they didn’t write down everything. It’s easy enough to pick a unit—bonus points if it’s one that was historically wiped out to the man, so there’s no one left to argue—and use it as the basis for your tale.

And that highlights another thing about WWII. War stories of older times often fixate on a single soldier, a solitary hero. With World War II, though, we begin to see the unit itself becoming a character. That’s how it worked with Band of Brothers, for instance. And this unit-based approach is a good one for a story focused on military actions. Soldiers don’t fight alone, and so many of the great field accomplishments of WWII were because of the bravery of a squad, a company, or a squadron.

If your story happens away from the front lines, on the other hand, then it’s back to individuals. And what a cast of characters you have. Officers, generals, politicians, spies…you name it, you can find it. But these figures tend to be better known, and that does limit your choices for deviating from history.

Diverging parallels

While the war itself is popular enough, as are some of the events that occurred at the same time, what happened after is just as ripe for storytelling. Amazon’s The Man in the High Castle (based on the Philip K. Dick story of the same name) is one such example of an alternate WWII, and I’ve previously written a post that briefly touched on another possible outcome.

I think the reason why WWII gets so much attention from the alternate-history crowd is the potential for disaster. The “other” side—the Axis—was so evil that giving them a victory forces a dystopian future, and dystopia is a storyteller’s favorite condition, because it’s a breeding ground for dramatic conflict and tension. And there’s also a general sense that we got the best possible outcome from the war; thus, following that logic, any other outcome is an exercise in contrast. It’s not the escapism that I like from my fiction, but it’s a powerful statement in its own right, and it may be what draws you into the realm of what-ifs.

The post I linked above is all about making an alternate timeline, but I’ll give a bit of a summary here. The assumption is that everything before a certain point happened exactly as it did, but one key event didn’t. From there, everything changes, causing a ripple effect up to the present. For World War II, that’s only 70 years, but that’s more than enough time for great upheaval.

Most people will jump to one conclusion there: the Nazis win. True, that’s one possible (but unlikely, in my opinion) outcome, but it’s not the only one. Some among the Allies argued for a continuation of the war, moving to attack the Soviets next. That would have preempted the entire Cold War, with all the knock-on effects that would have caused. What if Japan hadn’t surrendered? Imagine a nuclear bomb dropped on Tokyo, and what that would do to history. The list goes on, ad infinitum.

Fun, fun, fun

Any genre fits World War II. Any kind of story can be told within that span of years. Millions of people were involved, and billions are still experiencing its reverberations. Although it’s hard to talk of a war lasting more than half a decade as a single event, WWII is, collectively speaking, the most defining event of the last century. It’s a magnet for storytelling, as the past 70 years have shown. In a way, despite the horrors visited upon the world during that time, we can even see it as fun.

Too many people see World War II as Hitler, D-Day, Call of Duty, and nukes. But it was far more than that. It was the last great war, in many ways. And great wars make for great stories, real or fictional.

On ancient artifacts

I’ve been thinking about this subject for some time, but it was only after reading this article (and the ones linked there) that I decided it would make a good post. The article is about a new kind of data storage, created by firing femtosecond laser pulses into fused quartz. In other words, as the researchers helpfully put it, memory crystals. They say that these bits of glass can last (for all practical purposes) indefinitely.

A common trope in fiction, especially near-future sci-fi, is the mysterious artifact left behind by an ancient, yet unbelievably advanced, civilization. Whether it’s stargates in Egypt, monoliths on Europa, or the Prothean archives on Mars, the idea is always the same: some lost race left their knowledge, their records, or their technology, and we are the ones to rediscover them. I’m even guilty of it; my current writing project is a semi-fantasy novel revolving around the same concept.

It’s easy enough to say that an ancient advanced artifact exists in a story. Making it fit is altogether different, particularly if you’re in the business of harder science fiction. Most people will skim over the details, but there will always be the sticklers who point out that your clever idea is, in fact, physically impossible. But let’s see what we can do about that. Let’s see how much we can give the people living a hundred, a thousand, or even a million years in the future.

Built to last

If your computer is anything like mine, it might last a decade. Two, if you’re lucky. Cell phone? They’re all but made to break every couple of years. Writable CDs and DVDs may be able to stand up to a generation or two of wear, and flash memory is too new to really know. In our modern world of convenience, disposability, and frugality, long-lasting goods aren’t popular. We buy the cheap consumer models, not the high-end or mil-spec stuff. When something can become obsolete the moment you open it, that’s not even all that unwise. Something that has to survive the rigors of the world, though, needs to be built to a higher standard.

For most of our modern technology, it’s just plain too early to tell how long it can really last. An LED might be rated for 11,000 hours, a hard drive for 100,000, but that’s all statistics. Anything can break tomorrow, or outlive its owner. Even in one of the most extreme environments we can reach, life expectancy is impossible to guess. Opportunity landed on Mars in 2004 with an expected lifespan of 90 days; more than a decade later, it’s still rolling.
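
To put a number on “that’s all statistics”: a rated MTBF doesn’t mean every unit lasts that long, only that failures average out to that rate. As a rough illustration (my own sketch, not anything from a spec sheet), here’s what a 100,000-hour rating implies under the simplest reliability model, a constant failure rate, assuming the drive runs around the clock:

```python
import math

def survival_probability(hours, mtbf_hours):
    """Chance a component is still working after `hours` of operation,
    assuming a constant failure rate (the simple exponential model)."""
    return math.exp(-hours / mtbf_hours)

# A drive rated for 100,000 hours MTBF, running 24/7 (an assumption for illustration).
for years in (1, 5, 10, 20):
    hours = years * 365 * 24
    print(f"{years:>2} years: {survival_probability(hours, 100_000):.0%} chance it's still working")
```

Under that model, more than half of those drives are dead within ten years of continuous use. The rating is a statistical average, not a promise.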

But there’s a difference between surviving a very long time and being designed to. To make something that will survive untold years, you have to know what you’re doing. Assuming money and energy are effectively unlimited—a fair assumption for a super-advanced civilization—some amazing things can be achieved, but they won’t be making iPhones.

Material things

Many things that we use as building materials are prone to decay. In a lot of cases, that’s a feature, not a bug, but making long-term time capsules isn’t one of those cases. Here, decay, decomposition, collapse, and chemical alteration are all very bad things. So most plastics are out, as are wood and other biological products—unless, of course, you’re using some sort of cryogenics. Crossing off all organics might be casting too wide a net, but not by much.

We can look to archaeology for a bit of guidance here. Stone stands the test of time in larger structures, especially in the proper climate. The same goes for (some) metal and glass, and we know that clay tablets can survive millennia. Given proper storage, many of these materials easily get you a thousand years or more of use. Conveniently, most of them are good for data, too, whether that’s in the form of cuneiform tablets or nanoscale fused quartz.

Any artifact made to stand the test of time is going to be made out of something that lasts. That goes for all of its parts, not just the core structure. The longer something needs to last, the simpler it must be, because every additional complexity is one more potential point of failure.

Power

Some artifacts might need to be powered, and that presents a seemingly insurmountable problem. Long-term storage of power is very, very hard right now. Batteries won’t cut it; most of them are lucky to last ten years. For centuries or longer, we have to have something better.

There aren’t a lot of options here. Supercapacitors aren’t that much better than batteries in this regard. Most of the other options for energy storage require complex machinery, and “complex” here should be read as “failure-prone”.

One possibility that seems promising is a radioisotope thermoelectric generator (RTG), like NASA uses in space probes. These use the heat of radioactive decay to create electricity, and they work for as long as there’s radioactivity left in the material you’re using. They’re high-tech, but they don’t require too much in the way of peripheral complexity. They can work, but there’s a trade-off: the longer the RTG needs to run, the less power you’ll get out of it. Few isotopes hit the sweet spot of half-life and decay energy that makes them worthwhile.
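
To see that trade-off in concrete terms: radioactive decay is exponential, so an RTG’s output falls by half every half-life of its fuel. Here’s a quick sketch using plutonium-238, the isotope NASA favors (half-life roughly 88 years), with an assumed 300-watt starting output and ignoring the extra losses from aging thermocouples:

```python
def rtg_power(initial_watts, half_life_years, years_elapsed):
    """Power remaining after `years_elapsed` years, assuming the output
    simply tracks the exponential decay of the fuel."""
    return initial_watts * 0.5 ** (years_elapsed / half_life_years)

# Pu-238 (half-life ~88 years), starting from an assumed 300 W.
for years in (0, 50, 100, 500, 1000):
    print(f"year {years:>4}: {rtg_power(300, 88, years):7.1f} W")
```

After a century you still have almost half your power; after a millennium, effectively none. A longer-lived isotope stretches the curve out, but only by giving you less power per kilogram to begin with, and that’s the sweet-spot problem in a nutshell.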

Well, if we can’t store the energy we need, can we store a way to make it? As blueprints, it’s easy, but then you’re dependent on the level of technology of those who find the artifact. Almost anything else, however, runs into the complexity problem. There are some promising leads in solar panels that might work, but it’s too early to say how long they would last. Your best bet might actually be a hand crank!

Knowledge

One of the big reasons for an artifact to exist is to provide a cache of knowledge for future generations. If that’s all you need, then you don’t have to worry too much about technology. The fused-quartz glass isn’t that bad an option. If nothing else, it might inspire the discoverers to invent a way to read it. What knowledge to include then becomes the important question.

Scale is the key. What’s the difference between the “knowers” and the “finders”? If it’s too great, the artifact may need to include lots and lots of bootstrapping information. Imagine sending a sort of inverse time capsule to, say, a thousand years ago. (For the sake of argument, we’ll assume you also provide a way to read the data.) People in 1016 aren’t going to understand digital electronics, or the internal combustion engine, or even modern English. Not only do you need to put in the knowledge you want them to have, you also have to provide the knowledge to get them to where it would be usable. A few groups are working on ways to do this whole bootstrap process for potential communication with an alien race, and their work might come in handy here.

Deep time

The longer something must survive, the more likely it won’t. There are just too many variables, too many things we can’t control. This is even more true once you get seriously far into the future. That’s the “ancient aliens” option, and it’s one of the hardest to make work.

The Earth is like a living thing. It moves, it shifts, it convulses. The plates of the crust slide around, and the continents are not fixed in place. The climate changes over the millennia, from Ice Age to warm period and back. Seas rise and fall, rivers change course, and mountains erode. The chances of an artifact surviving on the surface of our world for a million years are quite remote.

On other bodies, it’s hit or miss, almost literally. Most asteroids and moons are geologically dead, and thus fairly safe over these unfathomable timescales, but there’s always the minute possibility of a direct impact. A few unearthly places (Mars and Titan come to mind) have enough in the way of weather to present problems like those on Earth, but the majority of solid rock in the solar system is usable in some fashion.

Deep space, you might think, would be the perfect place for an ancient artifact. If it’s big enough, you could even disguise it as an asteroid or moon. However, space is a hostile place. It’s full of radiation and micrometeorites, both of which could affect an artifact. Voyager 2 has its golden record, but how long will it survive? In theory, forever. In practice, it’ll get hit eventually. Maybe not for a million years, but you never know.

Summing up

Ancient artifacts, whether from aliens or a lost race of humans, work well as a plot device in many stories. Most of the time, you don’t have to worry about how they’re made or how they survived for so long. But when you do, it helps to think about what it actually takes to build something that endures. In modern times, we’re starting to make a few such things ourselves. Voyager 2, the Svalbard Global Seed Vault, and others can act, in a sense, as our legacy. Ten thousand years from now, no matter what happens, they’ll likely still be around. What else will be?