Future past: Steam

Let’s talk about steam. I don’t mean the malware installed on most gamers’ computers, but the real thing: hot, evaporated water. You may see it as just something given off by boiling stew or dying cars, but it’s so much more than that. For steam was the fluid that carried us into the Industrial Revolution.

And whenever we talk of the Industrial Revolution, it’s only natural to think about its timing. Did steam power really have to wait until the 18th century? Is there a way to push back its development by a hundred, or even a thousand, years? We can’t know for sure, but maybe we can make an educated guess or two.

Intro

Obviously, knowledge of steam itself dates back to the first time anybody ever cooked a pot of stew or boiled their day’s catch. Probably earlier than that, if you consider natural hot springs. However you take it, they didn’t have to wait around for a Renaissance and an Enlightenment. Steam itself is embarrassingly easy to make.

Steam is a gas; it’s the gaseous form of water, in the same way that ice is its solid form. Now, ice forms naturally if the temperature gets below 0°C (32°F), so quite a lot of places on Earth can find some way of getting to it. Steam, on the other hand, requires us to take water to its boiling point of 100°C (212°F) at sea level, slightly lower at altitude. Even the hottest parts of the world never see temperatures that high, so steam is, with a few exceptions like that hot spring I mentioned, purely artificial.
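To put a number on that altitude effect, here’s a quick sketch using a common rule of thumb: the boiling point drops roughly 1°C for every 300 m of elevation. (The rule is an approximation, and both figures are rounded, not exact physics.)

```python
# Rough rule of thumb: water's boiling point falls about 1 degree C
# for every ~300 m of altitude. An approximation, not a physical law.
def boiling_point_c(altitude_m):
    """Approximate boiling point of water (Celsius) at a given altitude."""
    return 100.0 - altitude_m / 300.0

print(boiling_point_c(0))               # 100.0 at sea level
print(round(boiling_point_c(1500), 1))  # 95.0 up in the mountains
```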

Cooking is the main way we come into contact with steam, now and in ages past. Modern times have added others, like radiators, but the general principle holds: steam is what we get when we boil water. Liquid turns to gas, and that’s where the fun begins.

Theory

The ideal gas law tells us how an ideal gas behaves. Now, that’s not entirely appropriate for gases in the real world, but it’s a good enough approximation most of the time. In algebraic form, it’s PV = nRT, and it’s the key to seeing why steam is so useful, so world-changing. Ignore R, because it’s a constant that doesn’t concern us here; the other four variables are where we get our interesting effects. In order: P is the pressure of a gas, V is its volume, n is how much of it there is (in moles), and T is its temperature.

You don’t need to know how to measure moles to see what happens. When we turn water into steam, we do so by raising its temperature. By the ideal gas law, increasing T must be balanced out by a proportional increase on the other side of the equation. We’ve got two choices there, and you’ve no doubt seen them both in action.
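Here’s a back-of-the-envelope sketch of that balancing act, with made-up numbers: one mole of steam sealed in a one-litre vessel, so V and n are fixed and all the change lands on P.

```python
# Ideal gas law: P = nRT / V, with R = 8.314 J/(mol*K).
# Illustrative numbers: 1 mole of steam sealed in a 1-litre vessel.
R = 8.314   # gas constant, J/(mol*K)
n = 1.0     # moles of steam
V = 0.001   # volume in cubic metres (1 litre)

def pressure(T_kelvin):
    """Pressure (pascals) of the trapped gas at temperature T."""
    return n * R * T_kelvin / V

p_100 = pressure(373.15)  # water's boiling point, 100 C
p_200 = pressure(473.15)  # superheated to 200 C

print(round(p_100 / 101_325, 1))  # ~30.6 atmospheres already
print(round(p_200 / p_100, 2))    # and pressure scales with T: 1.27x
```

The exact figures don’t matter; the point is that pressure rises in lockstep with absolute temperature once the volume is pinned down.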

First, gases have a natural tendency to expand to fill their containers. That’s why smoke dissipates outdoors, and it’s why that steam rising from the pot gets everywhere. Thus, increasing V is the first choice in reaction to higher temperatures. But what if that’s not possible? What if the gas is trapped inside a solid vessel, one that won’t let it expand? Then it’s the backup option: pressure.

A trapped gas that is heated increases in pressure, and that is the power of steam. Think of a pressure cooker or a kettle, either of them placed on a hot stove. With nowhere to go, the steam builds and builds, until it finds relief one way or another. (In extreme cases, that relief comes in the more dramatic form of a rupture, but household appliances rarely get that far.)

As pressure is force per unit of area, and there’s not a lot of area in the spout of a teapot, the rising temperatures can cause a lot of force. Enough to scald, enough to push. Enough to…move?
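As a quick illustration with invented numbers: even one atmosphere of excess pressure acting on a single square centimetre of spout already amounts to a healthy shove.

```python
# Force = pressure * area. Both figures below are invented for illustration.
excess_pressure = 101_325.0  # 1 atmosphere above ambient, in pascals
spout_area = 1e-4            # a 1 cm^2 teapot spout, in square metres

force = excess_pressure * spout_area  # newtons
print(round(force, 1))  # 10.1 N -- roughly the weight of a 1 kg mass
```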

Practice

That is the basis for steam power and, by extension, many of the methods of power generation we still use today. A lot of steam funneled through a small area produces a great amount of force. That force is then able to run a pump, a turbine, or whatever is needed, from boats to trains. (And even cars: some of the first automobiles were steam-powered.)

Steam made the Industrial Revolution possible. It made most of what came after possible, as well. And it gave birth to the retro fad of steampunk, because many people find the elaborate contraptions needed to haul superheated water vapor around to be aesthetically pleasing. Yet there is a problem. We’ve found steam-powered automata (e.g., toys, “magic” temple doors) from the Roman era, so what happened? Why did we need over 1,500 years to get from bot to Watt?

Unlike electricity, where there’s no obvious technological roadblock standing between Antiquity and advancement, steam power might legitimately be beyond classical civilizations. Generation of steam is easy—as I’ve said, that was done with the first cooking pot at the latest. And you don’t need an ideal gas law to observe the steam in your teapot shooting a cork out of the spout. From there, it’s not too far a leap to see how else that rather violent power can be utilized.

No, generating small amounts of steam is easy, and it’s clear that the Romans (and probably the Greeks, Chinese, and others) could do it. They could even use it, as the toys and temples show. So why didn’t they take that next giant leap?

The answer here may be a combination of factors. First is fuel. Large steam installations require metaphorical and literal tons of fuel. The Victorian era thrived on coal, as we know, but coal mining on any serious scale is a comparatively recent development, and the Romans made little use of it. They could get by with charcoal, but you need a lot of that, and they had much better uses for it. It wouldn’t do to cut down a few acres of forest just to run a chariot down to Ravenna, even for an emperor. Nowadays, we can make steam by many different methods, including renewable variations like solar boilers, but that wasn’t an option back then. Without a massive fuel source, steam—pardon the pun—couldn’t get off the ground.

Second, and equally important, is the quality of the materials that were available. A boiler, in addition to eating fuel at a frantic pace, also has some pretty exacting specifications. It has to be built strong enough to withstand the intense pressures that steam can create (remember our ideal gas law); ruptures were a deadly fixture of the 19th century, and that was with steel. Imagine trying to do it all with brass, bronze, and iron! On top of that, all your valves, tubes, and other machinery must be built to the same high standard. A leak doesn’t just lose gas; it loses efficiency.

The ancients couldn’t pull that off. Not for lack of trying, mind you, but they weren’t really equipped for the rigors of steam power. Steel was unknown, except in a few special cases. Rubber was an ocean away, on a continent they didn’t know existed. Welding (a requirement for sealing two metal pipes together so air can’t escape) probably wasn’t happening.

Thus, steam power may be too far into the future to plausibly fit into a distant “retro-tech” setting. It really needs improvements in a lot of different areas. That’s not to say that steam itself can’t fit—we know it can—but you’re not getting Roman railroads. On a small scale, using steam is entirely possible, but you can’t build a classical civilization around it. Probably not even a medieval one, at that.

No, it seems that steam as a major power source must wait until the rest of technology catches up. You need a fuel source, whether coal or something else. You absolutely must have ways of creating airtight seals. And you’ll need a way to create strong pressure vessels, which implies some more advanced metallurgy. On the other hand, the science isn’t entirely necessary; if your people don’t know the ideal gas law yet, they’ll probably figure it out pretty soon after the first steam engine starts up. And as for finding uses, well, they’d get to that part without much help, because that’s just what we do.

Future past: Electricity

Electricity is vital to our modern world. Without it, I couldn’t write this post, and you couldn’t read it. That alone should show you just how important it is, but if not, then how about anything from this list: air conditioning, TVs, computers, phones, music players. And that’s just what I can see in the room around me! So electricity seems like a good start for this series. It’s something we can’t live without, but its discovery was relatively recent, as eras go.

Intro

The knowledge of electricity, in some form, goes back thousands of years. The phenomenon itself, of course, began in the first second of the universe, but humans didn’t really start looking into it until they started looking into just about everything else.

First came static electricity. That’s the kind we’re most familiar with, at least when it comes to directly feeling it. It gives you a shock in the wintertime, it makes your clothes stick together when you pull them out of the dryer, and it’s what causes lightning. At its source, static electricity is nothing more than an imbalance of electrons righting itself. Sometimes, that’s visible, whether as a spark or a bolt, and it certainly doesn’t take modern convenience to produce such a thing.

The root electro-, source of electricity and probably a thousand derivatives, originally comes from Greek. There, it referred to amber, that familiar resin that occasionally has bugs embedded in it. Besides that curious property, amber also has a knack for picking up a static charge, much like wool and rubber. It doesn’t take Ben Franklin to figure that much out.

Static electricity, however, is one-and-done. Once the charge imbalance is fixed, it’s over. That can’t really power a modern machine, much less an era, so the other half of the equation is electric current. That’s the kind that runs the world today, and it’s where we have volts and ohms and all those other terms. It’s what runs through the wires in your house, your computer, your everything.

Theory

The study of current, unlike static electricity, came about comparatively late (or maybe it didn’t; see below). It wasn’t until the 18th century that it really got going, and most of the biggest discoveries had to wait until the 19th. The voltaic pile—which later evolved into the battery—electric generators, and so many more pieces that make up the whole of this electronic age, all of them were invented within the last 250 years. But did they have to be? We’ll see in a moment, but let’s take a look at the real world first.

Although static electricity is indeed interesting, and not just for demonstrations, current makes electricity useful, and there are two ways to get it: make it yourself, or extract it from existing materials. The latter is far easier, as you might expect. Most metals are good conductors of electricity, and there are a number of chemical reactions that can produce a bit of voltage. That’s the essence of the battery: two different metals, immersed in an acidic solution, will react in different ways, creating a potential. Volta figured this much out, so we measure the potential in volts. (Ohm worked out how voltage and current are related by resistance, so resistance is measured in ohms. And so on, through essentially every scientist of that age.)

Using wires, we can even take this cell and connect it to another, increasing the amount of voltage and power available at any one time. Making the cells themselves larger (greater cross-section, more solution) creates a greater reserve of electricity. Put the two together, and you’ve got a way to store as much as you want, then extract it however you need.
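That arithmetic is simple enough to sketch. The per-cell figures below are assumptions: roughly 1.1 V for a copper-zinc pair, and an invented capacity number.

```python
# Combining voltaic cells. Per-cell figures are assumptions for illustration:
# a copper-zinc pair gives roughly 1.1 V; the capacity number is invented.
cell_voltage = 1.1    # volts per cell
cell_capacity = 0.5   # amp-hours per cell

def series(n_cells):
    """Cells in series: voltages add, capacity stays the same."""
    return round(n_cells * cell_voltage, 2), cell_capacity

def parallel(n_cells):
    """Cells in parallel: capacities add, voltage stays the same."""
    return cell_voltage, round(n_cells * cell_capacity, 2)

print(series(6))    # (6.6, 0.5) -- more push
print(parallel(6))  # (1.1, 3.0) -- more reserve
```

Ancient metal pairs would give different (and generally lower) voltages, but the stacking trick scales the same way regardless.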

But batteries eventually run dry. What the modern age needed was a generator. To make that, you need to understand that electricity is but one part of a greater force: electromagnetism. The other half, as you might expect, is magnetism, and that’s the key to generating power. Moving magnetic fields generate electrical potential, and thus current. And one of the easiest ways to do that is by rotating a magnet inside a coil of wire. (As an experiment, I’ve seen this done with one of those hand-cranked pencil sharpeners, so it can’t be that hard to construct.)

One problem is that the electricity this sort of generator makes isn’t constant. Its potential, assuming you’ve got a circular setup, follows a sine-wave pattern from positive to negative. (Because you can have negative volts, remember.) That’s alternating current, or AC, while batteries give you direct current, DC. The difference between the two can be very important, and it was at the heart of one of science’s greatest feuds—Edison and Tesla—but it doesn’t mean too much for our purposes here. Both are electric.
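The alternating pattern is just a sine wave. In the sketch below, the peak voltage and rotation speed are arbitrary stand-ins, not figures for any real generator.

```python
import math

# AC output of a spinning generator: a sine wave. Figures are arbitrary.
V_peak = 10.0  # peak voltage
freq = 50.0    # rotations per second, i.e., 50 Hz

def voltage(t):
    """Instantaneous voltage (volts) at time t (seconds)."""
    return V_peak * math.sin(2 * math.pi * freq * t)

print(round(voltage(0.005), 2))  # 10.0: a quarter cycle in, at the peak
print(round(voltage(0.015), 2))  # -10.0: three quarters in, fully reversed
```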

Practice

What does it take to create electricity? Is there anything special about it that had to wait until 1800 or so?

As a matter of fact, not only was it possible to have something electrical before the Enlightenment, but it may have been done…depending on who you ask. The Baghdad battery is one of those curious artifacts that has multiple plausible explanations. Either it’s a common container for wine, vinegar, or something of that sort, or it’s a 2000-year-old voltaic cell. The simple fact that this second hypothesis isn’t immediately discarded answers one question: no, nothing about electricity requires advanced technology.

Building a rudimentary battery is so easy that it almost has to have been done before. Two coins (of different metals) stuck into a lemon can give you enough voltage to feel, especially if you touch the wires to your tongue, like some people do with a 9-volt. Potatoes work almost as well, but any fruit or vegetable whose interior is acidic can provide the necessary solution for the electrochemical reactions to take place. From there, it’s not too big a step to a small jar of vinegar. Metals known in ancient times can get you a volt or two from a single cell, and connecting them in series nets you even larger potentials. It won’t be pretty, but there’s absolutely nothing insurmountable about making a battery using only technology known to the Romans, Greeks, or even Egyptians.

Generators are a bit harder. First off, you need magnets. Lodestones work; they’re naturally magnetized, possibly by lightning, and their curious properties were first noticed as early as 2500 years ago. But they’re rare and hard to work with, as well as probably being full of impurities. Still, it doesn’t take a genius (or an advanced civilization) to figure out that these can be used to turn other pieces of metal (specifically iron) into magnets of their own.

Really, then, creating magnets requires ironworking, so generators are beyond the Bronze Age by definition. But they aren’t beyond the Iron Age, so Roman-era AC power isn’t impossible. They may not understand how it works, but they have the means to make it. The pieces are there.

The hardest part after that would be wire, because you need some way to shuttle current around. Copper is a nice balance of cost and conductivity, which is why we use it so much today; gold is far more ductile, while silver offers better conduction properties, but both are too expensive to use for much even today. The latter two, however, have been seen in wire form since ancient times, which means that ages past knew the methods. (Drawn wire didn’t come about until the Middle Ages, but it’s not the only way to do it.) So, assuming that our distant ancestors could figure out why they needed copper wire, they could probably come up with a way to produce it. It might not have rubber or plastic insulation, but they’d find something.
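The conductivity ranking is easy to check with the textbook formula R = ρL/A and standard room-temperature resistivities (the wire dimensions below are arbitrary).

```python
# Resistance of a wire: R = rho * L / A.
# Approximate room-temperature resistivities, in ohm-metres:
resistivity = {"silver": 1.59e-8, "copper": 1.68e-8, "gold": 2.44e-8}

def resistance(metal, length_m, area_m2):
    """Resistance (ohms) of a wire of the given metal and dimensions."""
    return resistivity[metal] * length_m / area_m2

# 10 m of 1 mm^2 wire in each metal: silver wins, copper is close behind.
for metal in resistivity:
    print(metal, round(resistance(metal, 10, 1e-6), 3))
```

Even by these rough numbers, copper sits within about six percent of silver at a fraction of the price, which is exactly the sweet spot described above.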

In conclusion, then, even if the Baghdad battery is nothing but a jar with some leftover vinegar inside, that doesn’t mean electricity couldn’t be used by ancient peoples. Technology-wise, nothing at all prevents batteries from being created in the Bronze Age. Power generation might have to wait until the Iron Age, but you can do a lot with just a few cells. And all the pieces were certainly in place in medieval times. The biggest problem after making the things would be finding a use for them, but humans are ingenious creatures. They’d work something out.

Future past: Introduction

With the “Magic and Tech” series on hiatus right now (mostly because I can’t think of anything else to write in it), I had the idea of taking a look at a different type of “retro” technological development. In this case, I want to look at different technologies that we associate with our modern world, and see just how much—or how little—advancement they truly require. In other words, let’s see just what could be made by the ancients, or by medieval cultures, or in the Renaissance.

I’ve been fascinated by this subject for many years, ever since I read the excellent book Lost Discoveries. And it’s very much a worldbuilding pursuit, especially if you’re building a non-Earth human culture or an alternate history. (Or both, in the case of my Otherworld series.) As I’ve looked into this particular topic, I’ve found a few surprises, so this is my chance to share them with you, along with my thoughts on the matter.

The way it works

Like “Magic and Tech”, this series (“Future Past”; you get no points for guessing the reference) will consist of an open-ended set of posts, mostly coming out whenever I decide to write them. Each post will be centered on a specific invention, concept, or discovery, rather than the much broader subjects of “Magic and Tech”. For example, the first will be that favorite of alt-historians: electricity. Others will include the steam engine, various types of power generation, and so on. Maybe you can’t get computers in the Bronze Age—assuming you don’t count the Antikythera mechanism—but you won’t believe what you can get.

Every post in the series will be divided into three main parts. First will come an introduction, where I lay out the boundaries of the topic and throw in a few notes about what’s to come. Next is a “theory” section: a brief description of the technology as we know it. Last and longest is the “practice” part, where we’ll look at just how far we can turn back the clock on the invention in question.

Hopefully, this will be as fun to read as it is to write. And I will get back to “Magic and Tech” at some point, probably early next year, but that will have to wait until I’m more inspired on that front. For now, let’s forget the fantasy magic and turn our eyes to the magic of invention.

On fantasy stasis

In fantasy literature, the medieval era is the most common setting. Sure, you get the “flintlock fantasy” that moves things forward a bit, and then there’s the whole subgenre of urban fantasy, but most of the popular works of the past century center on the High Middle Ages.

It’s not hard to see why. That era has a lot going for it. It’s so far back that it’s well beyond living memory, so there’s nobody who can say, “It’s not really like that!” Records are spotty enough that there’s a lot of room for “hidden” discoveries and alternate histories. You get all the knights and chivalry and nobility as a built-in part of the setting, but you don’t have to worry about gunpowder weapons if you don’t want to, or oceanic exploration, or some of the more complex scientific matters discovered in the Renaissance.

For a fantasy world, of course, medieval times give you mostly the same advantages, but also a few more. It’s less work for you, obviously, as you don’t have the explosion of technology and discovery starting circa 1500. Medieval times were simpler, in a way, and simple makes worldbuilding easy. Magic fits neatly in the gaps of medieval knowledge. The world map can have the blank spaces needed to hide a dragon or a wizard’s lair.

Times are (not) changing

But this presents a problem, because another thing fantasy authors really, really want is a long history, yet they don’t want the usual pattern of advancement that comes with those long ages. Just to take examples from some of my personal favorites, let’s see what we’ve got.

  • A Song of Ice and Fire, by George R. R. Martin. You’ll probably know this better as Game of Thrones, the TV show, but the books go into far greater depth concerning the world history. The Others (White Walkers, in the show, for reasons I’ve never clearly understood) last came around some 8,000 years ago. About the only thing that’s changed since is the introduction of iron weaponry.

  • The Lord of the Rings, by J. R. R. Tolkien. Everybody knows this one, but how many know Middle-earth’s “internal” history? The Third Age lasts over 3,000 years with no notable technological progress, and that’s on top of the 3,500 years of the Second Age and a First Age (from The Silmarillion) that tacks on another 600 or so. Indeed, most technology in Middle-earth comes from the great enemies, Sauron and Morgoth and Saruman. That’s certainly no coincidence.

  • Mistborn, by Brandon Sanderson. Here’s a case where technology actually regressed over the course of 1,000 years. The tyrannical Lord Ruler suppressed the knowledge of gunpowder (he preferred his ranged fighters to have skill) and turned society from seemingly generic fantasy feudalism into a brutal serfdom. (The newer trilogy, interestingly, upends this trope entirely; the world has gone from essentially zero—because of events at the end of Book 3—to Victorian Era in something like 500 years.)

  • Malazan Book of the Fallen, by Steven Erikson. This series already has more timeline errors than I can count, so many that fans have turned the whole thing into a meme, and even the author himself lampooned it in the story. But Erikson takes the “fantasy stasis” to a whole new level. The “old” races are over 100,000 years old, there was an ice age somewhere in there, and the best anyone’s done is oceangoing ships and magical explosives, both within the last century or so.

Back in time

It’s a conundrum. Let’s look at our own Western history to see why. A thousand years ago was the Middle Ages, the time when your average fantasy takes place. It’s the time of William the Conqueror, of the Holy Roman Empire and the Crusades and, later, the Black Death. Cathedrals were being built, the first universities founded, and so on. But it was nothing like today. It was truly a whole different world.

Add another thousand years, and you’re in Roman times. You’ve got Caesar, Pliny the Elder, Vesuvius, Jesus. Here, you’re in a world of antiquity, but you have to remember that it’s not really any further back from medieval times than they are from us. If we in 2017 are at the destruction of the One Ring, the founding of the Shire was not long after all this, about at the fall of the Roman Empire.

Another millennium takes you to ancient Greece, to the Bronze Age. That’s “Bronze Age” as in “ironworking hasn’t been invented yet”, by the way. Well, it had been, but it was only used in limited circumstances. Three thousand years ago is about the time of the later Old Testament or Homer. Compared to us, it’s totally unrecognizable, but it’s about the same length of time between the first time the One Ring was worn by someone other than Sauron and the moment Frodo and Sam walked up to Mount Doom.

Let’s try 8,000, like in Westeros. Where does that put us in Earth history? Well, it would be 6000 BC, so before Egypt, Sumer, Babylon, the Minoans…even the Chinese. The biggest city in the world might have a few thousand people in it—Jericho and Çatalhöyük are about that old. Domestication of animals and plants is still in its infancy at this point in time; you’re closer to the first crops than to the first computers. Bran the Builder would have to have magic to make the Wall. The technology sure wasn’t there yet.

Breaking the ice

And that’s really the problem with so many of these great epic fantasy sagas. Yes, we get to see the grand sweep of history in the background, but it’s only grand because it’s been stretched. In the real world, centuries of stasis simply don’t exist in the eras of these stories. Even the Dark Ages saw substantial progress in some areas, and that’s not counting the massive advancement happening in, say, the Islamic world.

To have this stasis and make it work (assuming it’s not just ancient tales recast in modern terms) requires something supernatural, something beyond what we know. That can be magic or otherworldly beings or even a “caretaker” ruler, but it has to be something. Left to their own devices, people will invent their way out of the Fantasy Dark Age.

Maybe magic replaces technology. That’s an interesting thought, and one that fits in with some of my other writings here. It’s certainly plausible that a high level of magical talent could retard technological development. Magic is often described as far easier than invention, and far more immediately practical.

Supernatural beings can also put a damper on tech levels, but they may also have the opposite effect. If the mighty dragon kills everything that comes within 100 yards, then a gun that can shoot straight at twice that would be invaluable. Frodo’s quest would have been a piece of cake if he’d had even a World War I airplane, and you don’t even have to bring the Eagles into that one! Again, people are smart. They’ll figure these things out, given enough time. Thousands of years is definitely enough time.

Call this a rant if you like. Maybe that’s what it really is. Now, I’m not saying I hate stories that assume hundreds or thousands of years of stagnation. I don’t; some of my favorite books hinge on that very assumption. But worldbuilding can do better. That’s what I’m after. If that means I’ll never write a true work of epic fantasy, then so be it. There’s plenty of wonder out there.

Writing World War II

Today, there is no more popular war than World War II. No other war in history has been the focus of so much attention, attention that spans the gap between nonfiction and fiction. And for good reason, too. World War II gave us some of the most inspiring stories, some of the most epic battles (in the dramatic and FX senses), and an overarching narrative that perfectly fits so many of the common conflicts and tropes known to writers.

The list of WWII-related stories is far too big for this post to even scratch the surface, so I won’t even try. Suffice it to say, in the 70 years since the war ended, thousands of works have been penned, ranging from the sappy (Pearl Harbor) to the gritty (Saving Private Ryan), from lighthearted romp (Red Tails) to cold drama (Schindler’s List). Oh, and those are only the movies. That’s not counting the excellent TV series (Band of Brothers, The Pacific) or the myriad books concerning this chapter of our history.

World War II, then, is practically a genre of its own, and it’s a very cluttered one. No matter the medium, a writer wishing to tackle this subject will have a harder time than usual. Most of the “good” stories have been done, and done well. In America, at least, many of the heroes are household names: Easy Company, the Tuskegee Airmen, the USS Arizona and the Enola Gay. The places are etched into our collective memory, as well, from Omaha Beach and Bastogne to Pearl Harbor, Iwo Jima, and Hiroshima. It’s a crowded field, to put it mildly.

Time is running out

But you’re a writer. You’re undaunted. You’ve got this great idea for a story set in WWII, and you want to tell it. Okay, that’s great. Just because something happened within the last century doesn’t get you out of doing your homework.

First and foremost, now is the last good chance to write a WWII story. By “now”, I mean within the next decade, and there’s a very good reason for that. This is 2016. The war ended right around 70 years ago. Since most of the soldiers were conscripted, many right out of high school, or young volunteers, they were typically about 18 to 25 years old when they went into service. The youngest WWII veterans are at least in their late 80s, with most in their 90s. They won’t live forever. We’ve seen that in this decade, as the final World War I veterans passed on, and an entire era left living memory.

Yes, there are uncountably many interviews, written or recorded, with WWII vets. The History Channel used to show nothing else. But nothing compares to a face-to-face conversation with someone who literally lived through history. One of the few good things to come out of my public education was the chance to meet one of the real Tuskegee Airmen, about twenty years ago. The next generation of schoolchildren likely won’t have that same opportunity.

Give it a shot

Whether through personal contact or the archives and annals of a generation, you’ll need research. Partly, that’s for the same reason: WWII is within living memory, so you have eyewitnesses who can serve as fact-checkers. (Holocaust deniers, for instance, will only get bolder once there’s no one left who can directly prove them wrong.) Also, WWII was probably the most documented war of all time. Whatever battle you can think of, there’s some record of it. Unlike previous conflicts, there’s not a lot of room to slip through the cracks.

On the face of it, that seems to limit the space available for historical fiction. But it’s not that bad. Yes, the battles were documented, as were many of the units, the aircraft, and even the strategies. However, they didn’t write down everything. It’s easy enough to pick a unit—bonus points if it’s one that was historically wiped out to a man, so there’s no one left to argue—and use it as the basis for your tale.

And that highlights another thing about WWII. War stories of older times often fixate on a single soldier, a solitary hero. With World War II, though, we begin to see the unit itself becoming a character. That’s how it worked with Band of Brothers, for instance. And this unit-based approach is a good one for a story focused on military actions. Soldiers don’t fight alone, and so many of the great field accomplishments of WWII were because of the bravery of a squad, a company, or a squadron.

If your story happens away from the front lines, on the other hand, then it’s back to individuals. And what a cast of characters you have. Officers, generals, politicians, spies…you name it, you can find it. But these tend to be better known, and that does limit your choices for deviating from history.

Diverging parallels

While the war itself is popular enough, as are some of the events that occurred at the same time, what happened after is just as ripe for storytelling. Amazon’s The Man in the High Castle (based on the Philip K. Dick story of the same name) is one such example of an alternate WWII, and I’ve previously written a post that briefly touched on another possible outcome.

I think the reason why WWII gets so much attention from the alternate-history crowd is the potential for disaster. The “other” side—the Axis—was so evil that giving them a victory forces a dystopian future, and dystopia is a storyteller’s favorite condition, because it’s a breeding ground for dramatic conflict and tension. And there’s also a general sense that we got the best possible outcome from the war; thus, following that logic, any other outcome is an exercise in contrast. It’s not the escapism that I like from my fiction, but it’s a powerful statement in its own right, and it may be what draws you into the realm of what-ifs.

The post I linked above is all about making an alternate timeline, but I’ll give a bit of a summary here. The assumption is that everything before a certain point happened exactly as it did, but one key event didn’t. From there, everything changes, causing a ripple effect up to the present. For World War II, that’s only 70 years, but that’s more than enough time for great upheaval.

Most people will jump to one conclusion there: the Nazis win. True, that’s one possible (but unlikely, in my opinion) outcome, but it’s not the only one. Some among the allies argued for a continuation of the war, moving to attack the Soviets next. That would have preempted the entire Cold War, with all the knock-on effects that would have caused. What if Japan hadn’t surrendered? Imagine a nuclear bomb dropped on Tokyo, and what that would do to history. The list goes on, ad infinitum.

Fun, fun, fun

Any genre fits World War II. Any kind of story can be told within that span of years. Millions of people were involved, and billions are still experiencing its reverberations. Although it’s hard to talk of a war lasting more than half a decade as a single event, WWII is, collectively speaking, the most defining event of the last century. It’s a magnet for storytelling, as the past 70 years have shown. In a way, despite the horrors visited upon the world during that time, we can even see it as fun.

Too many people see World War II as Hitler, D-Day, Call of Duty, and nukes. But it was far more than that. It was the last great war, in many ways. And great wars make for great stories, real or fictional.

On ancient artifacts

I’ve been thinking about this subject for some time, but it was only after reading this article (and the ones linked there) that I decided it would make a good post. The article is about a new kind of data storage, created by firing femtosecond laser bursts into fused quartz. In other words, as the researchers helpfully put it, memory crystals. They say that these bits of glass can last (for all practical purposes) indefinitely.

A common trope in fiction, especially near-future sci-fi, is the mysterious artifact left behind by an ancient, yet unbelievably advanced, civilization. Whether it’s stargates in Egypt, monoliths on Europa, or the Prothean archives on Mars, the idea is always the same: some lost race left their knowledge, their records, or their technology, and we are the ones to rediscover them. I’m even guilty of it; my current writing project is a semi-fantasy novel revolving around the same concept.

It’s easy enough to say that an ancient advanced artifact exists in a story. Making it fit is altogether different, particularly if you’re in the business of harder science fiction. Most people will skim over the details, but there will always be the sticklers who point out that your clever idea is, in fact, physically impossible. But let’s see what we can do about that. Let’s see how much we can give the people a hundred, a thousand, or even a million years in the future.

Built to last

If your computer is anything like mine, it might last a decade. Two, if you’re lucky. Cell phone? They’re all but made to break every couple of years. Writable CDs and DVDs may be able to stand up to a generation or two of wear, and flash memory is too new to really know. In our modern world of convenience, disposability, and frugality, long-lasting goods aren’t popular. We buy the cheap consumer models, not the high-end or mil-spec stuff. When something can become obsolete the moment you open it, that’s not even all that unwise. Something that has to survive the rigors of the world, though, needs to be built to a higher standard.

For most of our modern technology, it’s just plain too early to tell how long it can really last. An LED might be rated for 11,000 hours, a hard drive for 100,000, but that’s all statistics. Anything can break tomorrow, or outlive its owner. Even in one of the most extreme environments we can reach, life expectancy is impossible to guess. Opportunity landed on Mars in 2004 with an expected lifespan of 90 days, and it’s still roving.
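
A rating like that is exactly what a constant-failure-rate model captures, the usual first approximation in reliability engineering: the number is a mean, not a guarantee. Here’s a minimal sketch, treating the 100,000-hour figure as a hypothetical MTBF rather than a claim about any real drive:

```python
import math

def survival_probability(hours: float, mtbf: float) -> float:
    """P(device still works after `hours`) under a constant-failure-rate
    (exponential) model with the given mean time between failures."""
    return math.exp(-hours / mtbf)

MTBF = 100_000.0   # hypothetical rating, in hours
YEAR = 24.0 * 365  # hours in a year

print(f"survives 1 year:   {survival_probability(YEAR, MTBF):.1%}")
print(f"survives 10 years: {survival_probability(10 * YEAR, MTBF):.1%}")
```

Under that model, a “100,000-hour” drive still has roughly a one-in-twelve chance of dying in its first year, and only about a 42% chance of making it to ten. Statistics, indeed.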

But there’s a difference between surviving a very long time and being designed to. To make something that will survive untold years, you have to know what you’re doing. Assuming money and energy are effectively unlimited—a fair assumption for a super-advanced civilization—some amazing things can be achieved, but they won’t be making iPhones.

Material things

Many things that we use as building materials are prone to decay. In a lot of cases, that’s a feature, not a bug, but making long-term time capsules isn’t one of those cases. Here, decay, decomposition, collapse, and chemical alteration are all very bad things. So most plastics are out, as are wood and other biological products—unless, of course, you’re using some sort of cryogenics. Crossing off all organics might be casting too wide a net, but not by much.

We can look to archaeology for a bit of guidance here. Stone stands the test of time in larger structures, especially in the proper climate. The same goes for (some) metal and glass, and we know that clay tablets can survive millennia. Given proper storage, many of these materials easily get you a thousand years or more of use. Conveniently, most of them are good for data, too, whether that’s in the form of cuneiform tablets or nanoscale fused quartz.

Any artifact made to stand the test of time is going to be made out of something that lasts. That goes for all of its parts, not just the core structure. The longer something needs to last, the simpler it must be, because every additional complexity is one more potential point of failure.

Power

Some artifacts might need to be powered, and that presents a seemingly insurmountable problem. Long-term storage of power is very, very hard right now. Batteries won’t cut it; most of them are lucky to last ten years. For centuries or longer, we have to have something better.

There aren’t a lot of options here. Supercapacitors aren’t that much better than batteries in this regard. Most of the other options for energy storage require complex machinery, and “complex” here should be read as “failure-prone”.

One possibility that seems promising is a radioisotope thermoelectric generator (RTG), like NASA uses in space probes. These use the heat of radioactive decay to create electricity, and they work as long as there’s radioactivity in the material you’re using. They’re high-tech, but they don’t require too much in the way of peripheral complexity. They can work, but there’s a trade-off: the longer the RTG needs to run, the less power you’ll get out of it. Few isotopes fall into that sweet spot of half-life and decay energy that makes them worthwhile.
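
The trade-off is just exponential decay: an RTG’s output halves with every half-life of its fuel. A quick sketch (the 87.7-year half-life is that of plutonium-238, the isotope NASA’s RTGs actually burn; the 500-watt starting figure is invented for illustration):

```python
def rtg_power(initial_watts: float, years: float, half_life_years: float) -> float:
    """Power remaining after `years` of radioactive decay."""
    return initial_watts * 0.5 ** (years / half_life_years)

PU238_HALF_LIFE = 87.7  # years

for t in (0, 100, 500, 1000):
    watts = rtg_power(500.0, t, PU238_HALF_LIFE)
    print(f"after {t:>4} years: {watts:12.6f} W")
```

A millennium in, the unit is down to a fifth of a watt; pick a longer-lived isotope instead and the power starts (and stays) proportionally tiny. Hence the narrow sweet spot.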

Well, if we can’t store the energy we need, can we store a way to make it? As blueprints, it’s easy, but then you’re dependent on the level of technology of those who find the artifact. Almost anything else, however, runs into the complexity problem. There are some promising leads in solar panels that might work, but it’s too early to say how long they would last. Your best bet might actually be a hand crank!

Knowledge

One of the big reasons for an artifact to exist is to provide a cache of knowledge for future generations. If that’s all you need, then you don’t have to worry too much about technology. The fused-quartz glass isn’t that bad an option. If nothing else, it might inspire the discoverers to invent a way to read it. What knowledge to include then becomes the important question.

Scale is the key. What’s the difference between the “knowers” and the “finders”? If it’s too great, the artifact may need to include lots and lots of bootstrapping information. Imagine sending a sort of inverse time capsule to, say, a thousand years ago. (For the sake of argument, we’ll assume you also provide a way to read the data.) People in 1016 aren’t going to understand digital electronics, or the internal combustion engine, or even modern English. Not only do you need to put in the knowledge you want them to have, you also have to provide the knowledge to get them to where it would be usable. A few groups are working on ways to do this whole bootstrap process for potential communication with an alien race, and their work might come in handy here.

Deep time

The longer something must survive, the more likely it won’t. There are just too many variables, too many things we can’t control. This is even more true once you get seriously far into the future. That’s the “ancient aliens” option, and it’s one of the hardest to make work.

The Earth is like a living thing. It moves, it shifts, it convulses. The plates of the crust slide around, and the continents are not fixed in place. The climate changes over the millennia, from Ice Age to warm period and back. Seas rise and fall, rivers change course, and mountains erode. The chances of an artifact surviving on the surface of our world for a million years are quite remote.

On other bodies, it’s hit or miss, almost literally. Most asteroids and moons are geologically dead, and thus fairly safe over these unfathomable timescales, but there’s always the minute possibility of a direct impact. A few unearthly places (Mars and Titan come to mind) have enough in the way of weather to present problems like those on Earth, but the majority of solid rock in the solar system is usable in some fashion.

Deep space, you might think, would be the perfect place for an ancient artifact. If it’s big enough, you could even disguise it as an asteroid or moon. However, space is a hostile place. It’s full of radiation and micrometeorites, both of which could affect an artifact. Voyager 2 has its golden record, but how long will it survive? In theory, forever. In practice, it’ll get hit eventually. Maybe not for a million years, but you never know.

Summing up

Ancient artifacts, whether from aliens or a lost race of humans, work well as a plot device in many stories. Most of the time, you don’t have to worry about how they’re made or how they survived for so long. But when you do, it helps to think about what’s needed to make something like an artifact. In modern times, we’re starting to make some things like this. Voyager 2, the Svalbard Global Seed Vault, and a few other projects can act, in a sense, as our legacy. Ten thousand years from now, no matter what happens, they’ll likely still be around. What else will be?

Out of the dark: building the Dark Ages

We have an awful lot of fiction out there set in something not entirely unlike our Middle Ages. Almost every cookie-cutter fantasy world is faux-medieval, and that’s only the ones that aren’t trying to be. The Renaissance and early Industrial Era also get plenty of love, and Roman antiquity even comes up from time to time. But there’s one time period in our history that seems a bit…left out. I’m talking about those centuries after Rome fell to the barbarian hordes, but before William crossed the Channel to give England the same fate. I’m talking about the Dark Ages.

A brighter shade of dark

Now, as we know today, what previous generations called the Dark Ages weren’t really all that dark. Sure, there were Vikings and Vandals, barbarians and Britons, Goths and Gauls, but it wasn’t a complete disaster. The reason we speak of the “Dark Ages”, though, is contrast. Rome was a magnificent empire by any account, and the first to pin the “Dark Age” label on its fallen children were Renaissance scholars, writing from what they saw as their own age of light; Enlightenment thinkers later ran with the idea. By comparison, the time between wasn’t exactly grand.

Even with our modern knowledge, the notion of a Dark Age is still useful, even if it doesn’t quite mean what we think it means. In general, we can use it to refer to any period of technological, social, and political stagnation and regression. That’s not to say there wasn’t progress in the Dark Ages. One great book about the period is titled Cathedral, Forge, and Waterwheel, and that’s a pretty good indication of some of the advancement that did happen.

Compared to what came before—the Roman empire, with its Colosseum and aqueducts and roads—there’s a huge difference, especially at the start of the Dark Ages. In some parts of Europe, particularly those farthest from the imperial center, general conditions fell to their lowest levels in hundreds of years. While the Empire itself actually did survive in the east in the form of the Byzantines (who were even considered the “true” emperors by the first generations of barbarian kings), the west was shattered, and it showed. But they dug themselves out of that hole, as we know.

Dying light

So, even granting our more limited definition of “Dark Ages”, what caused them? Well, there are a lot of theories. The Western Roman Empire fell in 476, of course, and that’s usually considered a primary cause. A serious cold snap starting around 536 couldn’t have helped matters. Plagues around the same time combined with the war and famine to cause even greater death, completing the quartet of the Horsemen.

But all that together shouldn’t have been enough to devastate the society of western Europe, should it? If it happened today, it wouldn’t, because our world is so connected, so small, relative to Roman times. If the whole host of apocalyptic horror visited the EU today, hundreds of millions of people would die, but we wouldn’t have a new Dark Age. The reason can be summed up in one word: continuity.

Yes, half of the Roman Empire survived. In a way, it was the stronger half, but it was also the more distant half. When Rome fell, when all the other catastrophes visited its remnants, the effect was to cause a cultural break. Many parts of the empire were already more or less autonomous, growing ever more apart, and the loss of the “center of gravity” that was Rome merely hastened the process.

A look at Britain illustrates this. After Rome all but gave up on its island colony, Britain soon returned the favor. Outside of the monasteries, Rome was practically forgotten within a few generations, once the Saxons and their other Germanic friends rolled in. The Danes that started vacationing there in the ninth century cared even less for news from four hundred years ago. By the time William came conquering, Anglo-Saxon England was a far cry from Roman Britannia. This is an extreme example, though, because there was almost no continuity in Britain to start with, so there wasn’t much to lose. However, similar stories appear throughout Europe.

Recurring nightmare

Although Europe’s Dark Ages are a thousand years past, they aren’t the only example of this kind of discontinuity. Something of the same sort happened in Greece two thousand years before that. And the native peoples of the Americas entered a Dark Age of their own circa 1500, as the mighty empires of Mexico and Peru fell to Spanish invaders.

In every case, though, it’s more than just the fall of a civilization. A Dark Age needs a prolonged period of destruction, probably at least two generations long. To make an age go Dark requires severe population loss, a total breakdown of government, and the forcing of a kind of “siege mentality” on a society. Climatic shifts are just a bonus. In all, a Dark Age results from a perfect storm of causes, all of which combine to break the people. Eventually, due to the death, destruction, and constant need to be on guard, everything else falls by the wayside. There simply aren’t enough people to keep things going. Once those that are left start dying off, the noose closes. The circle is broken, and darkness settles in.

That naturally leads to another question: could we have a new Dark Age? It’s hard to imagine, in our present time of progress, something ever causing it to stop, but that doesn’t make it impossible. Indeed, almost the entire sub-genre of post-apocalyptic fiction hinges on this very event. It can happen, but—thankfully—it won’t be easy.

What would it take, then? Well, like the Dark Ages that have come before, it would be a combination of factors. Something causing death on a massive, unprecedented scale. Something to put humanity on the back foot, to disrupt the flow of society so completely that it would take more than a lifetime to recover. In that case, it would never recover, because there would be no one left who remembered the “old days”. There would be no more continuity.

I can think of a few ways that could work. The ever-popular asteroid or comet impact is an easy one, and it even has the knock-on effect of a severe climate shock. Nuclear war never really seemed likely in my lifetime, but I was born in 1983, so I missed the darker days of the Cold War. I did watch WarGames, though, and I remember seeing those world maps lighting up at the end. But even two hundred years after an exchange like that, I don’t think we’d be looking at a Fallout game.

Other options all have their problems. An incredibly virulent outbreak (Plague, Inc. or your favorite zombie movie) might work, but it would have to be so bad that it makes the 1918 flu look like the common cold. Zika is in the news right now, but it simply won’t cut it, nor would Ebola. You need something highly infectious, but with a long incubation period and a massive mortality rate. It’s hard to find a virus that fits all three of those, for evolutionary reasons. The other forms of infectious agents—bacteria, fungi, prions—all have their own disadvantages.
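
To see why sheer infectiousness isn’t enough on its own, there’s the classic SIR epidemic model, whose final-size relation z = 1 − exp(−R0·z) gives the fraction of a population ever infected for a given basic reproduction number R0. A toy calculation (the R0 values are illustrative):

```python
import math

def final_epidemic_size(r0: float, iterations: int = 100) -> float:
    """Fraction of the population ever infected in the classic SIR model,
    via fixed-point iteration on z = 1 - exp(-r0 * z)."""
    z = 0.5  # starting guess; converges for r0 > 1
    for _ in range(iterations):
        z = 1.0 - math.exp(-r0 * z)
    return z

for r0 in (1.5, 2.0, 3.0):
    print(f"R0 = {r0}: about {final_epidemic_size(r0):.0%} ever infected")
```

Even a flu-like R0 of 2 infects most of a population, so infecting people is the easy part; breaking continuity also requires the long incubation and massive mortality, and that’s the combination evolution rarely produces.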

Climate change is the watchword of the day, but it won’t cause a Dark Age by itself. It’s too slow, and even the most alarming predictions don’t take us to temperatures much higher than a few thousand years ago, and that’s assuming that nobody ever does anything about it. No matter what you believe about global warming, you can’t make it enough to break us without some help.

Terminator-style AI is another possibility, one looking increasingly likely these days. It has some potential for catastrophe, but I’m not sure about using it as the continuity-breaker. The same goes for nanotech bots and the like. Maybe they’ll enslave us, but they won’t beat us down so badly that we lose everything.

And then there’s aliens. (Insert History Channel guy here.) An alien-imposed destruction of civilization would be the logical extension of the Roman hordes into the global future. Their attacks would likely be massive enough to influence the planet’s climate. They would cause us to huddle together for mutual defense, assuming they left any of us alive and alone. Yeah, that could work. It needs a lot of ifs, but it’s plausible enough to make for a good story.

The light returns

The Dark Age has to come to an end. It can’t last forever. But there’s no easy signal that it’s over. Instead, it’s a gradual thing. The key point here, though, is that what comes out of the Dark Age won’t be the same as what went in. Look again at Europe. After Rome fell, some of its advances—concrete is a good example—were lost to its descendants for a thousand years. Yet the continent did finally surpass the empire.

Over time, the natural course of progress will lift the Dark Age area to a level that is near enough where it left off, and things can proceed from there. It will be a different place, and that’s because of the discontinuity that caused the darkness in the first place. The old ways become lost, yes, but once we discover the new ways, they’ll be even better.

We stand on the shoulders of giants, as Newton said. Those giants are our ancestors, whether physically or culturally. Sometimes they fall, and sometimes the fall is bad enough that it breaks them. Then we must stand on our own and become our own giants. The Dark Age is that time when we’re standing alone.

Life below zero: building the Ice Age

As I write this post, parts of the US are digging themselves out of a massive snowstorm. (Locally, of course, the anti-snow bubble was in full effect, and the Tennessee Valley area got only a dusting.) Lots of snow, cold temperatures, and high winds create a blizzard, a major weather event that comes around once every few years.

But our world has gone through extended periods of much colder weather. In fact, we were basically born in one. I’m talking about ice ages. In particular, I’m referring to the Ice Age, the one that ended about 10,000 years ago, as it’s far better known and understood than any of the others throughout the history of the planet.

The very phrase “Ice Age” conjures up images of woolly mammoths lumbering across a frozen tundra, of small bands of humanity struggling to survive, of snow-covered evergreen forests and blue walls of ice. Really, if you think about it, it paints a picturesque landscape as fascinating as it seems inhospitable. In that, it’s no different from Antarctica or the Himalayas or Siberia today…or Mars tomorrow. The Earth of the Ice Age, as a place, is one that fuels the imagination simply because it is so different. But the question I’d like to ask is: is there a story in the Ice Age?

Lands of always winter

To answer that question, we first need to think about what the Ice Age is. A “glaciation event”, to use the technical term, is pretty self-explanatory. Colder global temperatures mean more of the planet’s surface is below freezing (0° Celsius, hence the name of this post), which means water turns to ice. The longer the subzero temps, the longer the ice can stick around. Although the seasons don’t actually change, the effect is a longer and longer winter, complete with all the wintry trappings: snow, frozen ponds and lakes, plant-killing frosts, and so on.

We don’t actually know what causes these glaciation events to start and stop. Some of them last for tens or even hundreds of thousands of years. The worst can cover the whole world in ice, creating a so-called “Snowball Earth” scenario. (While interesting in its own right, that particular outcome doesn’t concern us here. On a snowball world, there’s little potential for surface activity. Life can survive in the deep, unfrozen oceans, but that doesn’t sound too exciting, in my opinion.)

If that weren’t bad enough, an Ice Age can be partially self-sustaining. As the icecaps grow—not just the ones at the poles, but anywhere—the Earth can become more reflective. Higher surface reflectivity means that less heat is absorbed, dropping temperatures further. And that allows the ice to spread, in a feedback loop best served cold.
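
The direction of that feedback falls out of a zero-dimensional energy-balance model: absorbed sunlight equals radiated heat, so equilibrium temperature scales with the fourth root of (1 − albedo). This toy model ignores the greenhouse effect entirely, which is why its absolute numbers run cold, but the trend is the point:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR = 1361.0    # solar constant at Earth's orbit, W m^-2

def equilibrium_temp(albedo: float) -> float:
    """Effective planet-wide temperature (kelvins) when absorbed
    sunlight, S(1 - a)/4, balances emitted radiation, sigma * T^4."""
    return (SOLAR * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

print(f"albedo 0.30 (roughly today): {equilibrium_temp(0.30):.0f} K")
print(f"albedo 0.40 (more ice):      {equilibrium_temp(0.40):.0f} K")
```

Nudging the albedo from 0.3 to 0.4 drops the equilibrium temperature by nearly ten kelvins, and colder means more ice, which means a higher albedo. That’s the loop.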

Living on the edge

But we know life survived the Ice Age. We’re here, after all. The planet-wide extinction that closed out the Pleistocene epoch came at the end of the glaciation, not in the middle of it. So not only can life survive in the time of ice, it can thrive. How?

Well, that’s where the difference between “ice age” and “snowball” comes in. First off, the whole world wasn’t completely frozen over 20,000 years ago. Yes, there were glaciers, and they extended quite far from the poles. (Incidentally, the glaciers that covered the eastern half of America stopped not that far from where I live.) But plenty of ice-free land existed, especially in the tropics. Oh, and guess where humanity came from?

Even in the colder regions, life was possible. We see that today in Alaska, for instance. And the vagaries of climate mean that, strangely enough, that part of the world wasn’t much colder than it is today. So one lead on Ice Age life can be found by studying the polar regions of the present, from polar bears to penguins and Eskimos to explorers.

The changing face

But the world was a different place in the Ice Age, and that was entirely because of the ice. The climate played by different rules. Hundreds of feet of ice covering millions of square miles will do that.

The first thing to note is that the massive ice sheets that covered the higher latitudes function, climatically speaking, just like those at the poles. Cold air is denser than warm air, so it sinks. That creates a high-pressure area that doesn’t really move much. In the Northern Hemisphere, a high-pressure system drives clockwise winds along its boundary, but its interior tends to be stable.

Anyone who lives in the South knows about the summer ridge that builds every year, sending temperatures soaring to 100°F and causing air-quality and fire danger warnings. For weeks, we suffer in miserable heat and suffocating humidity, with no rain in sight. It’s awful, and it’s the main reason I hate summer. But think of that same situation, changing the temperatures from the nineties Fahrenheit to the twenties. Colder air holds less moisture, so you have a place with dry, stale air and little prospect for relief. In other words, a cold desert.
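
That “colder air holds less moisture” claim can be put in numbers with the Magnus approximation for saturation vapor pressure (the coefficients below are the commonly used ones):

```python
import math

def saturation_vapor_pressure(temp_c: float) -> float:
    """Saturation vapor pressure over water, in hPa,
    using the Magnus approximation."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

# A mid-90s Fahrenheit summer ridge vs. a low-20s Ice Age one:
print(f"35 C (95 F): {saturation_vapor_pressure(35.0):.1f} hPa")
print(f"-5 C (23 F): {saturation_vapor_pressure(-5.0):.1f} hPa")
```

Air at −5°C saturates at roughly a thirteenth of the vapor pressure of 35°C air, which is why the high parked over an ice sheet behaves like a desert.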

That’s the case on the ice sheets, and some thinkers extend that to the area around them. Having so much of the Earth’s water locked into near-permanent glaciers means that there will be less precipitation overall, even in the warm tropics. That has knock-on effects in those climates. Rainforests will be smaller, for example, and much of the land will be more like savannas or steppes, like the African lands that gave birth to modern humans.

But there are still prospects for precipitation. The jet stream will move, stray winds will blow. And the borders of the ice sheets will be active. This is for two reasons. First, the glaciers aren’t stationary. They expand and contract with the subtle seasonal and long-term changes in temperature. Second, that’s where the strongest winds will likely be. Receding glaciers can form lakes, and winds can spread the moisture from those lakes. The result? Lake-effect precipitation, whether rain or snow. The lands of ice will be cold and dry, the subtropics warm (or just warmer) and dry, but the boundary between them has the potential to be vibrant, if cool.

Making it work

So we have two general areas of an Ice Age world that can support the wide variety of life necessary for civilization: the warmer, wetter tropics and the cool convergence zones around the bases of the glaciers. If you know history, then you know those are exactly the places where humanity made its first major progress: the savannas of Africa, the shores of the Mediterranean, the outskirts of Siberia and Beringia.

For people living in the Ice Age, life is tough. Growing seasons are shorter, more because of temperature than sunlight; the first crops weren’t domesticated until after the ice was mostly gone, when more of the world could support agriculture. Staying warm is a priority, and making fire is a core survival skill. Clothing reflects the cold: furs, wool, insulation. Housing is a must, if only to have a safe place for a fire and a bed. Society, too, will be shaped by these needs.

But the Ice Age is dynamic. Fixed houses are susceptible to moving or melting glaciers. A small shift in temperature (in either direction) changes the whole landscape. Nomadic bands might be better suited to the periphery of the ice sheets, with the cities at a safe distance.

The long summer

And then the Ice Age comes to an end. Again, there’s no real consensus on why, but it has to happen. We’re proof of that. And when it does happen…

Rising temperatures at the end of a glaciation event are almost literally earth-shattering. The glaciers recede and melt (not completely; we’ve still got a few left over from our last Ice Age, and not just at the poles), leaving destruction in their wake. Sea levels rise overall, as you’d expect, though along some coasts they can locally fall, as the continents rebound once the weight of the ice is lifted.

The tundra shrinks, squeezing out those plants and animals adapted to it. Conversely, those used to warmer climes now have a vast expanse of fresh, new land. Precipitation begins to increase as ice turns to water and then evaporates. The world just after the Ice Age is probably going to be a swampy one. Eventually, though, things balance out. The world’s climate reaches an island of stability. Except when it doesn’t.

Our last Ice Age ended in fits and starts. Centuries of relative warmth could be wiped out in a geological instant. The last gasp was the Younger Dryas, a cold snap that started around 13,000 years ago and lasted around a tenth of that time. To put that into perspective, if it were ending right now (2016), it would have started around the time of the Merovingians and the Muslim conquest of Spain. But we don’t even know if the Younger Dryas was part of the Ice Age, or if it had another cause. (One hypothesis even claims it was caused by a meteor striking the earth!) Whether it was or wasn’t the dying ember of the Ice Age doesn’t matter much, though; it was close enough that we can treat it as if it were.

In the intervening millennia, our climate has changed demonstrably. This has nothing to do with global warming, whatever you think on that topic. No, I’m talking about the natural changes of a planet leaving a glacial period. We can see the evidence of ancient sea levels and rainfall patterns. The whole Bering Strait was once a land bridge, the Sahara a land of green. And Canada was a frozen wasteland. Okay, some things never change.

All this is to say that the Ice Age doesn’t have to mean mammoths and tundra and hunter-gatherers desperate for survival. It can be a time of creation and advancement, too.

Colonization and the New World

It’s common knowledge that the Old World of Europe, Asia, and Africa truly met the New World of the Americas in 1492, when Columbus landed in the Caribbean. Of course, we now know that there was contact before that, such as the Vikings in Newfoundland, about a thousand years ago. But Columbus and those who followed him—Cortés, Pizarro, de Soto, Cabot, and all those other explorers and conquerors Americans learn about in history class—those were the ones who truly made lasting contact between the two shores of the Atlantic.

Entire volumes have been written over the last five centuries about the exploration, the conquest, the invasion of the Americas. There’s no need to repeat any of it here. But the subject of the New World is one that doesn’t seem to get a lot of exposure in the world of fiction, with the notable exception of science fiction. And I think that’s a shame, because it’s an awfully interesting topic for a story. It’s full of adventure, of gaining knowledge, of conflict and warfare. Especially for American writers (not limited to the United States, but all of North and South America), it’s writing about the legacy we inherited, and it’s odd that we would rather tell stories about the history of the other side of the ocean.

Written by the victors

Of course, one of the main reasons why we don’t write many stories about exploration and colonization is political. We know a lot about the Spaniards and Englishmen and Frenchmen that discovered (using that term loosely) the lands of America. We have written histories of those first conquistadors, of those that came after, and of the later generations that settled in the new lands. We don’t, however, have much of anything from the other side.

A lot of that is due to the way first contact played out. We all know the story. Columbus discovered his Indians (to use his own term), Cortés played them against each other to conquer them, and smallpox decimated them. Those that survived were in no position to tell their tale. Most of them didn’t have a familiar system of writing; most of those written works that did exist were destroyed. And then came centuries of subjugation. Put that all together, and it’s no wonder why we only have one side of the tale of the New World.

But this already suggests story possibilities. We could write from one point of view or the other (or both, for that matter), setting our tale in the time of first contact or shortly after, in the upheaval that followed. This is quite popular in science fiction, where the “New World” is really a whole new world, a planet that was inhabited when we arrived. That’s the premise of Avatar, for example.

Life of a colony

Colonization has existed for millennia, but it’s only since 1492 that it became such a central part of world history. The Europeans that moved into the Americas found it filled with wonders and dangers. For the Spanish, the chief problem—aside from the natives—was the climate, as Mexico, Central America, and the Caribbean mostly fall into the tropical belt, far removed from mid-latitude Spain.

The English had it a little better; the east coast of the United States isn’t all that different from England, except that the winters can be harsher. (This was even more the case a few hundred years ago, in the depths of the Little Ice Age.) It’s certainly easier to go from York to New York than Madrid to Managua.

No matter the climate, though, colonists had to adapt. Especially in those times, when a resupply voyage was a long and perilous journey, they had to learn to live off the land. And they did. They learned about the new plants (corn, potatoes, tomatoes, and many more) and animals (bison and llamas, to name the biggest examples), and they mapped out river systems and mountain chains. And we have reaped the benefits ever since.

Building a colony can be fun in an interactive setting; Colonization wouldn’t exist otherwise. For a novel or visual work, it’s a little harder to pull off, because a colony starts out exciting and new but inevitably settles into routine. Obviously, if it doesn’t, then that’s a place where we can find a story. Paul Kearney’s Monarchies of God is a great series that has a “settling new lands” sequence. In the science fiction realm of colonizing outer space, you also have such works as Kim Stanley Robinson’s Red Mars (and its colorful sequels).

Terra nullius

Whenever people moved into new land, there was always the possibility that they were the first ones there. It happened about 20,000 years ago in Alaska, about 50,000 in Australia, and less than 1,000 in Hawaii. Even in the Old World, there were firsts, sometimes even in recorded history. Iceland, for example, was uninhabited all the way through Roman times. And in space, everywhere is a first, at least until we find evidence of alien life.

Settling “no man’s land” is different from settling in land that’s already inhabited, and that would show in a story with that setting. There are no outsiders to worry about. All conflict is either internal to the colonists’ population or environmental. That makes for a harder story to write, I think, but one more suited to character drama and the extended nature of books and TV series. The story doesn’t have to be entirely without action, though; it’s simply that a natural disaster would be more likely than a war.

This is one place where we can—must—draw the distinction between space-based sci-fi and earthly fiction or fantasy. On earth (or a similar fictitious world), we’re not alone. There are animals, plants, pests everywhere we go. We have sources of food and water, but also of disease. In deep space, such as a story about colonizing the asteroid belt, there’s nothing out there. Nothing living, at least. Settlers would have to bring their own food, their own water, their own shelter. They would need to create a closed, controlled ecosystem. But that doesn’t leave much room for the “outside” work of exploration, except as a secondary plot.

Go forth

I’m not ashamed to admit that I could read an entire book about nothing but the early days of a fictional colony, whether in the Americas or on an alien planet. I’ll also admit that I’m not your average reader. Most people want some sort of action, some drama, some reason for being there in the first place. And there’s nothing wrong with that.

But let’s look at that question. Why does the colony exist at all? The Europeans were looking for wealth at first, with things like religious freedom and manifest destiny coming later on. The exploration of space appears to be headed down the same path, with commercial concerns taking center stage, though pure science is another competitor. Even simple living space can be a reason to venture forth. That seems to have been the case for the Vikings, and plenty of futuristic stories posit a horribly overcrowded Earth and the need to claim the stars.

Once you have a reason for having a colonial settlement, then you can turn to its nature. The English made villages and towns, the French trading posts. Antarctica isn’t actually settled—by international agreement, it can’t be—but the scientific outposts there point to another possibility. If there are preexisting settlements, like native cities, then there’s the chance that the colonists might move into one of them instead of setting up their own place. That’s basically what happened to Tenochtitlan, now known as Mexico City.

Colonies are interesting, both in real history and in fiction. They can work as settings in many different genres, including space opera, fantasy, steampunk (especially the settling of the Wild West), and even mystery (we still don’t know what really happened at Roanoke Island). Even just a colonial backdrop can add flavor to a story, giving it an outside pressure, whether by restless natives or the cold emptiness of space. A colony is an island, in a sense, an island in a sea of hostility, fertile ground for one’s imagination.

Alternate histories

For a lot of people, especially writers and other dreamers, one of the great questions, the one that provokes the most thought, debate, and even argument, is “What if?” What if one single part of history was changed? What would be the result? These alternate histories are somewhat popular, as fictional sub-genres go, and they aren’t just limited to the written word. It’s a staple of Star Trek series, for example, to travel into the past or visit the “mirror universe”, either of which involves a specific change that can completely alter the present (their present, mind you, which would be our future).

What-if scenarios are also found in nonfiction works. Look at the history section of your favorite bookstore, digital or physical. You’ll find numerous examples asking things like “What if the D-Day invasion failed?” or (much earlier in the timeline) “What if Alexander had gone west to conquer, instead of east?” Some books focus on a single one of these questions, concocting an elaborate alternative to our known history. Others stuff a number of possibilities in a single work, necessarily giving each of them a less-detailed look.

And altering the course of history is a fun diversion, too. Not only that, but it can make a great story seed. You don’t have to write a novel of historical fiction to use “real” history and change things around a little bit. Plenty of fantasy is little more than a retelling of one part of the Middle Ages, with only the names changed to protect the innocent. Sci-fi also benefits, simply because history, in the broadest strokes, does repeat itself. The actors are different, but the play remains the same.

Divergence

So, let’s say you do want to construct an alternate timeline. That could easily fill an entire book—there’s an idea—but we’ll stick to the basics in this post. First and foremost, believability is key. Sure, it’s easy to say that the Nazis and Japanese turned the tide in World War II, eventually invading the US and splitting it between them. (World War II, by the way, is a favorite for speculators. I don’t know why.) But there’s more to it than that.

The Butterfly Effect is a well-known idea that can help us think about how changing history can work. As in the case of the butterfly flapping its wings and causing a hurricane, small differences in the initial conditions can grow into much larger repercussions. And the longer the time since the breakaway point, the bigger the changes will be.

I’m writing this on September 21, and some of the recent headlines include the Emmy Awards, the Greek elections, and the Federal Reserve’s decision to hold interest rates rather than raise them. Change any bit of any of these, and the world today isn’t going to be much different. Go back a few years, however, and divergences grow more numerous, and they have more impact. Obviously, one of the biggest events of the current generation is the World Trade Center attacks in 2001. Get rid of those (as Family Guy did in one of their time-travel episodes), and most of the people alive today would still be here, but the whole world would change around them.

It’s not hard to see how this gets worse as you move the breakaway back in time. Plenty of people—including some that might be reading this—have ancestors that fought in World War II. And plenty of those would be wiped out if a single battle went differently, if a single unit’s fortunes were changed. World War I, the American Civil War (or your local equivalent), and so on: each turning point causes more and more difference in the final outcome. Go back in time to assassinate Genghis Khan before he began his conquests, for instance, and millions of people in the present never would have been born.

Building a history

It’s not just the ways that things would change, or the people that wouldn’t have lived. Those are important parts of an alternate history, but they aren’t the only parts. History is fractal. The deeper you go, the more detail you find. You could spend a lifetime working out the ramifications of a single change, or you could shrug it off and focus on only the highest levels. Either way is acceptable, but they fit different styles.

The rest of this post is going to look at a few different examples of altering history, of changing a single event and watching the ripples in time that it creates. They go in reverse chronological order, and they’re nothing more than the briefest glances. Deeper delving will have to wait for later posts, unless you want to take up the mantle.

Worked example 1: The Nazi nuke

Both ways of looking at alternate timelines, however, require us to follow logical pathways. Let’s look at the tired, old scenario of Germany getting The Bomb in WWII. However it happens, it happens. It’s plausible enough—the Axis powers drove away a great deal of scientific talent in those years, including Albert Einstein and Enrico Fermi. It’s not that great a leap to say that the atomic bomb could be pushed up a couple of years.

But what does that do to the world? Well, it obviously gives the Axis an edge in the war; given their leaders’ tendencies, it’s not too much of a stretch to say that such a weapon would have been used, possibly on a large city like London. (In the direst scenario, it’s used on Berlin, to stop the Red Army.) Nuclear weapons would still have the same production problems they had in our 1940s, so we wouldn’t have a Cold War-era “hundreds of nukes ready to launch” situation. At most, we’d have a handful of blasts, most likely on big cities. That would certainly be horrible, but it wouldn’t really affect the outcome of the war that much, only the scale of destruction. The Allies would likely end up with The Bomb, too, whether through parallel development, defections, or espionage. In this case, the Soviets might get it earlier, as well, which might lead to a longer, darker Cold War.

There’s not really a logical path from an earlier, more widespread nuclear weapon to a Nazi invasion of America, though. Russia, yes, although its army would have something to say about that. But invading the US would require a severe increase in manpower and a series of major victories in Europe. (The Japanese, on the other hand, wouldn’t have nearly as much trouble, especially if they could wrap up their problems with China.) The Man in the High Castle is a good story, but we need more than one change to make it happen.

Worked example 2: The South shall rise

Another what-if that’s popular with American authors involves the Civil War. Specifically, what if the South, the Confederacy, had fought the Union to a stalemate, or even won? On the surface, this one doesn’t have as much military impact, although we’d need to tweak the manpower and supply numbers in favor of our new victors. (Maybe France offered their help or something.) Economically and socially, however, there’s a lot of fertile ground for change.

The first and most obvious difference would be that, in 1865 Dixie, slavery would still exist. That was, after all, the main reason for the war in the first place. So we can accept that as a given, but that doesn’t necessarily mean it would be the case 150 years later. Slavery started out as an economic measure as much as a racial one. Plantations, especially those growing cotton, needed a vast amount of labor. Slaves were seen as the cheapest and simplest way of filling that need. The racial aspects only came later.

Even by the end of the Civil War, however, the Industrial Revolution was coming into full force. Steam engines were already there, and railroads were growing all around. It’s not too far-fetched to see the South investing in machinery, especially if it turns out to be a better, more efficient, less rebellious method of harvesting. It’s natural—for a Yankee, anyway—to think of Southerners as backwards rednecks, but an independent Confederacy could conceivably be quite advanced in this specific area. (There are problems with this line of reasoning, I’ll admit. One of them is that the kind of cotton grown in the South isn’t as amenable to machine harvesting as others. Still, any automation would cut down on the number of slaves needed.)

The states of the Confederacy depended on agriculture, and that wouldn’t change much. Landowners would be reluctant to give up their slaves—Southerners, as I know from personal experience, tend to be conservative—but it’s possible that they could be wooed by the economic factors. The more farming can be automated, the less sense it makes for servile labor. Remember, even though slaves didn’t have to be paid, they did have costs: housing, for example. (Conversely, slavery can still exist if the economic factors don’t add up in favor of automation. We can see the same thing today, with low-wage, illegal immigrant labor, a common “problem” in the South.)

Socially, of course, the ramifications of a Confederate victory would be much more important. It’s very easy to imagine the racism of slavery coming to the fore, even if automation ends the practice itself. That part might not change much from our own history, except in the timing. Persecuted, separated, or disfavored minorities are easy to find in the modern world, and their experiences can be a good guide here. Not just the obvious examples—the Palestinians, the Kurds, and the natives of America and Australia—but those less noteworthy, like the Chechens or even the Ainu. Revolt and rebellion might become common, even to the point of developing autonomous regions.

This might even be more likely, given the way the Confederacy was made. It was intended to be a weak national government with strong member states, more like the EU than the US. That setup, as anyone familiar with modern Europe will attest, almost nurtures the idea of secession. It’s definitely within the realm of possibility that the Confederate states would break up even further, maybe even to the point of individual nations, and a “black” state might splinter off from this. If you look closely, you can see that the US became much more centralized after the Civil War, giving more and more power to the federal government. The Confederates might have to do that, too, which would smack of betrayal.

Worked example 3: Gibbon’s nightmare

One of the other big “change the course of history” events is the fall of the Roman Empire, and that will be our last example today. How we prevent such a collapse isn’t obvious. Stopping the barbarian hordes from sacking Rome really only buys time; the whole system was hopelessly corrupt already. For the sake of argument, let’s say that we found the single turning-point that will stop the whole house of cards from falling. What does this do to history?

Well, put simply, it wrecks it. The Western world of the last fifteen hundred years is a direct result of the Romans and their fall. Now, we can salvage a lot by deciding that the ultimate event merely shifted power away from Rome, into the Eastern (Byzantine) Empire centered on Constantinople. That helps a lot, since the Goths and Vandals and Franks and whatnot mostly respected the authority of the Byzantines, at least in the beginning. Doing it like this might delay the inevitable, but it’s not the fun choice. Instead, let’s see what happens if the Roman Empire as a whole remains intact. Decadent, perhaps, and corrupt at every level, but whole. What happens next?

If we can presume some way of keeping it together over centuries, down to the present day, then we have a years-long project for a team of writers, because almost every aspect of life would be different. The Romans had a slave economy (see above for how that plays out), a republican government, and some pretty advanced technology, especially compared to their immediate successors. We can’t assume that all of this would carry down through the centuries, though. Even the Empire went through its regressive times. The modern world might be 400 years more advanced, but it’s no less likely that development would be set back by a hundred or more years. The Romans liked war, and war is a great driver of technology, but a successful empire eventually runs out of people to fight, and a Pax Romana can lead to stagnation.

But the Dark Ages wouldn’t have happened, not like they really did. The spread of Islam might have been stopped early on, or simply contained in Arabia, but that would have also prevented the Islamic world’s advances in mathematics and other sciences. The Mongol invasions could have been stopped by imperial armies, or they could have been the ruin of Rome on a millennium-long delay. Exploration might not have happened at the same pace, although expeditions to the Orient would be an eventual necessity. (It gets really fun if you posit that China becomes a superpower in the same timeline. You could even have a medieval-era Cold War.)

Today’s world, in this scenario, would be different in every way, especially in the West. Medieval Europe was held together by the Christian Church. Our hypothetical Romans would have that, sure, but also the threat of empire to go with it. Instead of the patchwork of nation-states that marked the Middle Ages, you would have a hegemony. There might be no need for the Crusades, but also no need for the great spiritual works iconic of the Renaissance. And how would political theory grow in an eternal empire? It likely wouldn’t; it’s only when people can see different states with different systems of government that such things come about. If everybody is part of The One Empire, what use is there in imagining another way of doing things?

I could go on, but I won’t. This is a well without a bottom, and it only gets deeper as you fall further. It’s the Abyss, and it can and will stare back at you. One of my current writing projects involves something like an alternate timeline—basically, it’s a planet where Native Americans were allowed to develop without European influence—and it has taken me down roads I’ve never dreamed of traveling. Even after spending hundreds of hours thinking about it, I still don’t feel like I’ve done more than scratch the surface. But that’s worldbuilding for you.