Future past: steam

Let’s talk about steam. I don’t mean the malware installed on most gamers’ computers, but the real thing: hot, evaporated water. You may see it as just something given off by boiling stew or dying cars, but it’s so much more than that. For steam was the fluid that carried us into the Industrial Revolution.

And whenever we talk of the Industrial Revolution, it’s only natural to think about its timing. Did steam power really have to wait until the 18th century? Is there a way to push back its development by a hundred, or even a thousand, years? We can’t know for sure, but maybe we can make an educated guess or two.


Obviously, knowledge of steam itself dates back to the first time anybody ever cooked a pot of stew or boiled their day’s catch. Probably earlier than that, if you consider natural hot springs. However you take it, they didn’t have to wait around for a Renaissance and an Enlightenment. Steam itself is embarrassingly easy to make.

Steam is a gas; it’s the gaseous form of water, in the same way that ice is its solid form. Now, ice forms naturally if the temperature gets below 0°C (32°F), so quite a lot of places on Earth can find some way of getting to it. Steam, on the other hand, requires us to take water to its boiling point of 100°C (212°F) at sea level, slightly lower at altitude. Even the hottest parts of the world never see temperatures that high, so steam is, with a few exceptions like that hot spring I mentioned, purely artificial.

Cooking is the main way we come into contact with steam, now and in ages past. Modern times have added others, like radiators, but the general principle holds: steam is what we get when we boil water. Liquid turns to gas, and that’s where the fun begins.


The ideal gas law tells us how an ideal gas behaves. Now, that’s not entirely appropriate for gases in the real world, but it’s a good enough approximation most of the time. In algebraic form, it’s PV = nRT, and it’s the key to seeing why steam is so useful, so world-changing. Ignore R, because it’s a constant that doesn’t concern us here; the other four variables are where we get our interesting effects. In order: P is the pressure of a gas, V is its volume, n is how much of it there is (in moles), and T is its temperature.

You don’t need to know how to measure moles to see what happens. When we turn water into steam, we do so by raising its temperature. By the ideal gas law, increasing T must be balanced out by a proportional increase on the other side of the equation. We’ve got two choices there, and you’ve no doubt seen them both in action.

First, gases have a natural tendency to expand to fill their containers. That’s why smoke dissipates outdoors, and it’s why that steam rising from the pot gets everywhere. Thus, increasing V is the first choice in reaction to higher temperatures. But what if that’s not possible? What if the gas is trapped inside a solid vessel, one that won’t let it expand? Then it’s the backup option: pressure.

A trapped gas that is heated increases in pressure, and that is the power of steam. Think of a pressure cooker or a kettle, either of them placed on a hot stove. With nowhere to go, the steam builds and builds, until it finds relief one way or another. (In some cases, that relief comes in the more dramatic form of a rupture, but household appliances rarely get that far.)
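A quick back-of-the-envelope sketch of that buildup, using the ideal gas law with volume and amount held fixed; the starting figures here are illustrative, not measurements:

```python
# Sketch of the ideal gas law at constant volume (a sealed vessel).
# With V and n fixed, PV = nRT means P/T is constant, so P2 = P1 * (T2 / T1).
# Temperatures must be absolute (kelvins) for the ratio to work.

def pressure_after_heating(p1_kpa: float, t1_c: float, t2_c: float) -> float:
    """Pressure of a trapped gas after heating, by the ideal gas law."""
    t1_k = t1_c + 273.15
    t2_k = t2_c + 273.15
    return p1_kpa * (t2_k / t1_k)

# Seal room-temperature air (25°C, ~101 kPa) and heat it to 100°C:
p2 = pressure_after_heating(101.0, 25.0, 100.0)
print(round(p2, 1))  # ~126.4 kPa: a 25% increase from heat alone
```

And that’s before any water actually boils; the steam itself adds far more gas (a bigger n) to the same sealed volume.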

As pressure is force per unit of area, and there’s not a lot of area in the spout of a teapot, a rising temperature can produce a lot of force. Enough to scald, enough to push. Enough to…move?
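To put a rough number on that push, here’s the force-equals-pressure-times-area arithmetic, with an assumed spout size and an assumed one atmosphere of excess pressure:

```python
# Pressure is force per unit area, so force = pressure * area.
# Both figures below are assumptions for the sake of illustration.

import math

GAUGE_PRESSURE_PA = 101_325           # 1 atm above ambient, in pascals
SPOUT_RADIUS_M = 0.01                 # a 1 cm spout radius (assumed)

area = math.pi * SPOUT_RADIUS_M ** 2  # area of the spout opening, in m^2
force = GAUGE_PRESSURE_PA * area      # resulting force, in newtons

print(round(force, 1))  # ~31.8 N, roughly the weight of a 3 kg mass
```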


That is the basis for steam power and, by extension, many of the methods of power generation we still use today. A lot of steam funneled through a small area produces a great amount of force. That force is then able to run a pump, a turbine, or whatever is needed, from boats to trains. (And even cars: some of the first automobiles were steam-powered.)

Steam made the Industrial Revolution possible. It made most of what came after possible, as well. And it gave birth to the retro fad of steampunk, because many people find the elaborate contraptions needed to haul superheated water vapor around to be aesthetically pleasing. Yet there is a problem. We’ve found steam-powered automata (e.g., toys, “magic” temple doors) from the Roman era, so what happened? Why did we need over 1,500 years to get from bot to Watt?

Unlike electricity, where there’s no obvious technological roadblock standing between Antiquity and advancement, steam power might legitimately be beyond classical civilizations. Generation of steam is easy—as I’ve said, that was done with the first cooking pot at the latest. And you don’t need an ideal gas law to observe the steam in your teapot shooting a cork out of the spout. From there, it’s not too far a leap to see how else that rather violent power can be utilized.

No, generating small amounts of steam is easy, and it’s clear that the Romans (and probably the Greeks, Chinese, and others) could do it. They could even use it, as the toys and temples show. So why didn’t they take that next giant leap?

The answer here may be a combination of factors. First is fuel. Large steam installations require metaphorical and literal tons of fuel. The Victorian era thrived on coal, as we know, but coal is a comparatively recent discovery. The Romans didn’t have it available. They could get by with charcoal, but you need a lot of that, and they had much better uses for it. It wouldn’t do to cut down a few acres of forest just to run a chariot down to Ravenna, even for an emperor. Nowadays, we can make steam by many different methods, including renewable variations like solar boilers, but that wasn’t an option back then. Without a massive fuel source, steam—pardon the pun—couldn’t get off the ground.

Second, and equally important, is the quality of the materials that were available. A boiler, in addition to eating fuel at a frantic pace, also has some pretty exacting specifications. It has to be built strong enough to withstand the intense pressures that steam can create (remember our ideal gas law); ruptures were a deadly fixture of the 19th century, and that was with steel. Imagine trying to do it all with brass, bronze, and iron! On top of that, all your valves, tubes, and other machinery must be built to the same high standard. A leak doesn’t just let gas escape; it bleeds away efficiency.

The ancients couldn’t pull that off. Not for lack of trying, mind you, but they weren’t really equipped for the rigors of steam power. Steel was unknown, except in a few special cases. Rubber was an ocean away, on a continent they didn’t know existed. Welding (a requirement for sealing two metal pipes together so air can’t escape) probably wasn’t happening.

Thus, steam power may be too far into the future to plausibly fit into a distant “retro-tech” setting. It really needs improvements in a lot of different areas. That’s not to say that steam itself can’t fit—we know it can—but you’re not getting Roman railroads. On a small scale, using steam is entirely possible, but you can’t build a classical civilization around it. Probably not even a medieval one, at that.

No, it seems that steam as a major power source must wait until the rest of technology catches up. You need a fuel source, whether coal or something else. You absolutely must have ways of creating airtight seals. And you’ll need a way to create strong pressure vessels, which implies some more advanced metallurgy. On the other hand, the science isn’t entirely necessary; if your people don’t know the ideal gas law yet, they’ll probably figure it out pretty soon after the first steam engine starts up. And as for finding uses, well, they’d get to that part without much help, because that’s just what we do.

Future past: Electricity

Electricity is vital to our modern world. Without it, I couldn’t write this post, and you couldn’t read it. That alone should show you just how important it is, but if not, then how about anything from this list: air conditioning, TVs, computers, phones, music players. And that’s just what I can see in the room around me! So electricity seems like a good start for this series. It’s something we can’t live without, but its discovery was relatively recent, as eras go.


The knowledge of electricity, in some form, goes back thousands of years. The phenomenon itself, of course, began in the first second of the universe, but humans didn’t really get around to investigating it until they started looking into just about everything else.

First came static electricity. That’s the kind we’re most familiar with, at least when it comes to directly feeling it. It gives you a shock in the wintertime, it makes your clothes stick together when you pull them out of the dryer, and it’s what causes lightning. At its source, static electricity is nothing more than an imbalance of electrons righting itself. Sometimes, that’s visible, whether as a spark or a bolt, and it certainly doesn’t take modern convenience to produce such a thing.

The root electro-, source of electricity and probably a thousand derivatives, originally comes from Greek. There, it referred to amber, that familiar resin that occasionally has bugs embedded in it. Besides that curious property, amber also has a knack for picking up a static charge, much like wool and rubber. It doesn’t take Ben Franklin to figure that much out.

Static electricity, however, is one-and-done. Once the charge imbalance is fixed, it’s over. That can’t really power a modern machine, much less an era, so the other half of the equation is electric current. That’s the kind that runs the world today, and it’s where we have volts and ohms and all those other terms. It’s what runs through the wires in your house, your computer, your everything.


The study of current, unlike static electricity, came about comparatively late (or maybe it didn’t; see below). It wasn’t until the 18th century that it really got going, and most of the biggest discoveries had to wait until the 19th. The voltaic pile—which later evolved into the battery—electric generators, and so many more pieces that make up the whole of this electronic age, all of them were invented within the last 250 years. But did they have to be? We’ll see in a moment, but let’s take a look at the real world first.

Although static electricity is indeed interesting, and not just for demonstrations, current makes electricity useful, and there are two ways to get it: make it yourself, or extract it from existing materials. The latter is far easier, as you might expect. Most metals are good conductors of electricity, and a number of chemical reactions can produce a bit of voltage. That’s the essence of the battery: two different metals, immersed in an acidic solution, will react in different ways, creating a potential. Volta figured this much out, so we measure the potential in volts. (Ohm worked out how voltage and current are related by resistance, so resistance is measured in ohms. And so on, through essentially every scientist of that age.)

Using wires, we can even take this cell and connect it to another, increasing the amount of voltage and power available at any one time. Making the cells themselves larger (greater cross-section, more solution) creates a greater reserve of electricity. Put the two together, and you’ve got a way to store as much as you want, then extract it however you need.
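Those two rules (series connections for voltage, parallel or larger cells for capacity) are simple arithmetic, sketched here with made-up cell figures rather than measurements:

```python
# Idealized battery-bank rules: series adds voltages, parallel adds capacity.
# Real cells have internal resistance and other losses; this ignores them.

def battery_bank(cell_volts: float, cell_amp_hours: float,
                 in_series: int, in_parallel: int) -> tuple[float, float]:
    """Total voltage and capacity of a bank of identical cells."""
    voltage = cell_volts * in_series          # each series cell stacks potential
    capacity = cell_amp_hours * in_parallel   # each parallel string adds reserve
    return voltage, capacity

# Six crude 0.9 V cells per string, two strings side by side (assumed numbers):
v, ah = battery_bank(0.9, 1.0, in_series=6, in_parallel=2)
print(v, ah)  # 5.4 V at 2.0 Ah
```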

But batteries eventually run dry. What the modern age needed was a generator. To make that, you need to understand that electricity is but one part of a greater force: electromagnetism. The other half, as you might expect, is magnetism, and that’s the key to generating power. A moving magnetic field induces an electrical potential, which can drive a current. And one of the easiest ways to do it is by rotating a magnet inside a coil of wire. (As an experiment, I’ve seen this done with one of those hand-cranked pencil sharpeners, so it can’t be that hard to construct.)

One problem is that the electricity this sort of generator makes isn’t constant. Its potential, assuming you’ve got a circular setup, follows a sine-wave pattern from positive to negative. (Because you can have negative volts, remember.) That’s alternating current, or AC, while batteries give you direct current, DC. The difference between the two can be very important, and it was at the heart of one of science’s greatest feuds—Edison and Tesla—but it doesn’t mean too much for our purposes here. Both are electric.
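That sine-wave swing is easy to see in numbers. A minimal sketch, sampling an assumed 10-volt-peak, 50 Hz output at quarter-cycle points:

```python
# Instantaneous voltage of an idealized sinusoidal generator:
# v(t) = V_peak * sin(2 * pi * f * t). The figures are illustrative.

import math

def ac_voltage(v_peak: float, freq_hz: float, t_s: float) -> float:
    """AC voltage at time t (seconds) for a sinusoidal generator."""
    return v_peak * math.sin(2 * math.pi * freq_hz * t_s)

# One full 50 Hz cycle (0.02 s), sampled every quarter cycle:
samples = [round(ac_voltage(10.0, 50.0, t / 200.0), 1) for t in range(5)]
print(samples)  # swings up to +10 V, back through zero, down to -10 V
```

A battery, by contrast, would just read a constant value at every sample: that’s the whole difference between AC and DC.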


What does it take to create electricity? Is there anything special about it that had to wait until 1800 or so?

As a matter of fact, not only was it possible to have something electrical before the Enlightenment, but it may have been done…depending on who you ask. The Baghdad battery is one of those curious artifacts that has multiple plausible explanations. Either it’s a common container for wine, vinegar, or something of that sort, or it’s a 2000-year-old voltaic cell. The simple fact that this second hypothesis isn’t immediately discarded answers one question: no, nothing about electricity requires advanced technology.

Building a rudimentary battery is so easy that it almost has to have been done before. Two coins (of different metals) stuck into a lemon can give you enough voltage to feel, especially if you touch the wires to your tongue, like some people do with a 9-volt. Potatoes work almost as well, but any fruit or vegetable whose interior is acidic can provide the necessary solution for the electrochemical reactions to take place. From there, it’s not too big a step to a small jar of vinegar. Metals known in ancient times can get you a volt or two from a single cell, and connecting them in series nets you even larger potentials. It won’t be pretty, but there’s absolutely nothing insurmountable about making a battery using only technology known to the Romans, Greeks, or even Egyptians.

Generators are a bit harder. First off, you need magnets. Lodestones work; they’re naturally magnetized, possibly by lightning, and their curious properties were first noticed as early as 2500 years ago. But they’re rare and hard to work with, as well as probably being full of impurities. Still, it doesn’t take a genius (or an advanced civilization) to figure out that these can be used to turn other pieces of metal (specifically iron) into magnets of their own.

Really, then, the creation of magnets needs ironworking, so generators are beyond the Bronze Age by definition. But they aren’t beyond the Iron Age, so Roman-era AC power isn’t impossible. They wouldn’t have understood how it works, but they had the means to make it. The pieces are there.

The hardest part after that would be wire, because shuffling current around requires it. Copper is a nice balance of cost and conductivity, which is why we use it so much today; gold is far more ductile, while silver offers better conduction properties, but both are too expensive to use for much even today. The latter two, however, have been seen in wire form since ancient times, which means that ages past knew the methods. (Drawn wire didn’t come about until the Middle Ages, but it’s not the only way to make it.) So, assuming that our distant ancestors could figure out why they needed copper wire, they could probably come up with a way to produce it. It might not have rubber or plastic insulation, but they’d find something.

In conclusion, then, even if the Baghdad battery is nothing but a jar with some leftover vinegar inside, that doesn’t mean electricity couldn’t be used by ancient peoples. Technology-wise, nothing at all prevents batteries from being created in the Bronze Age. Power generation might have to wait until the Iron Age, but you can do a lot with just a few cells. And all the pieces were certainly in place in medieval times. The biggest problem after making the things would be finding a use for them, but humans are ingenious creatures. They’d work something out.

Future past: Introduction

With the “Magic and Tech” series on hiatus right now (mostly because I can’t think of anything else to write in it), I had the idea of taking a look at a different type of “retro” technological development. In this case, I want to look at different technologies that we associate with our modern world, and see just how much—or how little—advancement they truly require. In other words, let’s see just what could be made by the ancients, or by medieval cultures, or in the Renaissance.

I’ve been fascinated by this subject for many years, ever since I read the excellent book Lost Discoveries. And it’s very much a worldbuilding pursuit, especially if you’re building a non-Earth human culture or an alternate history. (Or both, in the case of my Otherworld series.) As I’ve looked into this particular topic, I’ve found a few surprises, so this is my chance to share them with you, along with my thoughts on the matter.

The way it works

Like “Magic and Tech”, this series (“Future Past”; you get no points for guessing the reference) will consist of an open-ended set of posts, mostly coming out whenever I decide to write them. Each post will be centered on a specific invention, concept, or discovery, rather than the much broader subjects of “Magic and Tech”. For example, the first will be that favorite of alt-historians: electricity. Others will include the steam engine, various types of power generation, and so on. Maybe you can’t get computers in the Bronze Age—assuming you don’t count the Antikythera mechanism—but you won’t believe what you can get.

Every post in the series will be divided into three main parts. First will come an introduction, where I lay out the boundaries of the topic and throw in a few notes about what’s to come. Next is a “theory” section: a brief description of the technology as we know it. Last and longest is the “practice” part, where we’ll look at just how far we can turn back the clock on the invention in question.

Hopefully, this will be as fun to read as it is to write. And I will get back to “Magic and Tech” at some point, probably early next year, but that will have to wait until I’m more inspired on that front. For now, let’s forget the fantasy magic and turn our eyes to the magic of invention.

On eclipses and omens

(I’m writing this post early, as I so often do. For reference, today, from the author’s perspective, is July 17, 2017. In other words, it’s 5 weeks before the posting date. In that amount of time, a lot can happen, but I can guarantee one thing: it will be cloudy on August 21. Especially in the hours just after noon.)

Today is a grand day, a great time to be alive, for it is the day of the Great American Eclipse. I’m lucky—except for the part where the weather won’t cooperate—because I live in the path of totality. Some Americans will have to travel hundreds of miles to see this brief darkening of the sun; I only have to step outside. (And remember the welding glasses or whatever, but that’s a different story.)

Eclipses of any kind are a spectacle. I’ve seen a handful of lunar ones in my 33 years, but never a solar eclipse. Those of the moon, though, really are amazing, especially the redder ones. But treating them as a natural occurrence, as a simple astronomical event that boils down to a geometry problem, that’s a very modern view. In ages past, an eclipse could be taken as any number of things, many of them bad. For a writer, that can create some very fertile ground.


Strictly speaking, an eclipse is nothing more unusual than any other alignment of celestial bodies. It’s just a lot more noticeable, that’s all. The new moon is always invisible, because its dark side is facing us, but our satellite’s orbital inclination means that it often goes into its new phase above or below the sun, relative to the sky. Only rarely does it cross directly in front of the solar disk from our perspective. Conversely, it’s rare—but not quite as rare—for the moon to fall squarely in the shadow created by the Earth when it’s full.

The vagaries of orbital mechanics mean that not every eclipse is the same. Some are total, like the one today, where the shadowing body completely covers the eclipsed one. For a solar eclipse, that means the moon is right between us and the sun—as viewed by certain parts of the world—and we’ll have two or three minutes of darkness along a long, narrow path. On the flip side, lunar eclipses are viewable by many more people, as we are the ones doing the shadowing.

Another possibility is the partial eclipse, where the alignment doesn’t quite work out perfectly; people outside of the path of totality today will only get a partial solar eclipse, and that track is so narrow that my aunt, who lives less than 15 miles to the south, is on its uncertain edge. Or you might get an annular solar eclipse, where the moon is at its apogee (farthest point in its orbit), so it isn’t quite big enough to cover the whole sun, instead leaving a blinding ring. And then there’s the penumbral lunar eclipse, essentially a mirrored version of the annular; in this case, the moon doesn’t go through the Earth’s full shadow, and most people barely even notice anything’s wrong.

However it happens, the eclipse is an astronomical eventuality. Our moon is big enough and close enough to cover the whole sun, so it’s only natural that we have solar eclipses. (On Mars, it wouldn’t work, because Phobos and Deimos are too tiny. Instead, you’d have transits, similar to the transit of Venus a few years ago.) Similarly, the moon is close enough to fall completely within its primary’s shadow on some occasions, so lunar eclipses were always going to happen.
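The “big enough and close enough” claim is a coincidence you can check with a little geometry: the moon and sun subtend almost exactly the same angle in our sky. Approximate diameters and mean distances below:

```python
# Apparent angular size of a body: 2 * atan(diameter / (2 * distance)).
# Figures are rounded textbook values, so treat the output as approximate.

import math

def angular_size_deg(diameter_km: float, distance_km: float) -> float:
    """Apparent angular diameter of a celestial body, in degrees."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

sun = angular_size_deg(1_392_000, 149_600_000)  # sun at 1 AU
moon = angular_size_deg(3_474, 384_400)         # moon at its mean distance
print(round(sun, 2), round(moon, 2))            # both about half a degree
```

At perigee the moon looks slightly larger than the sun, so it can cover the disk completely (a total eclipse); at apogee it looks slightly smaller, leaving the blinding ring of an annular one.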

These events are regular, precise. We can predict them years, even centuries in advance. Gravity and orbital mechanics give alignments a clockwork rhythm that can only change if acted upon by an outside body.

Days of old

In earlier days, some people saw a much different outside body at work in the heavens. Even once a culture reaches a level of mathematical and astronomical advancement where eclipses become predictable, that doesn’t mean the average person isn’t going to continue seeing them as portents. How many people believe in astrology today?

And let’s face it: an eclipse, if you don’t really know what’s going on, might be scary. Here’s the sun disappearing before our very eyes. Or the moon. Or, if it’s a particularly colorful lunar eclipse, then the moon isn’t vanishing, but turning red. You know, the color of blood. Somebody who doesn’t understand orbits and geometry would be well inclined to think something strange is going on.

Writers of fantasy and historical fiction can use this to great effect, because a rare event like an eclipse is a perfect catalyst for change and conflict. People might see it as an omen, a sign of impending doom. Then, seeing it, they might be moved to bring about the doom themselves. Seven minutes of darkness—the most we on Earth can get—might not be too bad, but a fantasy world with a larger moon may have solar eclipses that last for an hour or more, like our lunar eclipses today. That could be enough time to unnerve even the hardiest souls.

Science fiction can get into the act here, too, as in Isaac Asimov’s Nightfall. If a culture only sees an eclipse once every thousand years or so, then even the memory of the event might be forgotten by the next time it comes around. And then what happens? In the same vein, the eclipse of Pitch Black releases the horrors of that story; working that out provides a good mystery to be solved, while the partial phase offers a practical method of building tension.

Beyond the psychological effects and theological implications of an eclipse, they work well in any case where astronomy and the predictive power of science play a role. Recall, if you will, the famous story of Columbus using a known upcoming eclipse as a way to scare an indigenous culture that lacked the knowledge of its arrival. Someone who has that knowledge can very easily lord it over those who do not, which sets up potential conflicts—or provides a way out of them. “Release me, or I will take away the sun” works as a threat, if the people you’re threatening can’t be sure the sun won’t come back.

In fantasy, eclipses can even fit into the backstory. The titular character of my novel Nocturne was born during a solar eclipse (I wrote the book because of the one today, in fact), and that special quality, combined with the peculiar magic system of the setting, provides most of the forward movement of the story. On a more epic level, if fantasy gods wander the land, one of them might have the power to make his own eclipses. A good way of keeping the peasants and worshippers in line, wouldn’t you say?

However you do it, treating an eclipse as something amiss in the heavens works a lot better for a story than assuming it’s a normal celestial occurrence. Yes, they happen. Yes, they’re regular. But if they’re unexpected, then they can be so much more useful. But that’s true of science in general, at least when you start melding it with fantasy. The whole purpose of science is to explain the world in a rational manner, but fantasy is almost the antithesis of rationality. So, by keeping eclipses mysterious, momentous, portentous occasions, we let them stay in the realm of fantasy. For today, I think that’s a good thing.

On the elements

Very recently, a milestone was reached, an important goal in the study of chemistry. The seventh row of the periodic table was officially filled in. Now, almost nobody outside of a few laboratories cares anything about oganesson and tennessine (nice to see that my state finally gets its own element, though), and they’ll probably never have any actual use, but they’re there, and now we know they are.

Especially in science fiction, there’s the trope of the “unknown” element that has or allows some sort of superpowers. In some cases, this takes the form of a supposed chemical element, such as the fictitious “elerium”, “adamantium”, or even “unobtainium”. Other works instead use something that could better be described as a compound (“kryptonite”) or something else entirely (“element zero”). But the idea remains the same.

So this post is a quick overview of the elements we know. As a whole, science is quite confident that we do know all the elements in nature. Atomic theory is pretty clear on that point; the periodic table has no more “gaps” in the middle, and we’ve now filled in all the ones at the end. But element 118 only got named in 2016, and that’s proof that we didn’t always know everything.

The ancients

The classical idea of “element” wasn’t exactly chemically sound. We know the Greek division of earth, air, fire, and water, a four-way distinction still used in fantasy literature and other media; other cultures had similar concepts, if not always the same divisions.

But they also knew of chemical elements, particularly a few that occur naturally in “pure” form. Gold, silver, copper, tin, and lead are the ones most people recognize as being “prehistoric”. (Native copper is relatively rare, but it pops up in a few places, and most of those, coincidentally enough, show evidence of a bronze-working culture nearby.) Carbon, in the form of charcoal, doesn’t take too much work to purify. Meteorites provided early iron. Sulfur can be found anywhere there’s a volcano—probably a good reason to associate the smell of “brimstone” with eternal punishment. And don’t forget “quicksilver”, or mercury.

We’ve also got evidence of bismuth and antimony known in something like elemental form. Both found medicinal uses, despite being quite toxic. (Mercury was the same, and it’s even worse, because it’s a liquid at room temperature.) And then there’s the curious case of platinum. Some evidence points to it being used on either side of the Atlantic in olden times, which is good news for the fantasy types who need a coin more valuable than gold.

The alchemists

For most of Western history, chemists—or what passed for them—tended to focus on compounds rather than isolating elements. However, there were a few advances on that front, too. Albertus Magnus separated arsenic from its compounding partners in the 13th century, much to the delight of poisoners everywhere. Elemental zinc is also an alchemical discovery in Europe, though a few records point to it being made far earlier in India.

Around this time, the very definition of an element was in flux, especially in medieval and Renaissance Europe. You still had the Aristotelian view of the four elements, broadly supported by the Church, but then there were the alchemists and others working on their own things. Some of the questions they considered led to great discoveries later on, but the technology wasn’t yet ready to isolate all the elements. So, in this particular age (conveniently enough, the perfect era for fantasy), there’s still a lot left to find.

The enlightened ones

Henning Brand gets the credit for discovering phosphorus, according to the book I’m looking at right now. That was in 1669, almost a century and a half after Paracelsus possibly experimented with metallic zinc, and a full four hundred years after the last definitive evidence for discovery. The next on the timeline doesn’t come until 1735: cobalt.

Those opened the floodgates. By this point, you could hear the first stirrings of the Industrial Revolution, and that brought advances to the technology of chemistry. The more liberal academic climate led to greater experimentation, as well. All in all, the late 18th century was the beginning of an element storm. Thanks to electricity, the vacuum, and numerous other developments, enterprising chemists (no longer alchemists at this point) started finding elements seemingly everywhere.

It’s this era where the periodic table is a bit of a Wild West. Everything is up in the air, and nobody really knows what’s what. Indeed, there are quite a few mistaken discoveries in the years before Mendeleev, some of them even finding their way into actual chemistry textbooks. In most cases, these were simple mistakes or even rediscoveries; there were a few fights over primacy, too. But it shows that it wasn’t until relatively recently that we could say for certain which claimed elements couldn’t exist.

The periodic age

Once the periodic table became the gold standard for chemistry, finding new elements became a matter of filling in the blanks. We know there’s an element that goes here, and it’ll be a little like these. So that’s how we got most of the rest of the gang in the late 1800s through about 1940 or so.

Ever since nuclear science came into existence, we’ve seen a steady stream of new elements being created in particle accelerators or other laboratory conditions. Strictly speaking, that began in 1937 with technetium (more on it in a moment), but it really got going after World War II. Over the next 70 years, scientists made a couple dozen new elements from scratch, none of which exist in nature, most tearing themselves apart within the barest fraction of a second.

Nuclear physics explains why these superheavy elements don’t work right. The way we make them is by forcing lighter elements to fuse, but that leaves them with too few neutrons to truly be stable. The island of stability hypothesis says that some of them could actually be stable enough to be useful…if we built them right. So, even though there’s no more room on the periodic table (unless Period 8 turns out to exist), that’s not to say all those spots along the bottom row have to disappear in the blink of an eye.

The oddballs

Last but not least, there are a few weirdos in the periodic table, and these deserve special mention. Two of them are quite odd indeed: technetium and promethium. By any reasonable standard, these should be stable. Technetium is element 43, a transition metal that should act a bit like a heavier manganese.

No such luck. Due to a curious quirk of nuclear structure, 43 turns out to be an unlucky number: an atom with 43 protons (which would be, by definition, technetium) can never be fully stable. At best, it can have a long half-life, and some isotopes do last for millions of years, but stable? Alas, no. Promethium, element 61, is the same way, for much the same reason.

Uranium is well-known as the last naturally abundant element, although none of its isotopes are truly stable; the most stable, uranium-238, has a half-life of about 4.5 billion years, roughly the current age of the Earth. Element 92 is also familiar as the fuel for man-made fission reactors and bombs, but it’s even more interesting than that. Because it lasts so long while remaining fissile, a sufficiently rich uranium ore deposit can sustain a chain reaction all on its own. A few places in the world were actually natural nuclear reactors (the best known, at Oklo in Gabon, ran about 1.7 billion years ago), though the fissile uranium-235 in such deposits has long since decayed below the concentration needed. A culture living near something like that, however, might discover neptunium, plutonium, and other byproducts long before they probably should. (They’ll likely find the link between radiation and cancer pretty early, too.)
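The half-life arithmetic behind all of this is simple enough to sketch. Here is a quick illustration, using the commonly cited half-lives, of why technetium is missing from nature while uranium is still around; the function is just the standard exponential-decay formula:

```python
# Fraction of a radioactive sample remaining after time t:
#   N(t) / N0 = 0.5 ** (t / half_life)

def remaining_fraction(t_years, half_life_years):
    """Fraction of the original atoms left after t_years."""
    return 0.5 ** (t_years / half_life_years)

EARTH_AGE = 4.54e9  # years

# U-238: half-life ~4.47 billion years, about the age of the Earth
u238 = remaining_fraction(EARTH_AGE, 4.47e9)
print(f"U-238 left since Earth formed: {u238:.1%}")  # roughly half

# Tc-99: half-life ~211,000 years -- long for technetium,
# but an eyeblink on geological timescales
tc99 = remaining_fraction(EARTH_AGE, 2.11e5)
print(f"Tc-99 left since Earth formed: {tc99:.0e}")  # effectively zero
```

Run the numbers and the pattern on the periodic table falls out naturally: anything with a half-life far shorter than a planet’s age simply isn’t going to be lying around in the dirt.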

The end

Depending on who you ask, we’re either at the end of the periodic table, or we’re not. Some theories have it running out at 118, some say 137, and one even says infinity. The patterns are already clear, though. If there’s no true island of stability, then most anything else we find is going to be extremely short-lived, highly radioactive, or both. Probably that last one.

Today, then, there’s not really the possibility for an “undiscovered” element. We simply don’t have a place to put it. That doesn’t mean your sci-fi is out of luck, though. There could be isotopes of existing elements that we don’t have; this is especially true of the transuranic elements. More likely, though, would be a compound not seen on Earth. A crystal structure we don’t have, or an alloy, or something of that sort—a novel combination of existing elements, rather than a single new one.

And then you have the more bizarre forms of matter. Neutronium (the stuff of neutron stars), if you could make it stable when you don’t have an Earth mass of the stuff packed into something the size of your house, would be a true “element zero”, and it may have interesting properties. Antimatter atoms would annihilate their “normal” cousins, but we don’t know much about them other than that. You might even be able to handwave something using other particles, like muons, or different arrangements of quarks. These wouldn’t create new elements in the traditional sense, but an entire new branch of chemistry.

So don’t get discouraged. Just because there’s no place on the periodic table to put your imaginary elements, that doesn’t mean you have to choose between them and scientific rigor. You just have to think outside the 118 boxes.

Magic and tech: privacy

Privacy is a major topic in today’s world. We hear about surveillance, privacy rights, wiretapping, and so much else that it’s hard not to have at least some knowledge of the subject. Whether it’s privacy in the real world, on the Internet, or wherever, it’s really a big deal.

Although we may talk about privacy in strictly modern terms, that doesn’t mean it’s a modern invention. Previous generations had privacy, and they had the attacks on it, the dangers to it, and the need for it. It’s only in recent times that “bad” actors (e.g., foreign—or domestic—government agents) have such a capacity for invading our privacy so effortlessly, so imperceptibly.

Private eyes

The easiest way to keep something private, of course, is to never make it public in the first place. If you’re putting every detail of your life on Facebook, then you really only have yourself to blame when it’s used against you. In general, that applies in any era, with the caveat that what’s considered “public” now might not have been so, say, a century ago. Now, this isn’t to say that not posting something guarantees it’ll never be seen in public (look at, for example, FBI-made spyware or NSA-developed cryptography algorithms), but it’s a good start.

Throughout history, privacy has also been a fight against those who are deliberately trying to invade your personal space. Today, it’s governments and corporations. Years ago, it was governments and neighborhood activist groups (is your neighbor a Communist?). In earlier times, it was governments and rival merchants. All of them would employ spies, informants, private detectives, and the like in their efforts to expose your secrets. And if you were important enough, you were almost obliged to do the same in retaliation.

Those things we need to keep private haven’t really changed, either. We still want to cover up our earlier transgressions, possibly illegal deeds, and all those things we wouldn’t be comfortable having “out there”. Yesterday’s scarlet letter is today’s racist tweet, a reminder of what happens when privacy fails. And the lengths we go to, the things we do to keep such parts of our past out of the public eye, those are becoming more important every day, because our world is getting more connected, but also less forgetful.

Today, we might use a VPN to hide our browsing history. We’ll clear cookies and block tracking scripts. Some people go even further, outside the Internet entirely: avoiding whole city blocks because of surveillance, using burner phones, paying with cash wherever possible, and so on. Those are modern methods of protecting our privacy, but they have their roots in older ways. Hired runners, safe houses, ciphers—it’s all the same, just under a different name.

Magic-eye puzzles

Now, if you add magic, that breaks some of those methods. First off, if you’re in a D&D-style fantasy world, where any hedge wizard has access to the entire Player’s Handbook, you’ve got serious problems. A wizard who can use a scrying spell to see anywhere makes the NSA look like amateur hour. If he can pick up more senses—hearing, specifically—then privacy is essentially dead on arrival. Unless scry-blocking spells and enchantments are available, cheap, and useful, there’s nothing stopping such a setting from becoming the Panopticon.

But let’s take a step back, because the magical realm we’ve been discussing so far isn’t like that. No, it’s a bit more…down to earth. So let’s see what tools it has to protect privacy. While we’re at it, we’ll also take a look at the other side, because that’s always so much easier.

First, there aren’t any invisibility cloaks or disguise spells, unfortunately. However, thanks to the greater scientific advances that magic has spurred, we do have a lot more options for mundane disguises. Clothing is cheaper, for example, so it’s easier to procure a sizable wardrobe. And travel is not nearly as time-consuming as in pre-modern Earth, meaning that hopping over to the next town to do your dirty work isn’t impossible; you might look suspicious, but not if enough people are moving around.

Privacy in our magical setting, then, is going to be mostly a matter of hiding and deflection, just like it used to be here. It’s not so much a technical problem as a way of thinking about a problem. It faces the same obstacles as in the Industrial era, and the people will most likely develop the same kinds of responses as our ancestors then. To take another example, think back to our magical pseudo-telegraph. It can’t easily be wiretapped—the telegraph (and later telephone) is where the term comes from—because there aren’t any wires. But that doesn’t mean our equivalent of the operator can’t be bought or even replaced. So, if sensitive information has to be sent over the magical lines, it’ll need to be encrypted.
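As a taste of what that encryption might look like, here is a minimal sketch of a Vigenère cipher, the sort of pen-and-paper scheme a pre-computer (or magical-telegraph) society could plausibly field. The message and key here are invented purely for the example:

```python
# A Vigenere cipher: each letter of the message is shifted by the
# corresponding letter of a repeating keyword. Decryption shifts back.
from itertools import cycle

ALPHA = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def vigenere(text, key, decrypt=False):
    """Encrypt (or decrypt) text with a repeating keyword."""
    sign = -1 if decrypt else 1
    out = []
    for ch, k in zip(text.upper(), cycle(key.upper())):
        if ch in ALPHA:
            out.append(ALPHA[(ALPHA.index(ch) + sign * ALPHA.index(k)) % 26])
        else:
            out.append(ch)  # pass spaces and punctuation through
    return "".join(out)

msg = vigenere("MEET AT THE NORTH GATE", "RAVEN")
print(msg)
print(vigenere(msg, "RAVEN", decrypt=True))  # round-trips to the original
```

A cipher like this resisted casual cryptanalysis for some three centuries in our own history, which is plenty of security for a world without computers; the weak point, as the essay notes, remains the human operator on either end.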

On the flip side, once we’ve established that there are ways of recording or transmitting images and sounds, there’s an obvious kind of surveillance that comes about naturally: the hidden camera. Although they’d be magical in nature, the principle would be the same as in any spy movie. Visiting dignitaries would be wise to bring in their own mages to inspect their lodgings. (Although our actions in real life can’t be encrypted, our communications can, and a good cipher wouldn’t get any easier to crack with magic. Not until computers come around, at least.)

Hiding in plain sight

To remain private in our low-magic setting, therefore, we have to be cautious, but not overly so. The availability of recording devices and other tools of subterfuge won’t be high; the devices are expensive to create, and they take mages away from other tasks. But that doesn’t mean vigilance isn’t needed. Like in today’s world, how far you need to go to ensure your privacy is directly proportional to the damage your secrets would cause if they got out. If you’re carrying around national secrets, then you’d be stupid not to use the best encryption available. You’d be a fool if you didn’t inspect every room you entered for hidden microphones, magical or mundane.

For most of us, though, it’s a matter of being careful. Don’t give out sensitive information, because you never know who might be listening. Unlike today, our magical kingdom doesn’t have government supercomputers listening to everything we say. It doesn’t have corporations scanning every word we write. But that doesn’t mean it’s easy to keep private matters private. There are always people snooping around. Magic won’t make them go away.

On neologisms

If you’re a writer of fiction that isn’t set wholly in Earth’s past or present, you’ve more than likely come across a situation requiring a word that simply does not exist. Science fiction has alien or future human technology; fantasy has magic and elves and the like. Sure, English has about a million words (depending on who’s counting) available for you to use, but sometimes that’s just not enough.

We’ve got a few ways we can fill this void. Which one is best depends on a lot of factors. For fantasy and aliens, you might need to come up with a fictional word from a fictional language. (If you do, well, maybe you should look at the Friday posts around here.) Established authors do this all the time, and not only to write epic conlang poetry. Tolkien casually dropped Elvish words like lembas into dialogue. Larry Niven’s Ringworld is constructed around a skeleton of scrith, an alien material stronger than anything humans could dream of making. And those are but two examples among many.

Technically, however, those are loanwords, linguistic borrowings that aren’t necessarily from any real language. For stories revolving around the interactions of disparate cultures, that might be exactly what you need. More human-focused writings, however, might want something else. This is especially true for, e.g., near-future sci-fi, where everything is mostly as it is today, apart from a few oddities. For these, we need to delve into the world of neologisms.

The making of a word

If you look at a dictionary of the English language, it’s obvious that no one sat down and came up with all of those hundreds of thousands of words in isolation. No, there are rules for most of them. Building blocks. Our language has a wide array of prefixes and suffixes, mostly borrowed from Latin and Greek in ages past, that allow us to create new terms with predictable meanings. (Linguists call this derivation, though it’s often loosely described as agglutination.) For example, we’ve got prefixes like un-, ex-, or over-, and then suffixes such as -ation, -ism, and -ness; Wikipedia, among others, has a whole list you can use.

Many of the new entries in the language—the more “technical” ones, at least—are fashioned by this process of agglutination: Internet, transgender, exoplanet, etc. All you have to do is snap the right pieces together to get the desired meaning, and there you go. In futuristic science fiction revolving around technological advancement, this may be all you really need.
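To show just how mechanical this snapping-together can be, here’s a toy sketch of an affix-based word builder. The affix and root lists are my own, chosen purely for demonstration:

```python
# Toy neologism builder: glue prefixes and suffixes onto roots.
# The lists below are illustrative, not exhaustive.
PREFIXES = ["un", "ex", "over", "trans", "exo"]
SUFFIXES = ["ation", "ism", "ness", "net"]

def coin(prefix, root, suffix=""):
    """Assemble a candidate word from its building blocks."""
    return f"{prefix}{root}{suffix}"

print(coin("exo", "planet"))         # exoplanet
print(coin("trans", "gender"))       # transgender
print(coin("over", "mind", "ness"))  # overmindness -- silly, but well-formed

# Brute-force every prefix against one root and let the writer
# pick the winners by ear:
candidates = [coin(p, "light") for p in PREFIXES]
print(candidates)
```

Most of the output will be junk, of course; the point is that the combinatorics do the drudge work, and the writer’s ear does the judging.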

Another option is even simpler: just use an existing word, but in a new context. We’re seeing that one a lot today, with terms like cast or stream or even tweet being reinterpreted to fit our modern world. Here, though, you have to be careful, because even if your characters understand the new meaning you’ve given these words, your readers might not. If you’re going this route, then, be sure to work in an explanation somewhere.

Compounding is another good option. Unlike agglutination, this sticks whole words together into a single, cohesive unit: swordmage, dragonborn. This process, in my opinion, is more suited to fantasy and such; it sounds less “scientific” to my ears. Your mileage may vary, however.

A kind of “opposite” of compounding and agglutination can be made by abbreviation. Different fields use this for jargon nowadays; in sci-fi, especially of the military or paramilitary varieties, this can make the narrator seem to “fit in” better. Shortened words like tac for tactical, vac for vacuum, and mag for magazine are mainly what I’m talking about here. They work best in dialogue, but putting them in narration is fine, as long as you make sure the reader is on board.

Last is the option of pure coinage—making a word from scratch. Unless you really know what you’re doing (or you’re not opposed to some serious linguistic construction), you might want to steer clear of this one. Here, you’re making a word that doesn’t actually exist, in whole or in part, and that’s a lot harder than you might think. When it’s not intended to be an “alien” word, whatever that may mean for your story, it’s actually quite difficult to come up with something that doesn’t sound corny and forced. For this one, I can’t really give much advice beyond “Play it by ear.”

In conclusion

However you choose to do it, adding new words (or new meanings for old words) really can help set the “otherness” of a world. An unfamiliar or nonexistent term is a sure sign that we’re not dealing with the ordinary anymore, whether it’s in there because you’re talking about aliens, elves, assault weapons, or the mysteries of the universe. (On a personal note, my forthcoming novel Nocturne uses neologisms to describe its magic; they’re all compounds.) Now, if you want to make a whole language, then check the “conlang” section of the site. And if you’re simply looking for technobabble that would make a Trekkie proud, well, that’s a different post. Maybe I’ll write it soon.

Building theocracy in fiction

Ask a lot of Americans (and other Westerners in general) what the scariest form of government is, and you’ll probably get the same answer from most of them: Islamic fundamentalist. We’re constantly bombarded (no pun intended) with all kinds of news about ISIS, Iran, the Taliban, sharia law, and the like. Some of it is exaggerated, but not all. For many people, a legal system constructed around strict Islamic principles is indeed a frightening prospect. (Funnily enough, some of those same people wouldn’t mind a strict Christian code of laws, but that’s neither here nor there.)

Islamic government and law form a subset of the general notion of a theocracy: government by religion. Although we strongly associate it with the Middle East today, it has always been around, in many different guises through the ages. The Vatican is essentially a theocracy, for example. Many medieval European nations, where kings were considered to rule by divine will and church law was sacrosanct, could be said to have theocratic underpinnings. The Puritans who came to America did so because they wanted a utopia where everyone followed their interpretation of the Bible. And that’s just in the West.

Theocracy is also one of those forms of government that appears often in fiction. Especially fantasy, where there’s the very real possibility of gods walking the earth; here, the literal translation of the term, “rule by god”, can be entirely accurate. But theocracy can pop up in historical fiction, too, and even sci-fi. Religion is a fact of life, as long as we live in modernish human societies, and there’s always the possibility that someone decides to invert the American ideal of separation of church and state.

Now, by our standards, theocracy is quite obviously a bad thing. We see ISIS lopping off heads, we hear tales of women being stoned to death because they were raped, we listen to talking heads speaking of the evils of sharia law, and it’s not hard to draw the conclusion that, hey, this isn’t a good idea.

On the other side of the aisle, we then see members of a different faith arguing that the Ten Commandments should be posted in courthouses, that Muslims should be banned from entering our country just on account of their beliefs, and that it’s okay for children to be forced to recite an oath calling the US “one nation under God”. Those are theocratic trappings, as well, and they’re no more wholesome than requiring a woman to wear a burqa in public.

Of gods and men

But enough politics. Let’s talk about theocracy as an institution, and how you can use it in your fictional worlds.

The basic idea, obviously, is that the government is constructed in such a way as to give primacy to religion. That can come in many forms, however, ranging from token to suffocating.

First, a “lighter” theocracy exists in places like Elizabethan England or the modern United States. Orthodoxy is paramount. Heresy and apostasy are denounced, possibly outlawed, but only outright persecuted when they reach a critical mass. Laws show deference to religion, and government quite clearly favors the majority or plurality, but there is also a significant secular code that must be followed. These theocracies can almost be considered benign, especially if you’re one of those who follows the “favored” faith.

Second are the medieval-style theocracies. Here, it’s not that church officials run the country, or that scripture is considered the first and last word in justice. No, this “medium” theocracy has religion as a subtle yet pervasive presence. One sect is explicitly established as primary, and its teachings are used as a basis for law, but that law is open to interpretation, and some (such as kings) stand above it by divine fiat. Following a different religion will mark you as an outcast in this style of theocracy, but it’s not an automatic death sentence. There may even be enclaves for non-believers, much as Jews often had their ghettoes in medieval times (and much later).

Higher on the scale are the “hard” theocracies like Saudi Arabia, and these, when they appear in fiction, are almost always of the “evil empire” sort. This is where beliefs have the power of law…when they don’t simply replace it outright. Not only is scriptural text the basis for the law code, it is the law. Violating holy precepts is a crime, ranging from a petty misdemeanor all the way up to high treason. Worse, it’s usually the faction in charge that gets to decide how the holy books are interpreted. Heresy is effectively rebellion, and so on.

Last is the “literal” theocracy I mentioned above. This one can’t possibly exist in our natural world, but it’s doable in fantasy fiction. Here, a divine (or presumed divine, or just divinely inspired) being actually rules a nation. His word is both law and holy writ, and there’s no way that can be good. Usually, this type is more a foil for the protagonists, as in Brandon Sanderson’s Mistborn, Ian C. Esslemont’s Stonewielder, or Brian McClellan’s The Autumn Republic. Another option is that it’s a kind of utopian facade, where it looks like the godhead is benevolent and peaceful, but there are deeper strains; this one is especially good for polytheistic theocracies, and you could make an argument that that’s the case in Tolkien’s Silmarillion.

In the shadow of the gods

Depending on how heavy the theocratic leanings of a government are, life under it can be essentially normal or worse than Communist Russia. It’s not that theocracy implies a police state or tyrannical overlord; that’s just the natural tendency of mankind. There’s nothing stopping a theocracy from being something great, except that old maxim: absolute power corrupts absolutely. And what more absolute power is there than godhood? We see something similar with autocratic nations like North Korea, where the leader isn’t necessarily deified, but he’s the next best thing. Making government infallible (as a strong theocracy does) also makes it unimpeachable.

But a lot of it depends on the religion. Not merely what the holy texts say, but how they’re read. Moderate Muslims despise ISIS for cherry-picking verses, using them and only them to justify their ways. It’s no different from would-be Christian theocrats in America, quoting Leviticus as an argument to make homosexuality illegal while ignoring all the other awful stuff that book (and the rest of the Bible) contains. And it’s not limited to the Abrahamic faiths. Buddhist governments have done some pretty awful things. The Romans tolerated other religions until their followers got too uppity. Look through history, and you’ll see the same thing repeated everywhere.

That’s the bad, but is there good? Can there be good in theocracy? As a writer, I say yes. Maybe not in the way actual humans would do it, but I can construct a plausible chain of events that would lead to a relatively benign faith-based government. It would almost have to be a polytheistic faith, I think, one involving multiple “parties” of gods who often face off against one another. One probably without a lot of written scripture, maybe, or where that’s mostly limited to mythological tales. Something where “good” qualities are similar to our own. Imagine, for instance, a theocracy based on the Greek pantheon.

Getting to that point

But it’s those in-between events that I find more fascinating. How does a theocracy arise? How does it end?

Charisma, I believe, plays a large role in developing a theocracy. It doesn’t have to come from a single individual, though that’s certainly an option; charismatic religious leaders could convince the populace that theocratic rule is a good choice. Another possibility is a converted king, because converts are always the most zealous adherents of a faith. And then there’s the force option, where theocracy is proclaimed as the result of a revolution, but that again takes a certain amount of diplomacy to get the general population on board.

Ending a theocracy is a bit harder, particularly if it’s one of the harder varieties. Of course, a literal gods-among-us fantasy theocracy has an easy solution: kill the god. When you’re dealing with his subordinates, however, that doesn’t quite work; there’s always more to take their place. So, you need something stronger.

Outside influence can work, and that can take any form ranging from propaganda to direct interference to invasion. (“It’s not invasion, but liberation,” the outsiders would say in that case.) Popular revolt is another method that has been shown to work in the real world, but that implies two things. First, there really is support for overthrowing the priesthood—not always a given, especially on the eve of rebellion. Second, there’s a plan for replacing the theocracy itself, not just those at its head. It’s one thing to talk about turning, say, Iran into a democracy. Doing it (and not making the people there hate you for it) is another matter entirely.

The future of theocracy

Last, let’s talk about the idea of theocracy in science fiction. Now, that may not seem to make much sense. The future is supposed to be humanist, agnostic, or irreligious. Maybe not all of its people are, but the setting itself typically considers religion to be, at best, a character quirk.

It doesn’t have to be that way. If you’re dealing with a spacefaring humanity, then there’s the potential for having colonies (planets in other solar systems, local asteroids, O’Neill habitats, etc.) that are designed for one specific culture. For example, a generation ship designed and built for the Mormons figures in James S.A. Corey’s Leviathan Wakes (and the TV series The Expanse). One could just as easily imagine an orbital ring inhabited entirely by displaced Palestinians, or a literal Plymouth Rock in the asteroid belt, where next-century Puritans could build their new Eden. And once aliens get involved, then you have their religions to think about; Star Trek: Deep Space Nine shows one way that could go.

These futuristic theocracies will have much in common with their modern or older ancestors. How much, of course, depends on many factors. First, how did they arise? “ISIS in Space” is going to be an entirely different sort of theocracy than some billionaire resurrecting the Levellers on a kilometer-long spin station as a social experiment. Second, how deep are the theocratic roots? Are we talking about a serious attempt at “a Biblical way of life”, or just “I want to live in a place where everybody goes to church on Sundays”? These factors, among others, will determine the character of a theocratic culture. That, in turn, will give you a good idea of where it stands on the utopia to tyranny axis.

In the real world, theocracies are justifiably frightening. For people who are tolerant or even nonbelievers, they show the worst that religious thought can offer. But in fictional settings, they can be a valuable asset. Whether ideal or idol, the mixing of church and state can bring about interesting social dynamics, conflicts, and character growth.

Magic and tech: cities

In today’s world, over half the planet’s population lives in urban areas. In other words, cities. That’s a lot, and the number is only increasing as cities grow ever larger, ever more expansive. Even on the smaller end (my local “big” city, Chattanooga, has somewhere around a quarter of a million people, and it’s not exactly considered huge), the city is a marker of human habitation, human civilization, and human culture. It’s a product of its people, its time and place.

In the city

The oldest cities are really old. Seriously. The most ancient ones we’ve found date back about 10,000 years, places like Çatalhöyük. Ever since then, the history of the world has centered on the urban. These oldest cities might have housed a few hundred or thousand people, probably as a way of ensuring mutual protection and the sharing of goods. But some eventually grew into monsters, holding tens or even hundreds of thousands of people, primarily to ensure mutual protection and the sharing of goods.

Looked at a certain way, that’s really all a city is: a centralized place where people live together. The benefits are obvious. It’s harder to conquer a city’s multitudes. There’s always somebody around if you need help. Assuming it’s there, you don’t have to go very far to find what you’re looking for. In a rural area, you don’t have any of that.

Of course, clustering all those people together has its downsides. In pre-modern times, two of those were paramount. First, every person living in a city was one not working in the fields, which meant that somebody else had to do the work of growing the city-dweller’s food and shipping it to the urban market. Great for economics, but now you’re depending on a hinterland that you don’t necessarily have access to.

The second problem is one we still struggle with today, and that is sanitation. I’m not just talking about sewage (which wasn’t nearly as big a problem in some old cities as we typically imagine), but a more general idea of public health. Cities are dirty places, mostly because they have so many people. Infections are easier to spread. Waste has to go somewhere, as does trash. Industry, even the pre-industrial sort, produces pollution of the air and water. And water itself becomes a commodity; even though most older cities were built near rivers or lakes (for obvious reasons), it might not be the cleanest source, especially in an unusually dry season.

Through the ages

The character of cities has changed throughout history. While they’ve retained their original purpose as gathering places for humanity, the other purposes they serve fall into a few different categories, some more important in certain eras than others.

First of all, a city is an economic center. It holds the markets, the fairs, the trading houses. Sure, a village can have a weekly market pretty easily, but it takes a city to provide the infrastructure necessary for permanent shops and vendors. This includes food sellers, of course, but also craftsmen and artisans in older days, factories and department stores today. You don’t see Wal-Mart sticking a new store out in the middle of nowhere (the nearest to me are each about 10 miles away, in cities of about 10,000), and that’s for the same reason why, say, a medieval village won’t have a general shop: it’s not profitable. (The Wild West trope of the dry goods store is a special case. They provided needed materials to settlers, miners, and railroad workers, which was profitable.)

Another purpose of a city is as an administrative center. It’s a seat of government, a home to whatever the culture’s notion of justice entails. In modern times, that means a police force, a city council or mayor, a courthouse, a fire department, and so on. Cultures with cities will begin to centralize around them, and these central cities may later grow into states, city-states, nations, and even empires. Larger cities also have a way of “projecting” themselves; all roads lead to Rome, and how many Americans can name all five of New York City’s boroughs but not even five counties in their home state? With national and imperial capitals, this projection is even greater, as seen in London, Washington, Beijing, etc. This ties into both the economic reason above, as capitals of administration are very often capitals of commerce, and the cultural one we’re about to see.

Thirdly, cities become cultural centers. While projecting force and economic power outward, they do the same for their culture. This develops naturally from the greater audiences the city provides; it’s hard for an artist to find patronage when he lives out in the country. (That’s just as true in 2017 as it was in 1453, by the way.) And since cities provide stability that rural areas can’t, there’s more incentive for creative types to move downtown. The result is a snowball effect, often spurred on by government investment—grants in modern times, patronage in eras past—until the city begins to take on a cultural character all its own. Like begets like in this case, and in a larger nation with multiple big cities, a kind of specialization arises: movies are for Los Angeles, Memphis has the blues, Vegas is where you go to gamble.

Now with magic

So that’s cities in the real world: urban centers of commerce, government, art, defense, and so many other things. What about in a magical world?

In many cases, it depends on how magic works in the setting. Magic that can be “industrialized” is easy: it effectively becomes another public service (if it requires infrastructure such as artificial “ley lines”—I have written a series based on exactly this concept) or private industry (if it instead takes skilled craftsmanship, as with enchanters in fantasy RPGs). In both of these cases, magic can almost fade into the background, becoming a part of the city’s very fabric.

For the slightly rarer and much less powerful magic we’ve been talking about in this series, it’s a bit of a different story. Yes, there will be magical industries, crafts, and arts; we’ve seen them in earlier parts. As magic in our realm is predictable, almost scientific, it will be used by those who depend on that predictability and repeatability. That includes both the private and public sectors. And enterprising mages will certainly sell the goods they create. That may be in a free market, or their prices and supplies might be tightly controlled, creating a black market for magical items.

If magic can be harnessed for public works, then that implies that cities in our magical realm are, by default, cleaner than their real-world contemporaries. They won’t be dystopian disaster areas like Victorian London or modern Flint. They’ll have clean streets and healthier, longer-lived people than their predecessors. Again, the snowball starts rolling here, because those very qualities, along with the city’s other aspects, will function as advertising, drawing immigrants from the countryside. And the automation and advancement in food production we’ve already discussed will let the countryside feed them all. Thus, it’s not nearly as hard as you might think to get a magical city up to, say, half a million in population.

The main thrust of this series has been that magic can effectively replace technology in certain types of worldbuilding. That’s never more true than in the city. Technology has made cities possible in every era. The first urban areas arose about the same time as farming, and there’s no denying a connection there. Iron Age advances created the conditions necessary for the first true metropolises, and industrialization, machinery, and electricity gave us our modern megacities. At each stage, magic can create a shortcut, allowing cities to grow as large as they could in the “next” technological leap forward.

On fantasy stasis

In fantasy literature, the medieval era is the most common setting. Sure, you get the “flintlock fantasy” that moves things forward a bit, and then there’s the whole subgenre of urban fantasy, but most of the popular works of the past century center on the High Middle Ages.

It’s not hard to see why. That era has a lot going for it. It’s far enough back to be well beyond living memory, so there’s nobody who can say, “It’s not really like that!” Records are spotty enough that there’s a lot of room for “hidden” discoveries and alternate histories. You get all the knights and chivalry and nobility as a built-in part of the setting, but you don’t have to worry about gunpowder weapons if you don’t want to, or oceanic exploration, or some of the more complex scientific matters discovered in the Renaissance.

For a fantasy world, of course, medieval times give you mostly the same advantages, but also a few more. It’s less you have to do, obviously, as you don’t have the explosion of technology and discovery starting circa 1500. Medieval times were simpler, in a way, and simple makes worldbuilding easy. Magic fits neatly in the gaps of medieval knowledge. The world map can have the blank spaces needed to hide a dragon or a wizard’s lair.

Times are (not) changing

But this presents a problem, because another thing fantasy authors really, really want is a long history, yet they don’t want the usual pattern of advancement that comes with those long ages. Just to take examples from some of my personal favorites, let’s see what we’ve got.

  • A Song of Ice and Fire, by George R. R. Martin. You’ll probably know this better as Game of Thrones, the TV show, but the books go into far greater depth concerning the world history. The Others (White Walkers, in the show, for reasons I’ve never clearly understood) last came around some 8,000 years ago. About the only thing that’s changed since is the introduction of iron weaponry.

  • The Lord of the Rings, by J. R. R. Tolkien. Everybody knows this one, but how many know Middle-earth’s “internal” history? The Third Age lasts over 3,000 years with no notable technological progress, and that’s on top of the 3,500 years of the Second Age and a First Age (from The Silmarillion) that tacks on another 600 or so. Indeed, most technology in Middle-earth comes from the great enemies, Sauron and Morgoth and Saruman. That’s certainly no coincidence.

  • Mistborn, by Brandon Sanderson. Here’s a case where technology actually regressed over the course of 1,000 years. The tyrannical Lord Ruler suppressed the knowledge of gunpowder (he preferred his ranged fighters to have skill) and turned society from seemingly generic fantasy feudalism into a brutal serfdom. (The newer trilogy, interestingly, upends this trope entirely; the world has gone from essentially zero—because of events at the end of Book 3—to Victorian Era in something like 500 years.)

  • Malazan Book of the Fallen, by Steven Erikson. This series already has more timeline errors than I can count, so many that fans have turned the whole thing into a meme, and even the author himself lampooned it in the story. But Erikson takes the “fantasy stasis” to a whole new level. The “old” races are over 100,000 years old, there was an ice age somewhere in there, and the best anyone’s done is oceangoing ships and magical explosives, both within the last century or so.

Back in time

It’s a conundrum. Let’s look at our own Western history to see why. A thousand years ago was the Middle Ages, the time when your average fantasy takes place. It’s the time of William the Conqueror, of the Holy Roman Empire and the Crusades and, later, the Black Death. Cathedrals were being built, the first universities founded, and so on. But it was nothing like today. It was truly a whole different world.

Add another thousand years, and you’re in Roman times. You’ve got Caesar, Pliny the Elder, Vesuvius, Jesus. Here, you’re in a world of antiquity, but you have to remember that it’s not really any further back from medieval times than they are from us. If we in 2017 are at the destruction of the One Ring, the founding of the Shire was not long after all this, right around the fall of the Roman Empire.

Another millennium takes you to ancient Greece, to the Bronze Age. That’s “Bronze Age” as in “ironworking hasn’t been invented yet”, by the way. Well, it had been invented, strictly speaking, but it saw only limited use. Three thousand years ago is about the time of the later Old Testament or Homer. Compared to us, it’s totally unrecognizable, but it’s about the same span of time as that between the first time the One Ring was worn by someone other than Sauron and the moment Frodo and Sam walked up to Mount Doom.

Let’s try 8,000, like in Westeros. Where does that put us in Earth history? Well, it would be 6000 BC, so before Egypt, Sumer, Babylon, the Minoans…even the Chinese. The biggest city in the world might have a few thousand people in it—Jericho and Çatalhöyük are about that old. Domestication of animals and plants is still in its infancy; you’re closer to the first crops than to the first computers. Bran the Builder would have to have magic to make the Wall. The technology sure wasn’t there yet.

Breaking the ice

And that’s really the problem with so many of these great epic fantasy sagas. Yes, we get to see the grand sweep of history in the background, but it’s only grand because it’s been stretched. In the real world, centuries of stasis simply don’t exist in the eras of these stories. Even the Dark Ages saw substantial progress in some areas, and that’s not counting the massive advancement happening in, say, the Islamic world.

To have this stasis and make it work (assuming it’s not just ancient tales recast in modern terms) requires something supernatural, something beyond what we know. That can be magic or otherworldly beings or even a “caretaker” ruler, but it has to be something. Left to their own devices, people will invent their way out of the Fantasy Dark Age.

Maybe magic replaces technology. That’s an interesting thought, and one that fits in with some of my other writings here. It’s certainly plausible that a high level of magical talent could retard technological development. Magic is often described as far easier than invention, and far more immediately practical.

Supernatural beings can also put a damper on tech levels, but they may also have the opposite effect. If the mighty dragon kills everything that comes within 100 yards, then a gun that can shoot straight at twice that would be invaluable. Frodo’s quest would have been a piece of cake if he’d had even a World War I airplane, and you don’t even have to bring the Eagles into that one! Again, people are smart. They’ll figure these things out, given enough time. Thousands of years is definitely enough time.

Call this a rant if you like. Maybe that’s what it really is. Now, I’m not saying I hate stories that assume hundreds or thousands of years of stagnation. I don’t; some of my favorite books hinge on that very assumption. But worldbuilding can do better. That’s what I’m after. If that means I’ll never write a true work of epic fantasy, then so be it. There’s plenty of wonder out there.