On eclipses and omens

(I’m writing this post early, as I so often do. For reference, today, from the author’s perspective, is July 17, 2017. In other words, it’s 5 weeks before the posting date. In that amount of time, a lot can happen, but I can guarantee one thing: it will be cloudy on August 21. Especially in the hours just after noon.)

Today is a grand day, a great time to be alive, for it is the day of the Great American Eclipse. I’m lucky—except for the part where the weather won’t cooperate—because I live in the path of totality. Some Americans will have to travel hundreds of miles to see this brief darkening of the sun; I only have to step outside. (And remember the welding glasses or whatever, but that’s a different story.)

Eclipses of any kind are a spectacle. I’ve seen a handful of lunar ones in my 33 years, but never a solar eclipse. Those of the moon, though, really are amazing, especially the redder ones. But treating them as a natural occurrence, as a simple astronomical event that boils down to a geometry problem, that’s a very modern view. In ages past, an eclipse could be taken as any number of things, many of them bad. For a writer, that can create some very fertile ground.


Strictly speaking, an eclipse is nothing more unusual than any other alignment of celestial bodies. It’s just a lot more noticeable, that’s all. The new moon is always invisible, because its unlit side is facing us, but our satellite’s orbital inclination means that it often goes into its new phase above or below the sun, relative to the sky. Only rarely does it cross directly in front of the solar disk from our perspective. Conversely, it’s rare—but not quite as rare—for the moon to fall squarely in the shadow created by the Earth when it’s full.

The vagaries of orbital mechanics mean that not every eclipse is the same. Some are total, like the one today, where the shadowing body completely covers the one being eclipsed. For a solar eclipse, that means the moon is right between us and the sun—as viewed by certain parts of the world—and we’ll have two or three minutes of darkness along a long, narrow path. On the flip side, lunar eclipses are viewable by many more people, as we are the ones doing the shadowing.

Another possibility is the partial eclipse, where the alignment doesn’t quite work out perfectly; people outside of the path of totality today will only get a partial solar eclipse, and that track is so narrow that my aunt, who lives less than 15 miles to the south, is on its uncertain edge. Or you might get an annular solar eclipse, where the moon is at its apogee (the farthest point in its orbit), so it isn’t quite big enough to cover the whole sun, instead leaving a blinding ring. And then there’s the penumbral lunar eclipse; in this case, the moon misses the Earth’s full shadow (the umbra) and passes only through the fainter outer penumbra, so most people barely even notice anything’s wrong.
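That total-versus-annular distinction is pure geometry: compare the apparent sizes of the two disks. Here’s a minimal sketch, using round, commonly cited figures for the sizes and distances involved:

```python
import math

def angular_diameter_deg(diameter_km, distance_km):
    """Apparent angular size of a body, in degrees."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

# Round, commonly cited figures (km):
SUN_DIAMETER = 1_391_400
SUN_DISTANCE = 149_600_000   # mean Earth-sun distance
MOON_DIAMETER = 3_474

sun = angular_diameter_deg(SUN_DIAMETER, SUN_DISTANCE)
moon_perigee = angular_diameter_deg(MOON_DIAMETER, 363_300)  # closest approach
moon_apogee = angular_diameter_deg(MOON_DIAMETER, 405_500)   # farthest

print(f"Sun:             {sun:.3f} degrees")
print(f"Moon at perigee: {moon_perigee:.3f} degrees")  # larger than the sun
print(f"Moon at apogee:  {moon_apogee:.3f} degrees")   # smaller: annular
```

When the moon is near perigee, its disk edges out the sun’s, so a total eclipse is possible; near apogee, it falls just short, and you get that blinding ring instead.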

However it happens, the eclipse is an astronomical eventuality. Our moon is big enough and close enough to cover the whole sun, so it’s only natural that we have solar eclipses. (On Mars, it wouldn’t work, because Phobos and Deimos are too tiny. Instead, you’d have transits, similar to the transit of Venus a few years ago.) Similarly, the moon is close enough to fall completely within its primary’s shadow on some occasions, so lunar eclipses were always going to happen.

These events are regular, precise. We can predict them years, even centuries in advance. Gravity and orbital mechanics give alignments a clockwork rhythm that can only change if acted upon by an outside body.
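One reason for that clockwork rhythm is the saros cycle: eclipses repeat in families roughly every 6,585.3 days. A crude sketch of the idea (real predictions need a full ephemeris; this only projects the next members of a known series):

```python
from datetime import date, timedelta

SAROS = 6585.32  # days: about 18 years, 11 days, and 8 hours

def next_in_series(known_eclipse, count=3):
    """Project later eclipses in the same saros series as a known one.
    The leftover ~8 hours means each repeat lands about 120 degrees of
    longitude to the west, so these dates give the series, not the place."""
    return [known_eclipse + timedelta(days=round(SAROS * n))
            for n in range(1, count + 1)]

for d in next_in_series(date(2017, 8, 21)):
    print(d)  # early September 2035, mid-September 2053, late September 2071
```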

Days of old

In earlier days, some people saw a much different outside body at work in the heavens. Even once a culture reaches a level of mathematical and astronomical advancement where eclipses become predictable, that doesn’t mean the average person isn’t going to continue seeing them as portents. How many people believe in astrology today?

And let’s face it: an eclipse, if you don’t really know what’s going on, might be scary. Here’s the sun disappearing before our very eyes. Or the moon. Or, if it’s a particularly colorful lunar eclipse, then the moon isn’t vanishing, but turning red. You know, the color of blood. Somebody who doesn’t understand orbits and geometry might well be inclined to think something strange is going on.

Writers of fantasy and historical fiction can use this to great effect, because a rare event like an eclipse is a perfect catalyst for change and conflict. People might see it as an omen, a sign of impending doom. Then, seeing it, they might be moved to bring about the doom themselves. Seven minutes of darkness—the most we on Earth can get—might not be too bad, but a fantasy world with a larger moon may have solar eclipses that last for an hour or more, like our lunar eclipses today. That could be enough time to unnerve even the hardiest souls.

Science fiction can get into the act here, too, as in Isaac Asimov’s Nightfall. If a culture only sees an eclipse once every thousand years or so, then even the memory of the event might be forgotten by the next time it comes around. And then what happens? In the same vein, the eclipse of Pitch Black releases the horrors of that story; working that out provides a good mystery to be solved, while the partial phase offers a practical method of building tension.

Beyond the psychological effects and theological implications of an eclipse, they work well in any case where astronomy and the predictive power of science play a role. Recall, if you will, the famous story of Columbus using a known upcoming eclipse as a way to scare an indigenous culture that lacked the knowledge of its arrival. Someone who has that knowledge can very easily lord it over those who do not, which sets up potential conflicts—or provides a way out of them. “Release me, or I will take away the sun” works as a threat, if the people you’re threatening can’t be sure the sun won’t come back.

In fantasy, eclipses can even fit into the backstory. The titular character of my novel Nocturne was born during a solar eclipse (I wrote the book because of the one today, in fact), and that special quality, combined with the peculiar magic system of the setting, provides most of the forward movement of the story. On a more epic level, if fantasy gods wander the land, one of them might have the power to make his own eclipses. A good way of keeping the peasants and worshippers in line, wouldn’t you say?

However you do it, treating an eclipse as something amiss in the heavens works a lot better for a story than assuming it’s a normal celestial occurrence. Yes, they happen. Yes, they’re regular. But if they’re unexpected, then they can be so much more useful. The same is true of science in general, at least when you start melding it with fantasy. The whole purpose of science is to explain the world in a rational manner, but fantasy is almost the antithesis of rationality. So, by keeping eclipses mysterious, momentous, portentous occasions, we let them stay in the realm of fantasy. For today, I think that’s a good thing.

On the elements

Very recently, a milestone was reached, an important goal in the study of chemistry. The seventh row of the periodic table was officially filled in. Now, almost nobody outside of a few laboratories cares anything about oganesson and tennessine (nice to see that my state finally gets its own element, though), and they’ll probably never have any actual use, but they’re there, and now we know they are.

Especially in science fiction, there’s the trope of the “unknown” element that has or allows some sort of superpowers. In some cases, this takes the form of a supposed chemical element, such as the fictitious “elerium”, “adamantium”, or even “unobtainium”. Other works instead use something that could better be described as a compound (“kryptonite”) or something else entirely (“element zero”). But the idea remains the same.

So this post is a quick overview of the elements we know. As a whole, science is quite confident that we do know all the elements in nature. Atomic theory is pretty clear on that point; the periodic table has no more “gaps” in the middle, and we’ve now filled in all the ones at the end. But element 118 only got named in 2016, and that’s proof that we didn’t always know everything.

The ancients

The classical idea of “element” wasn’t exactly chemically sound. We know the Greek division of earth, air, fire, and water, a four-way distinction still used in fantasy literature and other media; other cultures had similar concepts, if not always the same divisions.

But they also knew of chemical elements, particularly a few that occur naturally in “pure” form. Gold, silver, copper, tin, and lead are the ones most people recognize as being “prehistoric”. (Native copper is relatively rare, but it pops up in a few places, and most of those, coincidentally enough, show evidence of a bronze-working culture nearby.) Carbon, in the form of charcoal, doesn’t take too much work to purify. Meteorites provided early iron. Sulfur can be found anywhere there’s a volcano—probably a good reason to associate the smell of “brimstone” with eternal punishment. And don’t forget “quicksilver”, or mercury.

We’ve also got evidence of bismuth and antimony known in something like elemental form. Both found medicinal uses, despite being quite toxic. (Mercury was the same, and it’s even worse, because it’s a liquid at room temperature.) And then there’s the curious case of platinum. Some evidence points to it being used on either side of the Atlantic in olden times, which is good news for the fantasy types who need a coin more valuable than gold.

The alchemists

For most of Western history, chemists—or what passed for them—tended to focus on compounds rather than isolating elements. However, there were a few advances on that front, too. Albertus Magnus separated arsenic from its compounding partners in the 13th century, much to the delight of poisoners everywhere. Elemental zinc is also an alchemical discovery in Europe, though a few records point to it being made far earlier in India.

Around this time, the very definition of an element was in flux, especially in medieval and Renaissance Europe. You still had the Aristotelian view of the four elements, broadly supported by the Church, but then there were the alchemists and others working on their own things. Some of the questions they considered led to great discoveries later on, but the technology wasn’t yet ready to isolate all the elements. So, in this particular age (conveniently enough, the perfect era for fantasy), there’s still a lot left to find.

The enlightened ones

Hennig Brand gets the credit for discovering phosphorus, according to the book I’m looking at right now. That was in 1669, almost a century and a half after Paracelsus possibly experimented with metallic zinc, and a full four hundred years after the last definitive evidence for discovery. The next on the timeline doesn’t come until 1735: cobalt.

Those opened the floodgates. By this point, you could hear the first stirrings of the Industrial Revolution, and that brought advances to the technology of chemistry. The more liberal academic climate led to greater experimentation, as well. All in all, the late 18th century was the beginning of an element storm. Thanks to electricity, the vacuum, and numerous other developments, enterprising chemists (no longer alchemists at this point) started finding elements seemingly everywhere.

It’s this era where the periodic table is a bit of a Wild West. Everything is up in the air, and nobody really knows what’s what. Indeed, there are quite a few mistaken discoveries in the years before Mendeleev, some of them even finding their way into actual chemistry textbooks. In most cases, these were simple mistakes or even rediscoveries; there were a few fights over primacy, too. But it shows that it wasn’t until relatively recently that we knew all these elements couldn’t exist.

The periodic age

Once the periodic table became the gold standard for chemistry, finding new elements became a matter of filling in the blanks. We know there’s an element that goes here, and it’ll be a little like these. So that’s how we got most of the rest of the gang in the late 1800s through about 1940 or so.

Ever since nuclear science came into existence, we’ve seen a steady stream of new elements being created in particle accelerators or other laboratory conditions. Strictly speaking, that began in 1937 with technetium (more on it in a moment), but it really got going after World War II. Over the next 70 years, scientists made from scratch a couple dozen new elements, almost none of which exist in nature outside of trace amounts, most tearing themselves apart within the barest fraction of a second.

Nuclear physics explains why these superheavy elements don’t work right. The way we make them is by forcing lighter elements to fuse, but that leaves them with too few neutrons to truly be stable. The island of stability hypothesis says that some of them could actually be stable enough to be useful…if we built them right. So, even though there’s no more room on the periodic table (unless Period 8 turns out to exist), that’s not to say all those spots along the bottom row have to disappear in the blink of an eye.

The oddballs

Last but not least, there are a few weirdos in the periodic table, and these deserve special mention. Two of them are quite odd indeed: technetium and promethium. By any reasonable standard, these should be stable. Technetium is element 43, a transition metal that should act a bit like a heavier manganese.

No such luck. Due to a curious quirk of nuclear structure (the Mattauch isobar rule: two elements with adjacent atomic numbers can’t both have a stable isotope of the same mass, and technetium’s neighbors molybdenum and ruthenium claim every candidate), an atom with 43 protons (which would be, by definition, technetium) can never be fully stable. At best, it can have a long half-life, and some isotopes do last for millions of years, but stable? Alas, no. Promethium, element 61, is the same way, for much the same reason.
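That “long half-life” is just exponential decay at work: after t years, a fraction 2^(−t/T) of a sample remains, where T is the half-life. A quick illustration using technetium-99, whose half-life is around 211,000 years:

```python
def fraction_remaining(years, half_life):
    """Exponential decay: the fraction of a sample left after a given time."""
    return 0.5 ** (years / half_life)

TC99_HALF_LIFE = 211_000  # years, technetium-99

print(fraction_remaining(211_000, TC99_HALF_LIFE))    # exactly half
print(fraction_remaining(1_000_000, TC99_HALF_LIFE))  # under 4% after a million years
```

Long on a human scale, but on a geological one, any technetium the Earth started with is long gone.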

Uranium is well-known as the last “stable” element, although none of its isotopes are truly stable; the most stable, 238, has a half-life around the current age of the Earth. Element 92 is also familiar as the fuel for a man-made fission reactor or a bomb, but it’s even more interesting than that. Because it’s radioactive, yet it can last for so long, uranium has the curious property of “spontaneous” fission. At least one place in the world (Oklo, in Gabon) even hosted natural nuclear reactors, though they shut down eons ago as the fissile uranium-235 was consumed and decayed away. A culture living near something like that, however, might discover neptunium, plutonium, and other decay byproducts long before they probably should. (They’ll likely find the link between radiation and cancer pretty early, too.)

The end

Depending on who you ask, we’re either at the end of the periodic table, or we’re not. Some theories have it running out at 118, some say 137, and one even says infinity. The patterns are already clear, though. If there’s no true island of stability, then most anything else we find is going to be extremely short-lived, highly radioactive, or both. Probably that last one.

Today, then, there’s not really the possibility for an “undiscovered” element. We simply don’t have a place to put it. That doesn’t mean your sci-fi is out of luck, though. There could be isotopes of existing elements that we don’t have; this is especially true of the transuranic elements. More likely, though, would be a compound not seen on Earth. A crystal structure we don’t have, or an alloy, or something of that sort—a novel combination of existing elements, rather than a single new one.

And then you have the more bizarre forms of matter. Neutronium (the stuff of neutron stars), if you could make it stable when you don’t have an Earth mass of the stuff packed into something the size of your house, would be a true “element zero”, and it may have interesting properties. Antimatter atoms would annihilate their “normal” cousins, but we don’t know much about them other than that. You might even be able to handwave something using other particles, like muons, or different arrangements of quarks. These wouldn’t create new elements in the traditional sense, but an entire new branch of chemistry.

So don’t get discouraged. Just because there’s no place on the periodic table to put your imaginary elements, that doesn’t mean you have to choose between them and scientific rigor. You just have to think outside the 118 boxes.

Exoplanets for builders

In just over two decades, we’ve gone from knowing about nine planets (shut up, Pluto haters) to recognizing the existence of thousands of them. Almost all of those are completely unsuitable for life as we know it, but researchers say it’s only a matter of time before we find “Earth 2.0”. Like any other 2.0 version, I’m sure that one will have fewer features and be harder to use, but never mind that.

As so many science fiction writers like to add in a large helping of verisimilitude, I thought I’d write a post summarizing what we know about planets outside our solar system, or exoplanets, as we enter 2017. Keep in mind that there’s a lot even I don’t know, although I’ve been following the field as a lay observer since 2000. Nonetheless, I hope there’s enough in here to stimulate your imagination. Also, this will necessarily be a technical post, so fantasy authors beware.

What we know

We know planets exist beyond our solar system. They’ve been detected by the way they pull on their stars as they orbit (the Doppler or radial velocity method), and that’s how we found most of the early ones. The majority of those known today, thanks to the Kepler mission, have been discovered by searching for the change in their stars’ light intensity as the planets pass before them: the transit method. In addition, we have a few examples of microlensing, where the gravity of a planet bends the light of a “background” star ever so slightly. And we’ve got a handful of cases where we’ve directly imaged the planets themselves, though these tend to be very, very large planets, many times the size of Jupiter.
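The transit method works because the dip in brightness is simply the ratio of the two disk areas, (Rp/Rs)². A quick sketch showing why small planets are so much harder to catch:

```python
def transit_depth(planet_radius_km, star_radius_km):
    """Fractional dip in starlight during a transit: the ratio of disk areas."""
    return (planet_radius_km / star_radius_km) ** 2

SUN_RADIUS = 696_000  # km
EARTH_RADIUS = 6_371
JUPITER_RADIUS = 69_911

print(f"Jupiter-size planet: {transit_depth(JUPITER_RADIUS, SUN_RADIUS):.4f}")  # ~1% dip
print(f"Earth-size planet:   {transit_depth(EARTH_RADIUS, SUN_RADIUS):.6f}")    # ~84 parts per million
```

An Earth crossing a sun-like star blocks less than a hundredth of a percent of its light, which is why Kepler-class photometry was needed in the first place.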

However we see them, we’re sure they’re out there. They can’t all be false positives. And thanks to Kepler, we’ve got enough data to start drawing some conclusions. Of course, these must be considered subject to change, but that’s the way of science.

First, our solar system, with its G-type star orbited by anywhere from eight to twenty planets (depending on who’s counting) starting at about 0.4 AU, looks very much like an outlier. We don’t have a “hot Jupiter”, a gas giant exceedingly close to the star, with an orbit on the order of days. Nor do we have a “warm Neptune” (a mid-range gaseous planet somewhere in the inner system) or a “super-Earth” (a larger terrestrial world, possibly with a thick atmosphere). This doesn’t mean we’re unique, though, only that we can’t assume our situation is the norm.

Second, we’ve got a pretty good idea about which stars have planets. To a first approximation, that’s all of them, but the reality is a little more nuanced. Bright giants don’t have time to form planets. Small red dwarfs don’t have the material to create Jupiter-size giants. Neither of these statements is an absolute—we’ve got examples of gas giants around M-class stars—but they’re tendencies. Everything else, seemingly, is up in the air.

What we can guess

Planets do appear to be everywhere we look. There are more of them around M stars, but that’s largely because there are so many more M stars to begin with. A lot of stars have planets with much closer orbits, so close that you wouldn’t expect them to form. Gas giants aren’t restricted to the outer system, like they are here. And there’s a whole class, the super-Earths, that we never knew existed.

We can make some educated guesses about some of these planets. For example, many of the super-Earths, according to computer simulations, may actually be tiny versions of Neptune, so-called “gas dwarfs”. If that’s true, it severely cuts our number of potentially habitable worlds. On the other hand, the definition of the habitable zone has only expanded since we started finding exoplanets. (Even in our own solar system, what once was merely Earth and maybe the Martian underground now includes Europa, Titan, Enceladus, Ganymede, Ceres, the cloud tops of Venus, and about a dozen more exotic locales.) Likewise, studies suggest that a tide-locked planet around a red dwarf star doesn’t have to be frozen on one side and scorched on the other.

We’ve got a few points where we don’t even have data, though. One of these, possibly the most important for a writer, is the frequency of Earthlike worlds. By “Earthlike”, I don’t simply mean terrestrial, but terrestrial and capable of having liquid water on the surface. Where’s the closest one of those? Until about a year ago, the answer might have been anywhere from 15 to 500 light-years away. But then came Proxima b. If it turns out to be potentially habitable—in the month and a half between my writing this post and it going up, we may very well know—then that almost ensures that Earthlike worlds are everywhere. Because what are the chances that the single closest star to the Sun just happens to have one?
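You can turn that frequency question into a rough distance estimate. Assuming stars are scattered uniformly at roughly the solar neighborhood’s density (about 0.004 per cubic light-year) and that some fraction of them host an Earthlike world, the expected distance to the nearest one is just a sphere-volume calculation. A back-of-envelope sketch, with purely illustrative fractions:

```python
import math

STAR_DENSITY = 0.004  # stars per cubic light-year, rough solar-neighborhood value

def nearest_earthlike_ly(fraction):
    """Expected distance to the nearest Earthlike world, assuming a uniform
    scatter of stars with the given fraction hosting one."""
    n = STAR_DENSITY * fraction
    return (3 / (4 * math.pi * n)) ** (1 / 3)

for f in (1.0, 0.1, 0.01):
    print(f"{f:.0%} of stars -> nearest at roughly {nearest_earthlike_ly(f):.0f} light-years")
```

Even in the pessimistic one-in-a-hundred case, the nearest one sits within about twenty light-years, which is why a habitable Proxima b would be such a strong hint.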

Creating a planet

For the speculative writer, this lack of knowledge is a boon. We have the freedom to create, and there are few definite boundaries. Want to put a gas giant in the center of a star’s habitable zone, with multiple Earthlike moons? We can’t prove it’s impossible, and the real-life counterpart might really be out there, waiting to be found.

Basically, here’s a rundown of some of the factors that go into creating an exoplanet:

  • Star size: Bigger stars are shorter-lived, but smaller ones require their “classically” habitable planets to be much closer, to the point where they’ll likely be tide-locked. G-type dwarfs like ours are a happy medium, but not a common one: only a few percent of stars are in the G class, and there’s not much data saying that planets are more likely around them.

  • Star number: Most stars, it seems, are in multiple systems. Binaries can host planets, though; we’ve detected a class of “Tatooine” planets (named after the one in Star Wars, because scientists are nerds) circling binary systems. For close binaries, this is a fairly stable arrangement, but with huge complexities in working out parameters like temperature. Distant binaries like Alpha Centauri can instead have individual planetary systems.

  • Planet size: We used to think there was a sharp cutoff between terrestrial and gaseous planets, based on the difference between the largest terrestrial we knew (Earth) and the smallest gas planets (Uranus and Neptune). Now we know that’s simply not true. It’s more of a continuum, and there may be super-Earths much larger than the smallest mini-Neptunes. And those gas dwarfs appear to be the most common type of planet, but that could be nothing more than observation bias, the way we thought hot Jupiters were incredibly common ten years ago. On the smaller end of the scale, we haven’t found much, but there’s no reason to expect that exoplanet analogues of Mars, Mercury, Pluto, and Ganymede don’t exist.

  • Surface temperature: This is a big one, as it’s critical for life as we know it. We know that liquid water exists between 0° and 100°C (32–212°F), with the upper bound being a bit fluid due to atmospheric pressure. That 100 (or 180) degrees is a lot of room to play with, but remember that it’s not all available. Many proteins, for example, start breaking down above about 50°C. Below freezing, of course, you get into subsurface oceans, which might be fun for exploration purposes.

  • Atmosphere: Except for a couple of gas giants, we’ve got nothing here. We have no idea if the nitrogen-oxygen mix of Earth is common, or if most planets we find would be CO2 pressure cookers like Venus. Or they could retain their primordial hydrogen-helium atmospheres, or be nearly airless like Mars. Something tells me that we’ll find all of those soon enough.

  • Life: And so we come to this. Life, we know, changes a planet, just as the planet changes it. A biosphere will be detectable, even from the distance of light-years. It will get noticed, once telescopes and instruments are sensitive enough to see it. And it will stand out. Some chemicals just don’t show up without life, or at least not in the quantities that it brings. Methane, O2, and a few others are considered likely biotic markers. The million-dollar question is just how likely life really is. Is it everywhere? Are there aliens on Proxima b right now? If so, are they single-celled, or possibly advanced enough to be looking back at us? Here is the writers’ playground.
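For the surface-temperature factor above, there’s a standard first approximation: balance the starlight a planet absorbs against the heat it radiates as a blackbody. A sketch, which deliberately ignores greenhouse warming:

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26   # solar luminosity, W
AU = 1.496e11      # meters

def equilibrium_temp(luminosity, distance_au, albedo):
    """Blackbody equilibrium temperature of a planet: absorbed starlight
    balanced against re-radiated heat. No greenhouse effect included."""
    a = distance_au * AU
    return (luminosity * (1 - albedo) / (16 * math.pi * SIGMA * a**2)) ** 0.25

# Earth, with its real albedo of ~0.3, comes out near 255 K (about -18 C);
# the greenhouse effect supplies the remaining ~33 K of warming.
print(equilibrium_temp(L_SUN, 1.0, 0.3))
```

Swap in a different luminosity, orbit, or albedo and you get a first guess at your invented planet’s climate; the atmosphere then decides how far reality departs from it.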

What’s to come

Assuming the status quo—never a safe assumption—our capability for detecting and classifying exoplanets is only expected to increase in the coming years. But I’ve heard that one before. Once upon a time, the timeline looked like this: Kepler in 2004 or 2005, the Space Interferometry Mission (SIM) in 2009, and the Terrestrial Planet Finder (TPF) in 2012. In reality, we got Kepler in 2009 (it’s now busted and on a secondary mission). TPF was “indefinitely deferred”, and SIM was left to languish before being mercy-killed some years ago. The Europeans did no better; their Darwin mission suffered the same let’s-not-call-it-cancelled fate as TPF. Now, both missions might get launched in the 2030s…but they probably won’t.

On the bright side, we’ve got a small crop of upcoming developments. TESS (the Transiting Exoplanet Survey Satellite) is slated to launch this year—I’ll believe it when I see it. The James Webb Space Telescope, the Hubble’s less-capable brother, might go up in 2018, but its schedule is going to be too crowded to allow it to do more than confirm detections made by other means.

Ground-based telescopes are about at their limit, but that hasn’t stopped us from trying. The E-ELT is expected to start operations in 2024, the Giant Magellan Telescope in 2025, and these are exoplanet-capable. The Thirty Meter Telescope was supposed to join them in about the same timeframe, but politically motivated protests stopped that plan, and the world is poorer for it.

Instead of focusing on the doom and gloom, though, let’s look on the bright side. Even with all the problems exoplanet research has faced, it’s made wonderful progress. When I was born, we didn’t know for sure if there were any planets outside our own solar system. Now, we’re finding them everywhere we look. They may not be the ones science fiction has trained us to imagine, but truth is always stranger than fiction. Forget about “Earth 2”. In a few years, we might have more Earths than we know what to do with. And wouldn’t that make a good story?

On space battles

It’s a glorious thing, combat in space, or so Hollywood would have us believe. Star Wars shows us an analog of carrier warfare, with large ships (like Star Destroyers) launching wing after wing of small craft (TIE Fighters and X-Wings) that duke it out amid the starry expanse. That other bastion of popular science fiction, Star Trek, also depicts space warfare in naval terms, as a dark, three-dimensional version of the ship-to-ship combat of yore. Most “smaller” universes ape these big two, so the general idea in modern minds is this: space battles look like WWII, but in space.

Ask anyone who has studied the subject in any depth, however, and they’ll tell you that isn’t how it would be. There’s a great divide between what most people think space combat might be like, and the form the experts have concluded it would take. I’m not here to “debunk”, though. If you’re a creator, and you want aerial dogfighting, then go for it, if that’s what your work needs. Just don’t expect the nitpickers to care for it.

Space is big

The first problem with most depictions of space battles is one of scale. As the saying goes, space is big. No, scratch that. I’ll tell you right now that saying is wrong. Space isn’t big. It’s so huge, so enormous, that there aren’t enough adjectives in the English language to encompass its vastness.

That’s where Hollywood runs into trouble. Warfare today is often conducted via drone strikes, controlled by people sitting at consoles halfway around the world from their targets. We rightfully consider that an impersonal way of fighting, but what’s striking is the 10,000 miles standing between offense and defense. How many Americans could place Aleppo on a map? (The guy that finished third in the last presidential election couldn’t.) Worse, how would you make a drone strike dramatic?

In space, the problem is magnified greatly. Ten thousand miles gets you effectively nowhere. From the surface of Earth, that doesn’t even take you past geostationary satellites! It’s over twenty times that to the Moon, and Mars is (at best) about another 100 times that. In naval warfare, it became a big deal when guns got good enough to strike something over the horizon. Space has no horizon, but the principle is the same. With as much room as you’ve got to move, there’s almost no reason why two craft would ever come close enough to see as more than a speck. A range of 10,000 miles might very well be considered point-blank in space terms, which is bad news for action shots.
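Those distances translate directly into sensor and weapon lag, too, since nothing (light, radar, a targeting laser) crosses them faster than c. A quick back-of-envelope, using rough round figures for the distances:

```python
C_MILES_PER_SECOND = 186_282  # speed of light

distances = {
    "geostationary orbit": 22_236,
    "the Moon": 238_900,
    "Mars at closest approach": 34_800_000,
}

for place, miles in distances.items():
    seconds = miles / C_MILES_PER_SECOND
    print(f"Signal to {place}: {seconds:.1f} s one way")
```

Lunar-distance combat already means multi-second round trips for every sensor reading; at interplanetary ranges, you’re aiming at where the enemy was minutes ago.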

Space is empty (except when it isn’t)

Compounding the problem of space’s size is its relative emptiness. There’s simply nothing there. Movies show asteroid belts as these densely packed regions full of rocks bumping into each other and sleek smuggler ships weaving through them. And some stars might even have something like that. (Tabby’s Star, aka KIC 8462852, almost requires a ring of this magnitude, unless you’re ready to invoke Dyson spheres.) But our own Solar System doesn’t.

We’ve got two asteroid belts, but the Kuiper Belt is so diffuse that we’re still finding objects hundreds of miles across out there! And the Main Belt isn’t that much better. You can easily travel a million miles through it without running across anything bigger than a baseball. Collisions between large bodies are comparatively rare; if they were common, we’d know.

Space’s emptiness also means that stealth is quite difficult. There’s nothing to hide behind, and the background is almost totally flat in any spectrum. And, because you’re in a vacuum, any heat emissions are going to be blindingly obvious to anyone looking in the right direction. So are rocket flares, or targeting lasers, radio transmissions…

Space plays its own game

The worst part of all is that space has its own rules, and those don’t match anything we’re familiar with here on Earth. For one thing, it’s a vacuum. I’ve already said that, but that statement points out something else: without air, wings don’t work. Spacecraft don’t bank. They don’t need to. (They also don’t brake. Once they’re traveling at a certain speed, they’ll keep going until something stops them.)

Another of those pesky Newtonian laws that comes into play is the Third: every action has an equal and opposite reaction. That’s how rockets work: they spit stuff out the back to propel themselves ahead. Solar sails use the same principle, but turned around, catching momentum from light instead of throwing mass away. Right now, we’ve got one claimed example (the EmDrive) of something that may get around this fundamental law, assuming it’s not experimental error, but everything in space now and for the near future either needs reaction mass to throw out or something external pushing on it. That puts a severe limit on craft sizes, speeds, and operating environments. Moving, for example, the Enterprise by means of conventional thrusters is a non-starter.

And then there’s the ultimate speed limit: light. Every idea we’ve got to get around the light-speed barrier is theoretical at best, crackpot at worst. Because space is huge, light’s speed limit hampers all aspects of space warfare. It’s a maximum for the transmission of information, too. By the time you detect that laser beam, it’s already hitting you.

Reality check

If you want hyperrealism in your space battles, then, you’ll have to throw out most of the book of received wisdom on the subject. The odds are severely stacked against it being anything at all like WWII aerial and naval combat. Instead, the common comparison among those who have researched the topic is to submarine warfare. Thinking about it, you can probably see the parallels. You’ve got relatively small craft in a relatively big, very hostile medium. Fighting takes place over great distances, at a fairly slow speed. Instead of holding up Star Trek as our example, maybe we should be looking more at Hunt for Red October or Das Boot.

But that’s if reality is what you’re looking for. In books, that’s all well and good, because you don’t have to worry about creating something flashy for the crowd. TV and movies need something more, and they can get it…for a price. That price? Realism.

Depending on the assumptions of your universe, you can tinker a bit with the form of space combat. With reactionless engines, a lot of the problems with ship size and range go away. FTL travel based around “jump points” neatly explains why so many ships would be in such close proximity. Depending on how you justify your “hyperspace” or “subspace”, you could even find a way to handwave banked flight.

Each choice you make will help shape the “style” of combat. If useful reactionless engines require enormous power inputs, for instance, but your civilization has also invented some incredibly efficient rockets on smaller scales, then that might explain a carrier-fighter mode of warfare. Conversely, if everything can use “impulse” engines, then there’s no need for waves of smaller craft. Need super-high acceleration in your fighters, but don’t have a way to counteract its effects? Well, hope you like drones, because that’s what would naturally develop. But if FTL space can only be navigated by a human intelligence (as in Dune), then you’ve got room for people on the carriers.

In the end, it all comes down to the effect you’re trying to create. For something like space combat, this may mean working “backward”. Instead of beginning with the founding principles of your story universe, it might be better to derive those principles from the style of fighting you want to portray. It’s not my usual method of worldbuilding, but it does have one advantage: you’ll always get the desired result, because that’s where you started. For some, that may be all you need.

Magic and tech: heating and cooling

Humans are virtually unique among species in altering their environment to better suit their needs. (How much they alter that environment is a matter of some debate, but that doesn’t concern us now.) Hardly any other species creates an artificial means of changing the ambient temperature of an enclosed area; honeybees fanning or huddling to regulate their hive are a rare, modest exception. Most animals and plants can, at best, regulate their internal temperature, not that of their surroundings. In scope and sophistication, we’re alone.

Heating things up is fairly easy. Fire is one of the oldest inventions of mankind, and it’s practically the standard marker for human habitation. Almost nothing in nature can cause fires—lightning is one of a very few examples—and wildfires are uncontrolled by definition. A tended fire, then, screams for a human interpretation.

Fire, of course, has been useful for many things throughout history. Cooking was one of its earliest uses, with pottery and metalworking coming along later. And as the ages have passed, our command of the flame has only grown. We’ve gone from open fires to furnaces and ovens and incinerators. We’ve changed from using wood to coal to electricity and gas and even lasers.

On the other side of the coin, cooling is much, much harder. Fans are old, but they’re awfully inefficient. Ice melts, and if you don’t have a way to make it, you’ve got to carry it in from elsewhere, losing some (or most) along the way. Some places had the ability to store food in the frozen ground, but that usually only works about two or three months out of the year. It wasn’t until the Scientific Revolution that we started developing ways to create artificial cold, through vacuum pumps and air compressors. Today, we can reach somewhere around a billionth of a degree above absolute zero, the coldest possible temperature, but the vast majority of our ancestors were simply out of luck.

Where we stand

So, the state of our magical world is, compared to ours, pretty dire. We’ll start with cooling technology. That’s easy, because there basically isn’t any. Without magic, its people are mostly limited to fans and (when they can find it) ice. Instead of modern air conditioning, houses are built to control the flow of heat. High ceilings let hot air rise, effectively cooling the lower floor. Homes can be sited to take advantage of the prevailing winds. And food that needs to be preserved can be salted or smoked or pickled, or kept in cellars, where temperatures stay fairly steady and cool.

As in our world, heat is another matter altogether. Our created world has a good command of fire, even before you add in the arcane. They can work (some) metals, which requires great heat and, more importantly, control of that heat. Houses have hearths and fireplaces, and sometimes ovens. A few public buildings have something similar to the Roman hypocaust, a kind of central heating created by piping hot air underneath a raised floor and behind the building’s walls.

Magic’s helping hands

In fantasy stories, fire is typically the most destructive magical element, as well as the most “flashy”. The fireball is the sword-and-sorcery spell. As usual in this series, however, we’ll eschew the over-the-top explosions and stick to something more low-key, but much more effective in advancing the state of a civilization.

It’s still simple to command fire in our magical world, and it is most certainly given to militaristic and destructive uses, but more peaceful mages have investigated arcane fire for its more beneficial properties. A reliable fire-starter is merely the first of these. Starting a fire in older days tended to be…difficult, but the mages have created a solution. It’s a tiny magical crystal, of the same kind we’ve seen in previous entries, but attuned to fire and heat. Attached to the end of a short stick, it causes tinder to ignite within a few seconds. In modern terms, it’s a lighter.

Larger versions of this produce much more heat, but they’re more expensive and less efficient, making it less than practical to use them for home heating. Mages are working on that problem, however. A few richer individuals can afford the waste, and they do use these fire crystals to heat their homes in the winter. But even their cooks prefer the tried-and-true methods of a proper fire, even if it was started by magic.

Cooling is a harder problem, even for magic. That’s because, technically speaking, there’s no such thing as cold. There’s only the absence of heat. Making something colder requires taking away some of its heat. Fans, for example, work by causing a breeze; the moving air carries away the heat near your body, which has the effect of cooling it. That’s one strategy that can be exploited by magical means, and our mages have done so. Electric fans obviously need electricity, but arcane ones can be powered by the same force providers we’ve already met. Those are expendable—and thus costly—but they get the job done.

Besides these forms of crystallized magic, the wizards of our magical world have a few other tricks up their sleeves. Personal spells, of course, are very important. Mages can light their own fires at the touch of a finger and an arcane word. They can provide their own cooling winds. And some of them can even use spells to increase their own ability to withstand extremes of hot and cold.

Far and wide

But the biggest impact of this greater command of fire is in the knock-on effects it brings to the rest of the world. Starting fires is great, but they’re only useful if you, well, use them, and it’s hard to find medieval-era technology that couldn’t benefit from better ways of making heat.

Metallurgy is the obvious winner here. With magic allowing bigger, hotter, more controllable fires and sources of heat, it becomes possible to melt and boil metals otherwise impervious to the era’s tech. This leads to better, purer alloys, among other things. Steel, naturally, will be one of the first. Historical methods of production were largely limited to small batches until the Industrial Revolution.

Cooking advances with better heat, too. So do many manufacturing professions. And if magical methods of heating become easier and cheaper—this is not a given in our setting, but it could be in others—then wood and charcoal fall out of favor everywhere, because magic takes over. Environmentalists rejoice, because even this modest level of magic means that coal never becomes needed for heat. Nor does oil. The entire fossil fuel industry is obsoleted before it’s even born.

It’s counterintuitive, but better heat technology will also lead to a greater understanding of cold. Most of the early discoveries about cold had to wait until things like steam power and vacuum pumps arrived. Magic short-circuits that, though. Magical means of power generation take the place of steam engines, even in laboratory settings, potentially allowing the science of refrigeration to progress much earlier. Our magical kingdom is on the verge of such discoveries, with all they represent. The first true refrigerators and freezers may be less than a lifetime away. Even if they aren’t, something as simple as a reliable way of producing ice represents a century or more of advancement.

Next time

The next part of this series will move on from heating a house to building it. We’ll see how magic aids in construction, from building materials to architectural designs. For now, since summer has started, find somewhere cold and enjoy the fact that you can.

Magic and tech: medicine

Human history is very much a history of medicine and medical technology. You can even make the argument that the advanced society we have today exists only because of medical breakthroughs. Increased life expectancy, decreased infant mortality, hospitals, vaccines, antibiotics—I could go on for hours. It all adds up to a longer, healthier life, and that means more time to participate in society. The usual retirement age is 65, and it’s entirely likely it’ll hit 70 before I do, and the quality of life at such an advanced age is also steadily rising. That means more living grandparents (and great-grandparents and great-uncles and so on) and more people with the wisdom that hopefully comes with age.

Not too long ago, things were different. The world was full of dangers, many of them fatal. Disease could strike at any time, without warning, and there was little to be done but wait or pray. Childbirth was far more often deadly to the mother or the child…or both. Even the simplest scratches could become infected. Surgery was as great a risk as the problems it was trying to solve. (Thanks to MRSA and the like, those last two are becoming true again.) If you dodged all those bullets, you still weren’t out of the woods, because you had to worry about all those age-related troubles: blindness, deafness, weakness.

Life in, say, the Middle Ages was very likely a life of misery and pain, but that doesn’t mean there wasn’t medicine, as we’ll see. It was far from what we’re used to today, but it did exist. And there is probably no part of civilization more strongly connected to magic than medicine. What would happen if the two really met?

Through the ages

Medicine, in the sense of “things that can heal you”, dates back about as far as humanity itself. And for all of that history except the last few centuries, it was almost exclusively herbal. Every early culture has its own collection of natural pharmaceuticals (some of them even work!) accompanied by a set of traditional cures. In recent decades, we’ve seen a bit of a revival of the old herbalism, and every drugstore is stocked with ginkgo and saw palmetto and dozens of other “supplements”. Whether they’re effective or not, they have a very long history.

Non-living cures also existed, and a few were well-known to earlier ages. Chemical medicine, however, mostly had to wait for, well, chemistry. The alchemists of old had lists of compounds that would help this or that illness, but many of those were highly toxic. We laugh and joke about the side effects of today’s drugs, but at least those are rare; mercury and lead are going to be bad for you no matter what.

Surgery is also about as old as the hills. The Egyptians were doing it on eyes, for example, although I think I’d rather keep the cataracts. (At least then I’d be like the Nile, right?) Amputation was one of the few remedies for infection…which could also come from surgery. A classic Catch-22, isn’t it? Oh, and don’t forget the general lack of anesthesia.

What the earlier ages lacked most compared to today was not the laundry list of pills or a dictionary of disorders. No, the thing that most separates us from earlier times when it comes to medicine is knowledge. We know how diseases spread, how germs affect the body, how eyes and ears go bad. We’re unsure about a few minor details, but we’ve got the basics covered, and that’s why we can treat the sick and injured so much better than before. Where it was once thought that an illness was the will of God, for instance, we can point to the virus that is its true cause.

And then comes magic

So let’s take that to the magical world. To start, we’ll assume the mundane niceties of medieval times. That’s easier than you might think, because our world’s magic won’t be enough to let its users actually see viruses and other infectious agents. Nor will it allow them to see into the human body at the same level of detail as a modern X-ray, CT scan, or ultrasound. And we’ll short-circuit the obvious idea by saying that there are no cure-all healing spells. Real people don’t have hit points.

But improvements aren’t hard to find. Most of medicine is observation, and we’ve already seen that the magical world has spells that can aid in knowledge, recall, and sensory perception. An increase in hearing, if done right, is just as good as a stethoscope, and we can imagine similar possibilities for the other senses.

Decreasing the ability of the senses is another interesting angle. In normal practice, it’s bad form to blind someone, but a numbing spell would be an effective anesthetic. A sleeping spell is easy to work and has a lot of potential in a hospital setting. And something to kill the sense of smell might be a requirement for a doctor or surgeon as much as the patient!

The practice of surgery itself doesn’t seem like it can benefit much from the limited magic we’re giving this world. It’s more the peripheral aspects that get improved, but that’s enough. Think sharper scalpels, better stitches, more sterilization.

Herbal medicine gets better in one very specific way: growth. It’s not that our mages can cast a spell to make a barren field bloom with plant life, but those plants that are already there can grow bigger and faster. That includes the pharmaceutical herbs as well as grain crops. Magic and alchemy are closely related, so it’s not a stretch to get a few early chemical remedies; magic helps here by allowing easier distillation and the like.

Some of the major maladies can be cured by magical means in this setting. Mostly, this goes back to the sensory spells earlier, but now as enchantment. We’ve established that spells can be “stored”, and this gets us a lot of medical technology. An amulet or bracelet to deaden pain (pain is merely a subset of touch, after all) might be just as good as opium—or its modern equivalents. Sharpened eyesight could be achieved by magic as easily as eyeglasses or Lasik surgery.

In conclusion

The field of medicine isn’t one that can be solved by magic alone. Not as we’ve defined it, anyway. But our magical kingdom will have counterparts to a few of the later inventions that have helped us live longer, better lives. This world will still be dangerous, but prospects are a bit brighter than in the real thing.

What magic does give our fantasy world is a kind of analytical framework, and that’s a necessary step in developing modern medicine. Magic in this world follows rules, and the people living there know that. It stands to reason that they’ll wonder if other things follow rules, as well. Investigating such esoteric mysteries will eventually bear fruit, as it did here. Remember that chemistry was born from alchemy, and thus Merck and Pfizer owe their existence to Geber and Paracelsus.

Chemistry isn’t the only—or even the most important—part of medicine. Biology doesn’t directly benefit from magic, but it shares the same analytical underpinnings. Physical wellness is harder to pin down, but people in earlier times tended to be far more active than today. For the most part, they ate healthier, too. But magic won’t help much there. Indeed, it might make things worse, as it means less need for physical exertion. Also, the “smaller” world it creates is more likely to spread disease.

In the end, it’s possible that magic’s medical drawbacks outweigh its benefits. But that’s okay. Once the rest of the world catches up, it’ll be on its way to fixing those problems, just like we have.

Out of the dark: building the Dark Ages

We have an awful lot of fiction out there set in something not entirely unlike our Middle Ages. Almost every cookie-cutter fantasy world is faux-medieval, and that’s counting only the ones that aren’t even trying to be. The Renaissance and early Industrial Era also get plenty of love, and Roman antiquity even comes up from time to time. But there’s one time period in our history that seems a bit…left out. I’m talking about those centuries after Rome fell to the barbarian hordes, but before William crossed the Channel to give England the same fate. I’m talking about the Dark Ages.

A brighter shade of dark

Now, as we know today, what previous generations called the Dark Ages weren’t really all that dark. Sure, there were Vikings and Vandals, barbarians and Britons, Goths and Gauls, but it wasn’t a complete disaster. The reason we speak of the “Dark Ages”, though, is contrast. Rome was a magnificent empire by any account, and the “Dark Age” moniker pinned on its fallen children was coined back in the Renaissance (Petrarch usually gets the credit) and embraced wholeheartedly by the equally “shining” Enlightenment. By comparison, the time between wasn’t exactly grand.

Even with our modern knowledge, the notion of a Dark Age is still useful, even if it doesn’t quite mean what we think it means. In general, we can use it to refer to any period of technological, social, and political stagnation and regression. That’s not to say there wasn’t progress in the Dark Ages. One great book about the period is titled Cathedral, Forge, and Waterwheel, and that’s a pretty good indication of some of the advancement that did happen.

Compared to what came before—the Roman empire, with its Colosseum and aqueducts and roads—there’s a huge difference, especially at the start of the Dark Ages. In some parts of Europe, particularly those farthest from the imperial center, general conditions fell to their lowest levels in hundreds of years. While the Empire itself actually did survive in the east in the form of the Byzantines (who were even considered the “true” emperors by the first generations of barbarian kings), the west was shattered, and it showed. But they dug themselves out of that hole, as we know.

Dying light

So, even granting our more limited definition of “Dark Ages”, what caused them? Well, there are a lot of theories. The Western Empire formally fell in 476 (Rome itself had already been sacked in 410 and again in 455), and that collapse is usually considered a primary cause. A serious cold snap starting around 536 couldn’t have helped matters. Plagues around the same time combined with the war and famine to cause even greater death, completing the quartet of the Horsemen.

But all that together shouldn’t have been enough to devastate the society of western Europe, should it? If it happened today, it wouldn’t, because our world is so connected, so small, relative to Roman times. If the whole host of apocalyptic horror visited the EU today, hundreds of millions of people would die, but we wouldn’t have a new Dark Age. The reason can be summed up in one word: continuity.

Yes, half of the Roman Empire survived. In a way, it was the stronger half, but it was also the more distant half. When Rome fell, when all the other catastrophes visited its remnants, the effect was to cause a cultural break. Many parts of the empire were already more or less autonomous, growing ever more apart, and the loss of the “center of gravity” that was Rome merely hastened the process.

A look at Britain illustrates this. After Rome all but gave up on its island province, Britain returned the favor. Outside of the monasteries, Rome was practically forgotten within a few generations, once the Saxons and their other Germanic friends rolled in. The Danes that started vacationing there in the ninth century cared even less for news from four hundred years ago. By the time William came conquering, Anglo-Saxon England was a far cry from Roman Britannia. This is an extreme example, though, because there was almost no continuity in Britain to start with, so there wasn’t much to lose. However, similar stories appear throughout Europe.

Recurring nightmare

Although Europe’s Dark Ages are a thousand years past, they aren’t the only example of the kind of discontinuity that defines a Dark Age. Something of the same sort happened in Greece some sixteen hundred years earlier, when the Mycenaean palace civilization collapsed. The native peoples of the Americas can be considered to have entered a Dark Age circa 1500, as the mighty empires of Mexico and Peru fell to Spanish invaders.

In every case, though, it’s more than just the fall of a civilization. A Dark Age needs a prolonged period of destruction, probably at least two generations long. To make an age go Dark requires severe population loss, a total breakdown of government, and the forcing of a kind of “siege mentality” on a society. Climatic shifts are just a bonus. In all, a Dark Age results from a perfect storm of causes, all of which combine to break the people. Eventually, due to the death, destruction, and constant need to be on guard, everything else falls by the wayside. There simply aren’t enough people to keep things going. Once those that are left start dying off, the noose closes. The circle is broken, and darkness settles in.

That naturally leads to another question: could we have a new Dark Age? It’s hard to imagine, in our present time of progress, something ever causing it to stop, but that doesn’t make it impossible. Indeed, almost the entire sub-genre of post-apocalyptic fiction hinges on this very event. It can happen, but—thankfully—it won’t be easy.

What would it take, then? Well, like the Dark Ages that have come before, it would be a combination of factors. Something causing death on a massive, unprecedented scale. Something to put humanity on the back foot, to disrupt the flow of society so completely that it would take more than a lifetime to recover. In that case, it would never recover, because there would be no one left who remembered the “old days”. There would be no more continuity.

I can think of a few ways that could work. The ever-popular asteroid or comet impact is an easy one, and it even has the knock-on effect of a severe climate shock. Nuclear war never really seemed likely in my lifetime, but I was born in 1983, so I missed the darker days of the Cold War. I did watch WarGames, though, and I remember seeing those world maps lighting up at the end. But even two hundred years after an exchange like that, I don’t think we’d be looking at a Fallout game.

Other options all have their problems. An incredibly virulent outbreak (Plague, Inc. or your favorite zombie movie) might work, but it would have to be so bad that it makes the 1918 flu look like the common cold. Zika is in the news right now, but it simply won’t cut it, nor would Ebola. You need something highly infectious, but with a long incubation period and a massive mortality rate. It’s hard to find a virus that fits all three of those, for evolutionary reasons. The other forms of infectious agents—bacteria, fungi, prions—all have their own disadvantages.

Climate change is the watchword of the day, but it won’t cause a Dark Age by itself. It’s too slow, and even the most alarming predictions don’t take us to temperatures much higher than a few thousand years ago, and that’s assuming that nobody ever does anything about it. No matter what you believe about global warming, you can’t make it enough to break us without some help.

Terminator-style AI is another possibility, one looking increasingly likely these days. It has some potential for catastrophe, but I’m not sure about using it as the continuity-breaker. The same goes for nanotech bots and the like. Maybe they’ll enslave us, but they won’t beat us down so badly that we lose everything.

And then there’s aliens. (Insert History Channel guy here.) An alien-imposed destruction of civilization would be the logical extension of the Roman hordes into the global future. Their attacks would likely be massive enough to influence the planet’s climate. They would cause us to huddle together for mutual defense, assuming they left any of us alive and alone. Yeah, that could work. It needs a lot of ifs, but it’s plausible enough to make for a good story.

The light returns

The Dark Age has to come to an end. It can’t last forever. But there’s no easy signal that it’s over. Instead, it’s a gradual thing. The key point here, though, is that what comes out of the Dark Age won’t be the same as what went in. Look again at Europe. After Rome fell, some of its advances—concrete is a good example—were lost to its descendants for a thousand years. Yet the continent did finally surpass the empire.

Over time, the natural course of progress will lift the Dark Age area to a level that is near enough where it left off, and things can proceed from there. It will be a different place, and that’s because of the discontinuity that caused the darkness in the first place. The old ways become lost, yes, but once we discover the new ways, they’ll be even better.

We stand on the shoulders of giants, as Newton said. Those giants are our ancestors, whether physically or culturally. Sometimes they fall, and sometimes the fall is bad enough that it breaks them. Then we must stand on our own and become our own giants. The Dark Age is that time when we’re standing alone.

Life below zero: building the Ice Age

As I write this post, parts of the US are digging themselves out of a massive snowstorm. (Locally, of course, the anti-snow bubble was in full effect, and the Tennessee Valley area got only a dusting.) Lots of snow, cold temperatures, and high winds create a blizzard, a major weather event that comes around once every few years.

But our world has gone through extended periods of much colder weather. In fact, we were basically born in one. I’m talking about ice ages. In particular, I’m referring to the Ice Age, the one that ended about 10,000 years ago, as it’s far better known and understood than any of the others throughout the history of the planet.

The very phrase “Ice Age” conjures up images of woolly mammoths lumbering across a frozen tundra, of small bands of humanity struggling to survive, of snow-covered evergreen forests and blue walls of ice. Really, if you think about it, it paints a picturesque landscape as fascinating as it seems inhospitable. In that, it’s no different from Antarctica or the Himalayas or Siberia today…or Mars tomorrow. The Earth of the Ice Age, as a place, is one that fuels the imagination simply because it is so different. But the question I’d like to ask is: is there a story in the Ice Age?

Lands of always winter

To answer that question, we first need to think about what the Ice Age is. A “glaciation event”, to use the technical term, is pretty self-explanatory. Colder global temperatures mean more of the planet’s surface is below freezing (0° Celsius, hence the name of this post), which means water turns to ice. The longer the subzero temps, the longer the ice can stick around. Although the seasons don’t actually change, the effect is a longer and longer winter, complete with all the wintry trappings: snow, frozen ponds and lakes, plant-killing frosts, and so on.

We don’t fully understand what causes these glaciation events to start and stop, though orbital (Milankovitch) cycles and shifts in greenhouse gases appear to be part of the story. Some of them last for tens or even hundreds of thousands of years. The worst can cover the whole world in ice, creating a so-called “Snowball Earth” scenario. (While interesting in its own right, that particular outcome doesn’t concern us here. On a snowball world, there’s little potential for surface activity. Life can survive in the deep, unfrozen oceans, but that doesn’t sound too exciting, in my opinion.)

If that weren’t bad enough, an Ice Age can be partially self-sustaining. As the icecaps grow—not just the ones at the poles, but anywhere—the Earth can become more reflective. Higher surface reflectivity means that less heat is absorbed, dropping temperatures further. And that allows the ice to spread, in a feedback loop best served cold.
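You can see the raw material for that feedback in a toy zero-dimensional energy-balance model, where a planet's equilibrium temperature comes from balancing absorbed sunlight against blackbody radiation. This sketch ignores the greenhouse effect entirely, so the absolute temperatures run cold; it's the gap between the two albedos that matters:

```python
# Toy energy balance: absorbed sunlight = radiated heat
#   S * (1 - albedo) / 4 = sigma * T^4
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # solar constant at Earth, W m^-2

def equilibrium_temp(albedo: float) -> float:
    """Blackbody equilibrium temperature in kelvin for a given reflectivity."""
    return (S * (1 - albedo) / (4 * SIGMA)) ** 0.25

print(f"albedo 0.30 (roughly today): {equilibrium_temp(0.30):.0f} K")
print(f"albedo 0.40 (icier planet):  {equilibrium_temp(0.40):.0f} K")
```

Bumping the reflectivity from 0.30 to 0.40 drops the equilibrium temperature by nearly ten kelvin in this crude model, and a colder planet grows more ice, which raises the reflectivity again.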

Living on the edge

But we know life survived the Ice Age. We’re here, after all. Even the megafaunal extinction that marked the close of the Pleistocene epoch came as the ice was already retreating. So not only can life survive in the time of ice, it can thrive. How?

Well, that’s where the difference between “ice age” and “snowball” comes in. First off, the whole world wasn’t completely frozen over 20,000 years ago. Yes, there were glaciers, and they extended quite far from the poles. (Incidentally, the glaciers that covered the northern part of North America stopped not that far from where I live.) But plenty of ice-free land existed, especially in the tropics. Oh, and guess where humanity came from?

Even in the colder regions, life was possible. We see that today in Alaska, for instance. And the vagaries of climate mean that, strangely enough, that part of the world wasn’t much colder than it is today. So one lead on Ice Age life can be found by studying the polar regions of the present, from polar bears to penguins and Inuit to explorers.

The changing face

But the world was a different place in the Ice Age, and that was entirely because of the ice. The climate played by different rules. Hundreds of feet of ice covering millions of square miles will do that.

The first thing to note is that the massive ice sheets that covered the higher latitudes function, climatically speaking, just like those at the poles. Cold air is denser than warm air, so it sinks. That creates a high-pressure area that doesn’t move much. In the Northern Hemisphere, a high-pressure system drives clockwise winds along its boundary, but its interior tends to be stable.

Anyone who lives in the South knows about the summer ridge that builds every year, sending temperatures soaring to 100°F and causing air-quality and fire danger warnings. For weeks, we suffer in miserable heat and suffocating humidity, with no rain in sight. It’s awful, and it’s the main reason I hate summer. But think of that same situation, changing the temperatures from the nineties Fahrenheit to the twenties. Colder air holds less moisture, so you have a place with dry, stale air and little prospect for relief. In other words, a cold desert.
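The "colder air holds less moisture" point is easy to quantify with the Magnus approximation for the saturation vapor pressure of water. A rough sketch; the coefficients are a standard fit, while the two temperatures are my own stand-ins for a Southern summer and an ice-sheet margin:

```python
import math

# Magnus approximation for saturation vapor pressure over water, in hPa.
# The fit is reasonable from roughly -40 C to +50 C.
def saturation_vapor_pressure(temp_c: float) -> float:
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

hot, cold = 35.0, -5.0  # degrees Celsius
print(f"{hot:>5.0f} C: {saturation_vapor_pressure(hot):6.1f} hPa")
print(f"{cold:>5.0f} C: {saturation_vapor_pressure(cold):6.1f} hPa")
```

Air at 35°C can hold more than ten times the water vapor of air at -5°C, which is why the frozen version of that stagnant ridge is a desert rather than a swamp.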

That’s the case on the ice sheets, and some thinkers extend that to the area around them. Having so much of the Earth’s water locked into near-permanent glaciers means that there will be less precipitation overall, even in the warm tropics. That has knock-on effects in those climates. Rainforests will be smaller, for example, and much of the land will be more like savannas or steppes, like the African lands that gave birth to modern humans.

But there are still prospects for precipitation. The jet stream will move, stray winds will blow. And the borders of the ice sheets will be active. This is for two reasons. First, the glaciers aren’t stationary. They expand and contract with the subtle seasonal and long-term changes in temperature. Second, that’s where the strongest winds will likely be. Receding glaciers can form lakes, and winds can spread the moisture from those lakes. The result? Lake-effect precipitation, whether rain or snow. The lands of ice will be cold and dry, the subtropics warm (or just warmer) and dry, but the boundary between them has the potential to be vibrant, if cool.

Making it work

So we have two general areas of an Ice Age world that can support the wide variety of life necessary for civilization: the warmer, wetter tropics and the cool convergence zones around the bases of the glaciers. If you know history, then you know that those are the places where the first major progress occurred in our early history: the savannas of Africa, the shores of the Mediterranean, the outskirts of Siberia and Beringia.

For people living in the Ice Age, life is tough. Growing seasons are shorter, more because of temperature than sunlight; the first crops weren’t domesticated until after the ice was mostly gone, when more of the world could support agriculture. Staying warm is a priority, and making fire is a core part of survival. Clothing reflects the cold: furs, wool, insulation. Housing is a must, if only to have a safe place for a fire and a bed. Society, too, will be shaped by these needs.

But the Ice Age is dynamic. Fixed houses are susceptible to moving or melting glaciers. A small shift in temperature (in either direction) changes the whole landscape. Nomadic bands might be better suited to the periphery of the ice sheets, with the cities at a safe distance.

The long summer

And then the Ice Age comes to an end. Again, there’s no real consensus on why, but it has to happen. We’re proof of that. And when it does happen…

Rising temperatures at the end of a glaciation event are almost literally earth-shattering. The glaciers recede and melt (not completely; we’ve still got a few left over from our last Ice Age, and not just at the poles), leaving destruction in their wake. Sea levels rise, as you’d expect, though in some places the local sea level can actually fall, as the continents rebound once the weight of the ice is lifted.

The tundra shrinks, squeezing out those plants and animals adapted to it. Conversely, those used to warmer climes now have a vast expanse of fresh, new land. Precipitation begins to increase as ice turns to water and then evaporates. The world just after the Ice Age is probably going to be a swampy one. Eventually, though, things balance out. The world’s climate reaches an island of stability. Except when it doesn’t.

Our last Ice Age ended in fits and starts. Centuries of relative warmth could be wiped out in a geological instant. The last gasp was the Younger Dryas, a cold snap that started around 13,000 years ago and lasted around a tenth of that time. To put that into perspective, if it were ending right now (2016), it would have started around the time of the Merovingians and the Muslim conquest of Spain. But we don’t even know if the Younger Dryas was part of the Ice Age, or if it had another cause. (One hypothesis even claims it was caused by a meteor striking the earth!) Whether it was or wasn’t the dying ember of the Ice Age doesn’t matter much, though; it was close enough that we can treat it as if it were.

In the intervening millennia, our climate has changed demonstrably. This has nothing to do with global warming, whatever you think on that topic. No, I’m talking about the natural changes of a planet leaving a glacial period. We can see the evidence of ancient sea levels and rainfall patterns. The whole Bering Strait was once a land bridge, the Sahara a land of green. And Canada was a frozen wasteland. Okay, some things never change.

All this is to say that the Ice Age doesn’t have to mean mammoths and tundra and hunter-gatherers desperate for survival. It can be a time of creation and advancement, too.

The changing of the seasons

Winter is coming. It’s not just a catchy motto from Game of Thrones, you know. No, winter really is on its way, as the seasons move on their eternal cycle. And this change from fall to winter can make you wonder. We know why the seasons change: our planet’s tilt, combined with its movement around the sun. But what does that truly mean? And, from a worldbuilding perspective, does it have to be that way? Well, let’s take a look.

Reason for the season

The Earth is tilted on its axis. Anybody past about the third grade knows that, and it’s patently obvious just by looking at the sky at different points in the year. Right now, our world has somewhere in the vicinity of 23° of axial tilt, and that’s a fairly stable number. It hasn’t changed much at all in written history, and only within about a degree either way throughout all of human existence. In the distant past (millions of years ago), there were periods where it was much higher or lower, but things are much more settled in this modern era.

Now, the axis doesn’t move, at least on scales of a single year. (We’ll ignore precession and other effects for the moment, as they tend to work on much larger periods of time.) What does that mean for us? Only that different parts of the world will get more sunlight at different times of the year. And that’s what causes the seasons to change.

Summer, of course, is when your part of the world gets the most direct sunlight, and that happens when your half of the world points more towards the sun. Winter is the exact opposite, and it’s on the other side of the year. Spring and fall (autumn, if you prefer) are in the middle, when the planet’s tilt is roughly perpendicular to the sun’s rays. But the Earth has two hemispheres: northern and southern. They can’t both be pointed at the sun, thus the complementary seasons that make Christmas a summertime holiday in Australia.

Tropical highs and lows

There’s a lot more to it than that, though. Because of the Earth’s tilt of about 23°, we can divide the world into a few sections. First, we have the tropics, the area around the equator, from the Tropic of Cancer in the north, to the Tropic of Capricorn in the south. Coincidentally enough, these lines are at exactly the latitude equal to the axial tilt. (It’s not a coincidence at all; it’s the whole reason why they exist.) Every point in the tropics will have the sun directly overhead at some time in the year.

The polar regions are also defined by the tilt. The Arctic and Antarctic Circles are at a latitude of about 67°, as far from the pole (90°) as the axial tilt, or in math terms: $90° – a$. Everywhere in a polar region will have at least one day a year when the sun never rises at all. But it will also have days when the sun doesn’t set, giving us the “midnight sun” of Alaska and Scandinavia.
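That relationship works for any tilt, not just Earth’s. Here’s a minimal sketch, assuming only the geometry above (the function name and the 23.4° figure for Earth are my own choices):

```python
def circle_latitudes(axial_tilt_deg):
    """Given a planet's axial tilt in degrees, return the latitudes
    (in degrees) of its tropics and its polar circles."""
    tropic = axial_tilt_deg               # sun can be directly overhead up to here
    polar_circle = 90.0 - axial_tilt_deg  # midnight sun / polar night begin here
    return tropic, polar_circle

# Earth's tilt is roughly 23.4 degrees:
tropic, polar = circle_latitudes(23.4)
print(tropic, polar)  # prints 23.4 66.6
```

Feed it a larger tilt and the tropics widen while the polar circles creep toward the equator, which is exactly the knob a worldbuilder would turn.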

In between the polar and tropical regions lie the temperate zones. In these, the sun will never be directly overhead or directly below, and it will rise and set every day. And it’s here that seasonal variation has the most visible effects.

Day and night

If the Earth weren’t tilted, there wouldn’t be any seasons. Every night would be 12 hours long, no matter where you were. But we don’t live in that world; we live in one that is tilted. Thus, our nights change in length. At the equinoxes, the lengths of day and night are equal, hence the name. At the solstices, they’re as far apart as can be. In between, there’s a gradual shifting that gives us the feeling that days are growing longer or shorter.

As you get farther from the equator, the variation grows. Thus, at my latitude of around 35° north, I get only about 9 hours of daylight on the winter solstice, but summer nights will also be that short. Up in New York, the split might be closer to 15/9, while London sees something like 17/7. Helsinki, up near 60°, is going to have some long winter nights, but there will always be a sunrise. Barrow, Alaska and McMurdo Station in Antarctica are both inside the polar regions, so they’ll have days without nights, or vice versa.
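You can estimate those numbers yourself with the standard sunrise equation. This is a rough sketch, ignoring atmospheric refraction and the width of the solar disk (both of which add a few extra minutes of light):

```python
import math

def daylight_hours(latitude_deg, solar_declination_deg):
    """Approximate hours of daylight from the sunrise equation.
    Declination runs from about -23.4 (northern winter solstice)
    to +23.4 (northern summer solstice)."""
    lat = math.radians(latitude_deg)
    dec = math.radians(solar_declination_deg)
    cos_h = -math.tan(lat) * math.tan(dec)
    if cos_h >= 1.0:    # polar night: the sun never rises
        return 0.0
    if cos_h <= -1.0:   # midnight sun: the sun never sets
        return 24.0
    # hour angle at sunrise, converted to hours (15 degrees per hour)
    return 2 * math.degrees(math.acos(cos_h)) / 15.0

# 35 degrees north on the winter solstice:
print(round(daylight_hours(35, -23.4), 1))  # prints 9.6
```

Push the latitude past the polar circle and the formula falls off the ends of the cosine, which is exactly where the days without nights begin.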

An added complication

The whole thing would be perfectly symmetrical but for one little detail. Earth’s orbit around the sun isn’t a perfect circle. It’s an ellipse. That ellipse doesn’t move any more than the axis does. (Again, we’re ignoring precession.) As of right now, the perihelion, the point closest to the sun, comes around in January, during the northern winter. Orbital mechanics dictates that the aphelion, then, is six months later.

As anyone who has played Kerbal Space Program knows, things move more slowly at apoapsis. (“Aphelion” is just the apoapsis of something orbiting the sun.) Therefore, since our aphelion occurs in July, northern summer is a little bit longer than winter, while the southern hemisphere is the other way around. It’s not much of a difference, a few days at most, so it doesn’t affect the climate that much. But it’s something you may have to keep in mind.

Another world

So all that works for Earth. How about a different planet? How would the seasons work? The answer: about the same. Earth is simply the most convenient example, since we’re already living here. Mars has seasons, too; the Phoenix lander was killed by the rigors of a Martian polar winter. For the rest of the solar system, things get dicey. Jupiter doesn’t have much tilt, for example, while Uranus is practically lying on its side. Mercury has its resonance-lock thing going on, which screws everything up. And moons don’t really work the same way.

But for your ordinary, habitable, terrestrial world, seasons are going to be like Earth’s. Summer and winter, spring and fall, they’re all going to be there. They may be different lengths, based on the planet’s orbital period and eccentricity. The tropical and polar zones may be larger or smaller, if the tilt isn’t our 23°. The division of day and night might scale differently, due to these same factors. But from a scientific point of view, that’s all you have to worry about. The years-long summers and winters of Westeros are scientifically implausible; you need magic to account for them.

Summer is always going to be the hottest part of the year, with the most sunlight and shortest nights. Winter will be the coldest; the sun will hang low in the sky, and its rays will strike more glancing blows on the world. Spring and autumn will both be marked by equinoxes, days when the periods of daylight and darkness are the same length. Spring tends to get warmer as you go through it, while autumn cools down.

In the tropics of your fictional world, there won’t be as much seasonal variation, especially close to the equator. The poles, by contrast, will be marked by long summer days, cold winter nights, and periods of total darkness or everlasting sunshine. In between will be the temperate zones, where civilization tends to flourish. And the southern hemisphere will always be backwards when it comes to the calendar.

But this is all speaking from the view of orbital mechanics. On the ground, there is a lot of room for change. Latitude only determines the kinds of seasons you have, whether tropical, temperate, or polar. A location’s climate is certainly affected by this, but many more factors come into play, so many that I’ll dedicate a future post to them.

Mars: fantasy and reality

Mars is in the public consciousness right now. The day I’m writing this, in fact, NASA has just announced new findings that indicate flowing water on the Red Planet. Of course, that’s not what most people are thinking about; the average person is thinking of Mars because of the new movie The Martian, a film based on a realistic account of a hypothetical Mars mission from the novel of the same name.

We go through this kind of thing every few years. A while back, it was John Carter. A few years before that, we had Mission to Mars and Red Planet. Go back even further, and you get to Total Recall. It’s not really that Mars is just now appearing on the public’s radar. No, this goes in cycles. The last crop of Martian movies really came about from the runaway success of the Spirit and Opportunity rovers. Those at the turn of the century were inspired by earlier missions like Mars Pathfinder. And The Martian owes at least some of its present hype to Curiosity and Phoenix, the latest generation of planetary landers.

Move outside the world of mainstream film and into written fiction, though, and that’s where you’ll see red. Mars is a fixture of science fiction, especially the “harder” sci-fi that strives for realism and physical accuracy. The reasons for this should be obvious. Mars is relatively close, far nearer to Earth than any other body that could be called a planet. Of the bodies in the solar system besides our own world, it’s probably the least inhospitable, too.

Not necessarily hospitable, mind you, but Mars is the least bad of all our options. I mean, the other candidates look about as habitable as the current Republican hopefuls are electable. Mercury is too hot (mostly) and much too difficult to actually get to. Venus is a greenhouse pressure cooker. Titan is way too cold, and it’s about a billion miles away, to boot. Most everything else is an airless rock or a gas giant, neither of which scream “habitable” to me. No, if you want to send people somewhere useful in the next couple of decades, you’ve got two options: the moon and Mars. And we’ve been to the moon. (Personally, I think we should go back there before heading to Mars, but that seems to be a minority opinion.)

But say you want to write a story about people leaving Earth and venturing out into the solar system. Well, for the same reasons, Mars is an obvious destination. But the role it plays in a fictional story depends on a few factors. The main one of these is the timeframe. When is your story set? In 2050? A hundred years from now? A thousand? In this post, we’ll look at how Mars changes as we move our starting point ahead in time.

The near future

Thanks to political posturing and the general anti-intellectual tendencies of Americans in the last generation, manned spaceflight has taken a backseat to essentially everything else. As of right now, the US doesn’t even have a manned craft, and the only one on the drawing board—the Orion capsule—is intentionally doomed to failure through budget cuts and appropriations adjustments. The rest of the world isn’t much better. Russia has the Soyuz, but it’s only really useful for low-Earth orbit. China doesn’t have much, and they aren’t sharing, anyway. Private companies like SpaceX are trying, but it’s a long, hard road.

So, barring a reason for a Mars rush, the nearest future (say, the next 15-20 years) has our planetary neighbor as a goal rather than a place. It’s up there, and it’s a target, but not one we can hit anytime soon. The problem is, that doesn’t make for a very interesting story.

Move up to the middle of this century, starting around 2040, and even conservative estimates give us the first manned mission to Mars. Now, Mars becomes like the moon in the 1960s, a destination, a place to be conquered. We can have stories about the first astronauts to make the long trip, the first to blaze the trail through interplanetary space.

With current technology, it’ll take a few months to get from Earth to Mars. The best launch windows come around once every 26 months or so; any other time would increase the travel duration dramatically. The best analogy for this is the early transoceanic voyages. You have people stuck in a confined space together for a very long time, going to a place that few (or none) have ever visited, with a low probability of survival. Returning early isn’t an option, and returning at all might be nearly impossible. They will run low on food, they will get sick, they will fight. Psychology, not science, can take center stage for a lot of this kind of story. A trip to Mars can become a character study.
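That launch-window rhythm is just the synodic period of the two planets. Here’s a minimal sketch, assuming circular orbits and sidereal periods of 365.25 days for Earth and 687 for Mars:

```python
def synodic_period(inner_days, outer_days):
    """Days between successive alignments (launch windows) for two
    planets on circular orbits: 1 / (1/T_inner - 1/T_outer)."""
    return 1.0 / (1.0 / inner_days - 1.0 / outer_days)

# Earth (365.25 days) and Mars (687 days):
print(round(synodic_period(365.25, 687)))  # prints 780
```

About 780 days, or roughly 26 months, which is why Mars missions cluster every couple of years.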

The landing—assuming they survive—moves science and exploration back to the fore. It won’t be the same as the Apollo program. The vagaries of orbital mechanics mean that the first Mars missions won’t be able to pack up and leave after a day or so, as Apollo 11 did. Instead, they’ll be stuck for weeks, even months. That’s plenty of time to get the lay of the land, to do proper scientific experiments, to explore from ground level, and maybe even to find evidence of Martian life.

The middle

In the second half of this century, assuming the first trips are successful, we can envision the second stage of Mars exploration. This is what we should have had for the moon around 1980; the most optimistic projections from days gone by (Zubrin’s Mars Direct, for example) put it on Mars around the present day. Here, we’ve moved into a semi-permanent or permanent presence on Mars for scientific purposes, a bit like Antarctica today. Shortly after that, it’s not hard to envision the first true colonists.

Both of these groups will face the same troubles. Stories set in this time would be of building, expanding, and learning to live together. Mars is actively hostile to humans, and this stage sees it becoming a source of environmental conflict, an outside pressure acting against the protagonists. Antarctica, again, is a good analogy, but so are the stories of the first Europeans to settle in America.

The trip to Mars won’t get any shorter (barring leaps in propulsion technology), so it’s still like crossing the Atlantic a few centuries ago. The transportation will likely be a bit roomier, although it might also carry more people, offsetting the additional capacity. The psychological implications exist as before, but it’s reasonable to gloss over them in a story that doesn’t want to focus on them.

On the Red Planet itself, interpersonal conflicts can develop. Disasters—the Martian dust storm is a popular one—can strike. If there is native life in your version of Mars, then studying it becomes a priority. (Protecting it or even destroying it can also be a theme.) And, in a space opera setting, this can be the perfect time to inject an alien artifact into the mix.

Generally speaking, the second stage of Mars exploration, as a human outpost with a continued presence, is the first step in a kind of literary terraforming. By making Mars a setting, rather than a destination, the journey is made less important, and the world becomes the focus.

A century of settlement

Assuming our somewhat optimistic timeline, the 22nd century would be the time of the land grab. Propulsion or other advances at home make the interplanetary trip cheaper, safer, and more accessible. As a result, more people have the ability to venture forth. Our analogy is now America, whether the early days of colonization in the 17th century or the westward push of manifest destiny in the 19th.

In this time, as Mars becomes a more permanent human settlement, a new crop of plot hooks emerges. Social sciences become important once again. Religion and government, including self-government, would be on everyone’s minds. Offshoot towns might spring up.

And then we get to the harder sciences, particularly biology. Once people are living significant portions of their lives on a different planet, they’ll be growing their own food. They’ll be dying, their bodies the first to be buried in Martian soil. And they’ll be reproducing.

The Martian environment will shape every living thing born there, and we simply don’t know how. The lower gravity, the higher radiation, the protective enclosure necessary for survival, how will these affect a child? It won’t be an immediate change, for sure, but the second or third generation to be born on Mars might not be able to visit the birthplace of humanity. Human beings would truly split into two races—a distinction that would go far beyond mere black and white—and the word Martian would take on a new meaning.

Mars remains just as hostile as before, but it’s a known danger now. It’s the wilderness. It’s a whole world awaiting human eyes and boots.

Deeper and deeper

As time goes by, and as Mars becomes more and more inhabited, the natural conclusion is that we would try to make it more habitable. In other words, terraforming. That’s been a presence in science fiction for decades; one of the classics is Kim Stanley Robinson’s Mars trilogy, starting with Red Mars.

In the far future, call it about 200 years from now, Mars can truly begin to become a second planet for humanity. At this point, people would live their whole lives there, never once leaving. Towns and cities could expand, and an ultimate goal might arise: planetary independence.

But the terraforming is the big deal in this late time. Even the best guesses make this a millennia-long process, but the first steps can begin once enough people want them to. Thickening the atmosphere, raising the worldwide temperature, getting water to flow in more than the salty tears NASA announced on September 28, these will all take longer than a human lifetime, even granting extensive life-lengthening processes that might be available to future medicine.

For stories set in this time, Mars can again become a backdrop, the set upon which your story will take place. The later the setting, the more Earth-like the world becomes, and the less important it is that you’re on Mars.

The problems these people would face are the same as always. Racial tensions between Earthlings and Martians. The perils of travel in a still-hostile land. The scientific implications of changing an entire world. Everything to do with building a new society. And the list goes on, limited only by your imagination.

Look up

Through the failings of our leaders, the dream of Mars has been delayed. But all is not lost. We can go there in our minds, in the visuals of film, the words of fiction. What we might find when we arrive, no one can say. The future is what we make of it, and that is never more true than when you’re writing a story set in it.