First glance: C++17, part 3

It’s not all good in C++ land. Over the past two posts, we’ve seen some of the great new features being added in next year’s update to the standard, but there are a few things that just didn’t make the cut. For some, that might be good. For others, it’s a shame.

Concepts

Concepts have been a hot topic among C++ insiders for over a decade. At their core, they’re a kind of addition to the template system that would allow a programmer to specify that a template parameter must meet certain conditions. For example, a parameter must be a type that is comparable or iterable, because the function of the template depends on such behaviors.

The STL already uses concepts behind the scenes, but only as a prose description; adding support for them to the language proper has been a goal that keeps receding into the future, like strong AI or fusion power. Some had hoped they’d be ready for C++11, but that obviously didn’t happen. A few held out for C++14, but that came and went, too. And now C++17 has shattered the concept dream yet again. Mostly, that’s because nobody can quite agree on what they should look like and how they should work under the hood. As integral as they will be, these are no small disagreements.
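Until concepts proper arrive, the same kind of constraint can be faked with SFINAE tricks. Here’s a minimal sketch of the idea; the `triple` function and its addability requirement are my own illustration, not anything from a proposal:

```cpp
#include <type_traits>
#include <utility>

// A poor man's "concept": require that T supports operator+.
// The trailing decltype is the constraint. If T + T isn't a valid
// expression, this template silently drops out of overload
// resolution (SFINAE) instead of producing a wall of errors.
template <typename T,
          typename = decltype(std::declval<T>() + std::declval<T>())>
T triple(T x) { return x + x + x; }
```

Real concepts would let us state that requirement directly and get a readable diagnostic when it isn’t met, instead of burying it in a `decltype`.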

Modules

Most “modern” languages have some sort of module system. In Python, for instance, you can say import numpy, and then NumPy is right there, ready to be used. Java, C#, JavaScript, and many others have similar functionality, often with near-identical syntax.

But C++ doesn’t. It inherited C’s “module” system: header files and the #include directive. But #include relies on the preprocessor, and a lot of people don’t like that. They want something better, not because it’s the hip thing to do, but because it has legitimate benefits over the older method. (Basically, if the C preprocessor would just go away, everyone would be a lot better off. Alas, there are technical reasons why it can’t…yet.)

Modules were to be something like in other languages. The reason they haven’t made the cut for C++17 is because there are two main proposals, neither truly compatible with the other, but both with their supporters. It’s almost a partisan thing, except that the C++ Standards Committee is far more professional than Congress. But until they get their differences sorted out, modules are off the table, and the preprocessor lives (or limps) on.

Coroutines and future.then

These fit together a bit, because they both tie in with the increased focus on concurrency. With multicore systems everywhere, threading and IPC are both more and less important than ever. A system with multiple cores can run more than one bit of code at a time, and that can give us a tremendous boost in speed. But that’s at the cost of increased complexity, as anyone who’s ever tried programming a threaded application can tell you.

C++, since its 2011 Great Leap Forward, has support for concurrency. And, as usual, it gives you more than one way to do it. You have the traditional thread-based approach in std::thread, mutex, etc., but then there’s also the fancier asynchronous set of promise, future, and async.
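As a quick refresher, the asynchronous style looks something like this minimal sketch (`slow_square` is a stand-in for any long-running computation):

```cpp
#include <future>

int slow_square(int x) { return x * x; }

int demo_async() {
    // Kick off the work, possibly on another thread...
    std::future<int> f = std::async(std::launch::async, slow_square, 7);
    // ...do other things here, then block until the result is ready.
    return f.get();
}
```

The future/promise machinery hides the thread management, which is exactly where the missing pieces below would slot in.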

One thing C++ doesn’t have, however, is the coroutine. A function can’t just pause in the middle and resume where it left off, as done by Python’s yield keyword. But that doesn’t mean there aren’t proposals. Yet again, it’s the case that two varieties exist, and we’re waiting for a consensus. Maybe in 2020.

Related to coroutines is the continuation, something familiar to programmers of Lisp and Scheme. The C++ way to support these is with future.then(), a method on a std::future object that invokes a given function once the future is “ready”, i.e., when it’s done doing whatever it had been created to do. More calls to then() can then (sorry!) be added, creating a whole chain of actions that are done sequentially yet asynchronously.
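Under the Concurrency TS proposal, such a chain might look something like this. (This is a sketch of the proposed interface only, not valid standard C++17; `fetch_data`, `parse`, and `render` are hypothetical functions.)

```cpp
// Proposed interface only — std::future has no then() in C++17.
auto f = async(fetch_data)                             // start the work
    .then([](auto fut) { return parse(fut.get()); })   // runs when ready
    .then([](auto fut) { return render(fut.get()); }); // and then this
```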

Why didn’t then() make it? It’s a little hard to say, but it seems that the prevailing opinion is that it needs to be added in the company of other concurrency-related features, possibly including coroutines or Microsoft’s await.

Unified call syntax

From what I’ve read, this one might be the most controversial proposal for C++, so it’s no surprise that it was passed over for inclusion in C++17. Right now, there are two ways to call a function in the language. If it’s a free function or some callable object, you write something like f(a, b, c), just like you always have. But member functions are different. With them, the syntax is o.f(a, b, c) for references, o->f(a, b, c) for pointers. But that makes it hard to write generic code that doesn’t care about this distinction.

One option is to extend the member function syntax so that o.f() can fall back on f(o) if the object o doesn’t have a method f. The converse is to let f(o) instead try to call o.f().

The latter form is more familiar to C++ coders. It’s basically how Modern C++’s std::begin and end work. The former, however, is a close match to how languages like Python define methods. Problem is, the two are mutually incompatible, so we have to pick one if we want a unified call syntax.
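That std::begin pattern, for reference, is just an overload set: a free function that defers to a member when one exists. A sketch, with `my_begin` and `Box` as illustrative names:

```cpp
#include <cstddef>

struct Box {
    int data[3] = {1, 2, 3};
    int* begin() { return data; }   // a member begin()...
};

// ...which the free function simply forwards to,
template <typename C>
auto my_begin(C& c) -> decltype(c.begin()) { return c.begin(); }

// while raw arrays, which can't have members, get their own overload.
template <typename T, std::size_t N>
T* my_begin(T (&arr)[N]) { return arr; }
```

A unified call syntax would make this boilerplate unnecessary: f(o) would just find o.f() on its own.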

But do we? The arguments against both proposals make some good points. Either option will make parsing (both by the compiler and in the programmer’s head) much more complex. Argument-dependent lookup is already a difficult problem; this only makes it worse. And the more I think about it, the less I’m sure that we need it.

Compile-time reflection

This, on the other hand, would be a godsend. Reflection in Java and C# lets you peer into an object at run-time, dynamically accessing its methods and generally poking around. In C++, that’s pretty much impossible. Thanks to templates, copy elision, proxy classes, binders, and a host of other things, run-time reflection simply cannot be done. That’s unfortunate, but it’s the price we pay for the unrivaled speed and power of a native language.

We could, however, get reflection in the compile-time stage. That’s not beyond the realm of possibility, and it’s far from useless, thanks to template metaprogramming. So a few people have submitted proposals to add compile-time reflection capabilities to C++. None of them made the cut for C++17, though. Granted, they’re still in the early stages, and there are a lot of wrinkles that need ironing out. Well, they’ve got three (or maybe just two) years to do it, so here’s hoping.
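For what it’s worth, a sliver of compile-time introspection already exists in the type_traits header; the proposals aim well beyond yes/no queries like these (Widget is my own example type):

```cpp
#include <type_traits>

struct Widget { int id; double weight; };

// About all the "reflection" C++ offers today: asking yes/no
// questions about a type, answered entirely at compile time.
static_assert(std::is_class<Widget>::value, "Widget is a class type");
static_assert(!std::is_polymorphic<Widget>::value, "no virtual functions");
constexpr bool widget_is_trivial = std::is_trivial<Widget>::value;
```

Full reflection would let us enumerate Widget’s members by name, something no amount of template trickery can do today.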

And that’s all

C++17 may not be as earth-shattering as C++11 was, but it is a major update to the world’s biggest programming language. (Biggest in sheer size and scope, mind you, not in usage.) And with the new, faster release schedule, it sets the stage for an exciting future. Of course, we’ll have to wait for “C++Next” to see how that holds up, but we’re off to a great start.

Magic and tech: art

Art is another one of those things that makes us human, and in more than one sense: some of the earliest evidence for human habitation comes in the form of artwork such as cave drawings or inscribed shapes on animal bones. As much as I hate to admit it (I failed art class in high school), we are artistic beings.

And art—specifically the visual arts such as painting, sculpture, etc.—has progressed through the ages. It has taken advantage of technological progress. Thus, there’s no reason why it wouldn’t also be affected by the development of magic. Although it may seem odd to consider art and science so intertwined, it’s not really that far out there.

The real way

Art history is practically a restatement of the history of materials. That’s our human nature coming out; almost the first thing we do with a newly developed article of clothing, for instance, is draw on it, or paint it, or dye it. Today, we’ve got fancy synthetics colored in thousands of different hues, but even our ancestors could do some remarkable things. Look at some of those Renaissance paintings if you don’t believe me.

What they had to work with was…not the same as what we use. Many of their paints and dyes were derived from plant or animal products, with a few popular pigments coming from minerals such as ochre. Their instruments were equally primitive. Pencils weren’t invented until comparatively recently, brushes were made from real animal hair (requiring a real animal to provide it), and those fancy feather quills we only use nowadays for weddings and The Price Is Right were once the primary Western tool for writing in ink.

For “3D” artwork, the situation was little better. Today, we have things like CNC mills and techniques to move mountains of metal or marble, but our ancestors made some of the most impressive monuments and structures in the world with little more than hammers and chisels. (In the Americas, they even built pyramids without metal tools. I couldn’t build a pyramid like that in Minecraft!)

Can magic help?

How would magic advance the world of art? Our usual approach of balls of stored elemental energy won’t do much, to be honest, but there is one way they could help, so we’ll get that out of the way first. Lighting has been a problem forever; getting it right is one of the hardest parts of a modern media production. (Supposedly, this is one of the reasons why the next season of Game of Thrones is delayed.) But we’ve already stated that magic can give us better artificial lights. Give them to artists, and you instantly make portraits that much better.

Other improvements are a little less obvious. Many mages will have an easy path to artistry, as the study of magic is as much art as science. It requires observational skills, creativity, and commitment—all the same qualities a good artist needs. And they can use personal spells to aid them. What artist wouldn’t want photographic memory, for example?

The materials will also benefit from the arcane, as we have seen. The earlier advent of chemistry means, among other things, better pigments. Upgraded tools allow for more exquisite and exotic sculpture. With the advanced crucibles and furnaces magic brings, our magical realm might see a boom in the casting of “harder” metals like iron or steel. Magical technology may also bring an increased emphasis on artistic architecture. All in all, the medieval realm will start to look a lot more like the Renaissance, if not more modern.

That’s not even including the entirely different styles of art magic makes possible. Maybe pyrotechnics displays (achieved through fire spells) become popular. Etching via jets of water is a modern invention, but the right system of magic might allow it centuries earlier. Welded sculptures? Why not? You can even posit a “magical” photographic apparatus, moving the whole genre of picture-taking several hundred years into the past. And it’s a small step from recording still images to recording a bunch of still images in succession, then playing them back at full speed, especially if you get a helping hand from a wizard.

Yes, I’m talking about movies. In a society outwardly based on medieval times. It’s a complex problem, but it’s not entirely infeasible. All you really need are two things. First, a projector, which magic can easily provide. (Hint: a magic light and a force-powered motor.) Second, film. That one’s a bit harder, but it only took a few decades for inventors to go from stills to moving pictures. There’s no reason why wizards couldn’t do the same thing, although they may be held up by the need for chemical advances to make a translucent photographic medium.

It’s magic

Magic is already art, but that doesn’t mean it can’t make the lives of artists easier and more interesting. It’s often been asked what a famous artist of the past (e.g., Leonardo da Vinci or Michelangelo) could create if they were given today’s tools. In a magical society, we can come one step closer to answering that question. And that’s with a low-magic setting. Imagine what a sword-and-sorcery mage-artist could accomplish.

Let’s make a language, part 18a: Geography (Intro)

The world is a very big place, and it contains a great many things. Even before you start counting those that are living—from plants and animals down to microbes—you can find a need for hundreds or thousands of words. So that’s what we’ll do in this entry. We’ll look at the natural world, but we’ll avoid talking about its flora and fauna for the moment. Instead, the focus will be on what we might call the natural geography. The lay of the land, if you will.

The world itself

For us, “world” is virtually synonymous with “earth” and “planet”. But that’s an artifact of our high-tech society. In older days, these concepts were pretty separate. The earth was the surface, the ground—the terra firma. Planets were wandering stars in the sky, so named because they seemed to change their positions from night to night, relative to the “fixed” background stars. And the world was everything that could be observed, closer to what we might call the “universe” or “cosmos”.

Within this definition of the world, many cultures (and thus languages) create a three-way distinction between the earth, sea, and sky. Earth is solid, dry land, where people live and work and farm and hunt. Sea is the open water, from the Mediterranean to the Pacific, but not necessarily rivers and lakes; it’s the place where man cannot live. And the sky is the vast dome above, home of the sun, moon, and stars, and often whatever deity or deities the speakers worship. In pre-flight cultures, it tends to have dreamlike connotations, due to its effective inaccessibility. People can visit the sea, even if they can’t stay there, but the sky is always out of our reach.

Here, the details of your speakers’ world come into play. If they’re on Earth, then they’ll probably follow this terrestrial model to some extent. Aliens, however, will tailor their language to their surroundings. A world without a large moon like ours likely won’t have a word for “moon”; ancient Martians, for instance, might consider Phobos and Deimos nothing more than faster planets. Those aliens lucky enough to have multiple moons, on the other hand, will develop a larger vocabulary for them. The same goes for other astronomical phenomena, from the sun to the galaxy.

Land and sea

Descending to that part of the world we can reach, we find a bounty of potential words. There’s flat land, in the form of plains and valleys and fields. More rugged are the hills and mountains, distinguished with separate words in many languages; hills are really not much more than small mountains, but few languages conflate the two. Abundant plant life can create forests or, in some places, jungles, and a culture adapted to either of these areas will likely make far finer distinctions than we do. On the opposite end are the dry deserts, which aren’t necessarily hot (the Gobi is a cold desert, as is Antarctica). These don’t seem truly hospitable for life, but desert cultures exist all across the globe, from the Bedouins of the Middle East to the natives of the American Southwest, and they’ll always seek out sources of water.

Fresh water is most evident in two forms. We have the static lakes and the moving rivers as the most generic descriptors, but they’re far from all there is. Ponds are small lakes, for example, and swamps are a bit like a combination of lake and land. Rivers, owing to their huge importance for travel in past ages, get a sizable list: streams, creeks, brooks, and so on. All of these have slightly different meanings, but those can vary between dialects: what I call a creek, someone in another state may deem a brook. And the shades of meaning don’t cross language barriers, either, but a culture depending on moving bodies of water will tend to come up with quite a few words describing different kinds of them.

In another of the grand cycles of life, fresh water spills into the seas. Now, English has two words for salty bodies of water, “sea” and “ocean”, but that doesn’t mean they’re two separate things. Many languages have only one word covering both, and that’s fine. Besides, a landlocked language won’t really need to spend two valuable words on something that might as well not exist.

In addition to the broad range of terrain, terms also exist for smaller features. Caves, beaches, waterfalls, islands, and cliffs are just some of the things we name. Each one tends to be distinctive, in that speakers of a language have a set image in their minds of the “ideal” cave or bluff or whatever. That ideal will be different for different people, of course, but few would, for instance, think of the fjords of Norway when imagining a beach.

Talking about the weather

The earth and sea are, for the most part, unchanging. Scientifically, we know that’s not the case, but it’s close enough for linguistic purposes. The weather, however, is anything but static. (Don’t like the weather in {insert place name here}? Wait five minutes.) Languages have lots of ways to talk about the weather, and not just so that speakers will have a default topic for conversation.

Clouds are the most visible sign of a change in weather, but the wind can also tell you what’s to come. And for reasons that are probably obvious, there seems to be a trend: the worse the weather, the more ways a language has to talk about it. We can have a rain shower, a drizzle, maybe some sprinkles, or the far more terrible torrent, deluge, or flood. Thunder, lightning, snow (in places that have it), and more also get in on the weather words. In some locales, you can add in the tornado (or whirlwind) and hurricane to that list.

Culture and geography

Hurricane is a good example of geographical borrowing. It refers to a storm that can only form in the tropics, generally moving westward. That’s why the Spanish had to borrow a name from Caribbean natives—it was something they never really knew. True, hurricanes can strike Spain. Hurricane Vince made landfall in 2005, but 2005 was a weird year for weather all around, and there’s no real evidence that medieval and Renaissance Spaniards had ever seen a hurricane.

And that’s an important point for conlangers. Speakers of languages don’t exist in a vacuum, but few languages ever achieve the size of English or Spanish. Most are more limited in area, and their vocabulary will reflect that. We’ll see it more in future parts looking at flora and fauna, but it’s easy to illustrate in geography, too, as the hurricane example shows.

People living in a land that doesn’t have some geographical or meteorological feature likely won’t have a native word for it. The Spanish didn’t have a word for a hurricane. England never experienced a seasonal change in prevailing winds, so English had to borrow the word monsoon. Europe doesn’t have a lot of tectonic activity, but Japan does, so they’re the ones that came up with tsunami. The fjords of Scandinavia are defining features, but ones specific to that region, so we use the local name for them.

Conversely, those things a culture experiences more often will gain the focus of its wordsmiths. It says something about the English speaker’s native climate that there are so many ways to describe rain. Eskimo words for “snow” are a running linguistic joke, but there’s a kernel of truth in there. And English’s history had plenty of snow, otherwise we wouldn’t have flurries, flakes, and blizzards.

Time is also a factor in which lexical elements a language will have. Some finer distinctions require a certain level of scientific advancement. The cloud types—cumulus, nimbus, cirrus, etc.—were only really named two centuries ago, and they used terms borrowed from Latin. That doesn’t mean no one noticed the difference between puffy clouds and the grim deck of a nimbostratus before 1800, just that there was never a concerted effort to adopt fixed names for them. The same can be said for most other classification schemes.

Weather verbs

Finally, the weather deserves a second look, because it’s the reason for a very special set of verbs. In English, we might say, “It’s raining.” Other languages use an impersonal verb in this situation, with no explicit subject. (Our example conlang Ardari uses a concord marker of -y in this case.) For whatever reason, weather verbs are some of the most likely to appear in a form like this.

Perhaps it’s because the weather is beyond anyone’s control. It’s a force of nature. There’s no subject making it rain. It’s just there. But it’s one more little thing to consider. How does your conlang talk about the weather? You need to know, because how else are you going to start a conversation with a stranger?

First glance: C++17, part 2

Last time, we got a glimpse of what the future of C++ will look like from the language perspective. But programming isn’t just about the language, so here are some of the highlights of the C++ Standard Library. It’s getting a bit of a makeover, as you’ll see, but not enough to cover its roots.

Variant values

Without even looking through the whole list of changes and additions, I already knew this was the big one, at least for me. The variant is a type-safe (or tagged) union. It’s an object that holds a value of one type chosen from a compile-time list. You know, like C’s union. Except variant keeps track of what kind of value it’s holding at the moment, and it’ll stop you from doing bad things:

variant<int, double> v;
v = 42;

// This works...
auto w = get<int>(v);    // w = 42

// ...but this one throws std::bad_variant_access
auto u = get<double>(v); // nope!

Optional values

This one’s similar, and it was supposed to be in C++14. An optional either holds a value or it doesn’t. In that, it’s like Haskell’s Maybe. You can use it to hold the result of a function that can fail, where it doubles as an error signal. When converted to a boolean (as in an if), it acts as true if it contains a value, or false if it doesn’t. Not huge, but a bit of a time-saver:

optional<unsigned int> i;

// some function that can return an int or fail
i = f();

if (i)
    // work with a proper value
else
    // handle an error condition
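The real time-saver is value_or, which collapses that whole check into one call. A sketch, with find_even as my own toy example:

```cpp
#include <optional>

// Returns the input if it's even, or "nothing" if it isn't.
std::optional<int> find_even(int x) {
    if (x % 2 == 0) return x;
    return std::nullopt;
}

// value_or() supplies a fallback when the optional is empty.
int even_or_default(int x) { return find_even(x).value_or(-1); }
```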

Any values

The third of this little trinity is any, an object that can—as you might imagine—hold a value of any type. You’re expected to access the value through the any_cast function, which will throw an exception if you try the wrong type. It’s not quite a dynamic variable, but it’s pretty close, and it’ll likely be faster.
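In use, it looks something like this sketch:

```cpp
#include <any>
#include <string>

bool demo_any() {
    std::any a = 42;
    bool ok = std::any_cast<int>(a) == 42;  // right type: fine
    a = std::string("hello");               // the held type can change
    try {
        std::any_cast<int>(a);              // wrong type now...
        return false;
    } catch (const std::bad_any_cast&) {    // ...so this throws
        return ok;
    }
}
```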

std::apply

If you’ve ever used JavaScript, you know about its apply method. Well, C++ will soon have something similar, but it’s a free function. It calls a function (or object or lambda or whatever) with a tuple of arguments, expanding the tuple as if it were a parameter pack.
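A minimal sketch of the idea (add3 is just a sample function):

```cpp
#include <tuple>

int add3(int a, int b, int c) { return a + b + c; }

// std::apply unpacks the tuple into add3's three parameters,
// as if we had written add3(1, 2, 3) directly.
int demo_apply() {
    return std::apply(add3, std::make_tuple(1, 2, 3));
}
```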

Searchers

Yes, C++ lacked a standard way of searching a sequence for a value until now. Rather, it lacked a general way of searching for a value. Some searches can be made faster by using a different algorithm, and that’s how C++17 upgrades std::search. And they’re nice enough to give you a couple to get started: boyer_moore_searcher and boyer_moore_horspool_searcher. No points for guessing which algorithms those use.
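Usage is just a matter of handing std::search a searcher object. A sketch:

```cpp
#include <algorithm>
#include <functional>
#include <string>

// Boyer-Moore precomputes skip tables from the needle,
// then scans the haystack faster than a naive search.
bool contains(const std::string& hay, const std::string& needle) {
    auto it = std::search(
        hay.begin(), hay.end(),
        std::boyer_moore_searcher(needle.begin(), needle.end()));
    return it != hay.end();
}
```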

Clamping values

It’s common to need to clamp a value to within certain bounds, but programming languages don’t seem to realize this. Libraries have functions for this, but languages rarely do. Well, C++ finally did it with std::clamp. That’ll instantly shave off 50% of the lines of code working with lighting and colors, and the rest of us will find some way to benefit.
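It’s a one-liner, as in this color-channel sketch:

```cpp
#include <algorithm>

// Force a color channel into the usual 0-255 range:
// too low snaps to 0, too high snaps to 255.
int to_channel(int v) { return std::clamp(v, 0, 255); }
```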

Mathematical special functions

C++ is commonly used for numeric computation, but this set of functions is something else. They likely won’t be of interest to most programmers, but if you ever need a quick beta function or exponential integral, C++17 has got you covered.

Filesystem

Alright, I’ll admit, I was holding back. Everything above is great, but the real jewel in the C++17 Standard Library is the Filesystem library. If you’ve ever used Boost.Filesystem, you’re in luck! It’s the same thing, really, but it’s now standard. So everybody gets to use it. Files, paths, directories, copying, moving, deleting…it’s all here. It certainly took long enough.
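Even the pure path manipulation is a nice win on its own. A sketch (note that older compilers shipped this under std::experimental::filesystem rather than std::filesystem):

```cpp
#include <filesystem>
#include <string>

namespace fs = std::filesystem;

// Pull the bare file name (no directory, no extension) out of a path.
std::string stem_of(const std::string& p) {
    return fs::path(p).stem().string();
}
```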

Still not done

That’s not nearly everything, but those are my favorite parts. In next week’s finale, we’ll switch to the lowlights. We’ll see those features that just didn’t make the cut.

First glance: C++17, part 1

C++ is a language that is about as old as I am. Seriously. It was first called “C++” in December 1983, two months after I was born, although it had been in the works for a few years before that. So it’s an old language, but that doesn’t mean it’s obsolete or dead. No, far from it. In fact, the latest update to the language, called C++17, is scheduled for release in—you guessed it—2017, i.e., next year.

Why is that important? Well, if you know the history of C++, you know the story of its standardization. The first true standard only came out in 1998, and it was only then that all the template goodness was finally available to all. (Let’s all try to imagine Visual C++ 6 never happened.) Five years later, in 2003, we got a slight update that didn’t do much more than fill in a few blanks. Really, for over a decade, C++ was essentially frozen in time, and that was a problem. It missed the dot-com boom and the Java explosion, and the growth of the Internet and dynamic scripting languages seemed to relegate it to the dreaded “legacy” role.

Finally, after what seemed like an eternity, C++11 came about. (It was so delayed that its original codename was C++0x, because everyone thought it’d be out before 2010.) And it was amazing. It was such a revolution that coders speak of two different languages: C++ and Modern C++. Three years later, C++14 added in a few new bits, but it was more evolutionary than revolutionary.

What C++14 did, though, was prepare programmers for a faster release schedule. Now, we’ve seen how disastrous that has been for projects like Firefox, but hear them out. Instead of waiting forever for all the dust to settle and a new language standard to form, they want to do things differently, and C++17 will be their first shot.

C++ is now built on a model that isn’t too different from version control systems. There’s a stable trunk (standard C++, of whatever vintage), and that’s the “main” language. Individual parts are built in what they call Technical Specifications, which are basically like Git branches. There’s one for the standard library, networking, filesystem support, and so on. These are largely independent of the standard, at least in development terms. When they’re mature enough, they’ll get merged into the next iteration of Standard C++. (Supposedly, that’ll be in 2019, but 2020 is far more likely.) But compilers are allowed—required, actually, as the language needs implementations before standardization—to support some of these early; these go under std::experimental until they’ve cooked long enough.

So C++17 is not exactly the complete overhaul of C++11, but neither is it the incremental improvement of C++14. It stands between the two, but it sets the stage for a future more in line with, say, JavaScript.

New features

I have neither the time nor the knowledge to go through each new feature added to C++17. Instead, I’ll touch on those I feel are most important and interesting. Some of these are available in current compilers. Others are in the planning stages. None of that matters as long as we stay in the realm of theory.

Fold expressions

Okay, I don’t care much for Haskell, but these look pretty cool. They take a parameter pack and reduce or fold it using some sort of operation, in the same way as Haskell’s foldl and foldr. Most of the binary operators can be used, which gives us some nifty effects. Here are a few basic examples:

// Returns true if all arguments are true
template <typename... Args>
bool all(Args... args) { return (... && args); }

// Returns true if *any* argument is true
template <typename... Args>
bool any(Args... args) { return (... || args); }

// Returns the sum of all arguments
template <typename... Args>
int sum(Args... args) { return (args + ... + 0); }

// Prints all values to cout (name references JS)
template <typename... Args>
void console_log(Args&&... args)
    { (std::cout << ... << args) << '\n'; }

Yeah, implementing any, all, and even a variadic logging function can now be done in one line. And any functional fan can tell you that’s only the beginning.

Structured bindings

Tuples were a nice addition to C++11, except that they’re not terribly useful. C++, remember, uses static typing, and the way tuples were added made that all too evident. But then there’s the library function std::tie. As its name suggests, one of its uses is to “wire up” a connection between a tuple and free variables. That can be used for a kind of destructuring assignment, as found in Python. But C++17 is going beyond that by giving this style of value binding its own syntax:

using Point3D = tuple<double, double, double>;

// This function gives us a point tuple...
Point3D doSomething() { /* ... */ }

// ...but we want individual X/Y/Z

// With std::tie, we have to do this:
// double x, y, z;
// std::tie(x,y,z) = doSomething();

// But C++17 will let us do it this way:
auto [x,y,z] = doSomething();

Even better: this works with arrays and pairs, and it’s a straight shot from there to any other kind of object. It’s a win all around, if you ask me.
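Pairs, in particular, make for tidy “value plus status” returns. A sketch, with divide as an illustrative function:

```cpp
#include <utility>

// Return the quotient and a did-it-work flag together.
std::pair<int, bool> divide(int a, int b) {
    if (b == 0) return {0, false};
    return {a / b, true};
}

int demo_bindings() {
    auto [q, ok] = divide(10, 2);  // unpack both in one declaration
    return ok ? q : -1;
}
```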

if initializers

This one’s less “Wow!” than “Finally!”, but it’s good to have. With C++17, you’ll be able to declare a variable inside the conditional of an if or switch, just like you’ve been able to do with (old-style) for loops for decades:

// getValue() stands in for whatever produces the value
if (int value = getValue(); value >= 0)
    // do stuff for positive/zero values
else
    // do stuff for negative values
    // Note: value is still in scope!

Again, not that big a deal, but anything that makes an overcomplicated language more consistent is for the best.

constexpr if

This was one of the later additions to the standard, and it doesn’t look like much, but it could be huge. If you’ve paid any attention to C++ at all in this decade, you know it now has a lot of compile-time functionality. Really, C++ is two separate languages at this point, the one you run and the one that runs while you compile.

That’s all thanks to templates, but there’s one big problem. Namely, you can’t use the run-time language features (like, say, if) based on information known only to the compile-time half. Languages like D solve this with “static” versions of these constructs, and C++17 gives us something like that with the constexpr if:

template<typename H, typename... Ts>
void f(H&& h, Ts&&... ts)
{
    doItTo(h);
    // Now, we need to doItTo all of the ts,
    // but what if there aren't any?
    // That's where constexpr if helps.
    if constexpr (sizeof...(ts) > 0)
        f(std::forward<Ts>(ts)...);
}
If implemented properly (and I trust that they’ll be able to do that), this will get rid of a ton of template metaprogramming overhead. For simple uses, it may be able to replace std::enable_if and tag dispatch, and novice C++ programmers will never need to learn how to pronounce SFINAE.
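As a small self-contained example of the dispatch it enables (stringify is my own illustration, not anything from the proposal):

```cpp
#include <string>
#include <type_traits>

// One template, two bodies: the branch not taken for a given T
// is discarded at compile time, so it doesn't even need to
// compile for that T.
template <typename T>
std::string stringify(const T& t) {
    if constexpr (std::is_arithmetic<T>::value)
        return std::to_string(t);   // numbers go this way...
    else
        return std::string(t);      // ...strings go this way
}
```

Doing the same with enable_if would take two separate overloads and a lot more squinting.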


Those are some of my favorite features that are on the table for C++17. In the next post, we’ll look at the changes to the standard library.

Building aliens – Evolution

Whether life is made from DNA, some sort of odd molecule, or binary data, it will be subject to evolution. That’s inherent in the definition of life. Everything living reproduces, and reproduction is the reason why evolution takes place. Knowing the how and the why of evolution can help you delve deeper into the creation of alien life.

How it happens

For life as we know it, evolution is the result of, basically, copying errors. DNA doesn’t replicate perfectly; there are always some bits that get flipped, or segments that are omitted or repeated. In that, our cells are a bit like an old record or CD player, skipping at the slightest bump. Sometimes, it knocks playback ahead, and you don’t get to hear a few seconds of your favorite song. Other times, it goes back, replaying the same snippet again. It’s the same for a strand of DNA.

Mutations, as these genetic alterations are called, happen for a variety of reasons. Maybe there was a glitch in the chemical reaction that produces the DNA replication. Perhaps a stray bit of radiation hit a base molecule at just the right time. (Digital organisms would not be immune to that one. Programs can crash due to bad memory, but also from cosmic rays—interstellar radiation—hitting the components. And as our processors and memory chips get ever smaller, the risk only increases.) Anything that can interrupt the reproduction process can be at fault, and there’s almost no way to predict what will happen on the base level.

Most of the time, these errors are harmless. A single base being swapped usually doesn’t do much by itself, although there are cases where it does. Our genetic code has built-in redundancy and error correction mechanisms to prevent this “drift” from causing too much harm. Single-celled organisms have a little more trouble, as they don’t have billions of copies of their genes lying around. They tend to bear the brunt of evolution, but it can be in their best interest, as anyone who knows about MRSA can attest.

A few larger errors (or a compounding of many smaller ones) can cause a greater change in an organism. That’s where natural selection comes in. Species adapt to their environments. All else being equal, those that are better adapted tend to reproduce more, thus ensuring their genes have a higher likelihood of passing on to further generations. Thus, evolution acts as a sort of feedback loop: beneficial mutations ensure their own survival, while harmful ones are stopped before they can get a foothold. Neutral mutations, however, can linger on, as they have little outward effect; it’s these that can give a species its variety, such as human hair and eye color.

How you can use it

Assuming current theories are anywhere close to correct, all life on Earth derives from some microbial organism that lived three or four billion years ago. Through evolution, everything from dogs to sharks to apple trees to, well, us came to be. There are a few open questions (What was that primordial organism? Is there a “shadow” biosphere? Etc.), but that’s the gist of it. And that tells us something important about alien life. If it exists, it’s probably going to work the same way. The Grays of Planet X, for example, would be related to everything native to their homeworld, but not to the aquatic beings of Planet Y. (Unless you count panspermia, but that’s another story.)

That does not mean that all life on a planet will look the same. How could it? A quick glance out your window should show you anywhere from ten to a thousand species, none of which are visibly alike, and that’s not counting the untold millions that we can’t see. Gut bacteria are necessary for life, and they’re also our ten-billionth cousins. Nobody would mistake a dog for a dogwood, but they both ultimately come from the same stock. So try to avoid the tired trope of “everything on this planet looks the same”.

On the other hand, the vagaries of evolution also mean that life on one planet probably won’t look like life on another. Sure, there may be broad similarities (physiology will be the subject of the next part of this series), but it’s highly unlikely that an alien world will have, say, lions or bears. (However, this doesn’t necessarily apply at microscopic scales, as there are fewer permutations.)


For worldbuilding, you’ll likely be most interested in the species level. That’s how we define humans, as well as many of the “higher” animals. We’re Homo sapiens, our faithful pets are Canis familiaris or Felis catus, and that nasty bug we picked up is Escherichia coli.

But closely related species share a genus, and this might be something to keep in mind, especially if you’re creating a…less-realistic race. Unfortunately for us, genus Homo doesn’t have any other (surviving) members; the Neanderthals, Homo erectus, and the “hobbits” of Flores Island were all wiped out millennia ago. But that doesn’t mean your world can’t have multiple intelligent species that are closely related. They can even interbreed.

Higher levels of classification (family, order, etc.) are less useful to the builder of worlds. The traits that members of these share are broader, like mammals’ method of live birth or the social patterns of the hominids. Really, everything above the genus is an implementation detail, as far as we’re concerned.


Now, back to natural selection. Species, as I’ve already said, adapt to their environments over time. We can see that in animals, plants, and any other organism you care to name. Fur changes color to provide camouflage, beaks alter their shape to better fit in nooks and crannies. Blood cells change to protect against malaria—but that leaves them more susceptible to sickle-cell anemia.

If an organism’s environment shifts, then that can render the adaptations useless. The most dramatic instances of this are impact events such as the one that killed the dinosaurs, but ice ages, “super” El Niños, and other climate change can destroy those species that find themselves no longer suited to their surroundings. And species are interconnected, so the loss of population in one can trigger the same in another that depends on it, and so on.


Much of this is background material for most aliens. The ones that are most interesting to the public at large are those that are intelligent, civilized. Like us, in other words.

We are not immune to natural selection. Far from it. But we have managed to short-circuit it to a degree. People with debilitating disorders can live long lives, potentially even reproducing and thus furthering their genetic lines. Adding to this is artificial selection, as we have performed on hundreds of plant and animal species. That’s how domestication works, as much for a wolf as for a grapevine. We take those individuals with the most desirable qualities and work things out so those are the ones that get to reproduce. It works, as attested by the vast array of dog breeds.

So aliens like us—in the sense of having civilization and technology—won’t be as beholden to their environment as their “lesser” relations. They won’t be bound to a specific climate, and they’ll be largely immune to the small shifts. Does that mean evolution stops?

Nope. We’re still evolving. It’s just that the effects haven’t really shown themselves that much. We’re taller than our ancestors, for example, because taller men and women are generally seen as more attractive. (A personal data point: I’m 6 feet tall, a full 12 inches taller than my mother, and my father was 5’8″. Not that that seems to make me any more attractive.) We live longer, but that’s more a function of medicine, hygiene, and diet, not so much genetics. Parts of us that have evolved relatively recently include Caucasian skin and adult lactose tolerance.

If our species continues to thrive, it will continue to evolve. One sci-fi favorite is space colonization, and that’s a case where evolution will make a difference. It won’t take too many generations before denizens of Mars have adapted to lower gravity, for instance. People living on rotating stations might learn to cope with the Coriolis forces they would constantly feel. It’s possible that there may come a time when there are living humans that cannot survive on their original homeworld.

And the same may be true for aliens. As an example, take Mass Effect’s quarians. In the third installment of the series, they can (if you play things right) return to their homeworld of Rannoch. But centuries of living as space nomads spoil the homecoming, as they find themselves poorly adapted to their species’ original environment. A race of many worlds will discover the same truth: evolution is unceasing.

On alliteration and assonance

When most people think about verse, they tend to think of rhyme first and foremost. Understandable, since that’s the defining quality of so much poetry. But there’s a whole other side of the word to explore, a front-end counterpart to the back-end rhyme.


Alliteration is the repetition of a sound at the beginning of a word, a mirror image to rhyming. It’s not quite as obvious these days, as rhyme and rhythm have won our hearts and minds, but it has an illustrious history. Some of the earliest Anglo-Saxon verse was composed using alliteration, as were epics from around the Western world. Classics such as “The Raven” and “Rime of the Ancient Mariner” have sections of alliterative verse, as do children’s nursery rhymes. Peter Piper probably needed something to catch the spit from all those P sounds. And who can forget all those old cartoons with hilariously alliterative newspaper headlines? Those were a thing, and they still are in places.

Echoes of alliteration are all around us. Like rhyme, the reason borders on the psychological. In oration, the beginning of the word tends to be more forceful than the end, more evocative. So punctuating your point with purpose (see what I did there?) helps to get your message stuck in the minds of your listeners. They can “latch on” to the repetition. Wikipedia’s article on alliteration uses King’s “I Have a Dream” speech as an example: “not by the color of their skin but by the content of their character.” Notice how the hard K sounds beginning each of the “core” words grab your attention.

To be alliterative, you don’t have to use the same sound at the beginning of every word. The rules of English simply can’t accommodate that. (Newspapers cheated by removing extraneous words such as “a” and “the”.) It’s the content words that are most important, especially the adjectives and nouns. However, alliteration tends to be stricter than rhyme in what’s considered the “same” sound. Voicing differences change the quality of the sound, so they’re out. Clusters are in the same boat. On the other hand, sometimes an unstressed syllable (like un- or a-) can be ignored for the purposes of alliteration.


Alliteration is concerned with consonant sounds. (I did it again!) Assonance is different; it’s all about the vowels. What’s more, it’s not limited to the beginnings of words. Rather, it’s a vowel sound repeated throughout a phrase or line of verse. Vowel rhyming can be considered a form of assonance, but it’s so much more than that.

Assonance pops up everywhere there are vowels, which means everywhere. It’s very well suited to small utterances, such as a single line of a song or a proverb. As with alliteration, it’s not an absolute requirement for all the vowels to be the same, but those that are need to be essentially identical. And it’s the content words that are most important. Schwas, ineffectual as they are, don’t even appear on the radar; a and the aren’t going to mess up assonance. But any other vowel is fair game, in English or whatever language you’re using.

In conlangs

Alliteration and assonance are perfectly usable in any context, and they can be made to fit any language. They might not be quite as permissive as rhyme, but they can have a greater lyrical effect when used properly. (And sparingly. Don’t overdo it.)

These literary devices work best in languages with patterns of stress. That stress can be fixed, but that narrows your options slightly. Inflectional languages with fixed final stress are probably the worst for alliteration, while initial stress gives the most “punch”. For assonance, it’s not so vital, but you want to make sure your vowels aren’t being forced to fit a pattern.

Both alliteration and assonance are easiest to accomplish in languages with smaller phonemic inventories. That shouldn’t be surprising. It’s far less work to find two words that both begin with a P if your only other options are B, D, K, and S. With these smaller sound sets (are you kidding me?), you can even create more complex styles of alliterative verse. Imagine a CV-type language with interwoven alliteration patterns, where the first and third words of a line start with one sound, while the second and fourth begin with a different one.

The other end of the spectrum holds English and most European languages, and it’s less amenable. You need lots of words, or you’ll have to get some help from stress and syllabics. That’s how we can have alliterative English: by ignoring those tiny, unstressed prefixes that pop up everywhere. It’s possible to make it work, but you have to try harder. But trying is what this is all about.

On game jams

This post is going up in August, and that’s the month for the summer version of everyone’s favorite game jam, Ludum Dare. But I’m writing this at the end of June, when there’s still a bit of drama regarding whether the competition will even take place. If it does, then that’s great. If not, well, that’s too bad. Neither outcome affects the substance of this text.

Ludum Dare isn’t the only game jam on the market, anyway. It’s just the most popular. But all of them have a few things in common. They’re competitive programming, in the sense of writing a program that follows certain rules (such as a theme) in a certain time—two or three days, for example, or a week—with the results being judged and winners declared. In this, they’re a little more serious than something like NaNoWriMo.

And it’s not for me. Now, that’s just my opinion. I’m not saying game jams are a bad thing in general, nor am I casting aspersions at LD in particular. I simply don’t feel that something like this fits my coding style. It’s the same thing with NaNoWriMo, actually. I’ve never truly “competed” in it, though I have followed along with the “write 50,000 words in November” guideline. Again, that’s because it’s not my style.

One reason is shyness. I don’t want people to see my unfinished work. I’m afraid of what they’d say. Another reason is the schedule, and that’s far more of a factor for a three-day game jam than a month-long writing exercise. I don’t think I could stand to code for the better part of 48 or 72 hours. Call it flightiness or a poor attention span, but I can’t code (or write) for hours on end. I have to take a break and do something else for a while.

Finally, there are the rules themselves. I don’t like rules intruding on my creative expression. In my view, trying to direct art of any kind is a waste of time. I have my own ideas and themes, thank you very much. All I need from you is the gentle nudge to get me to put them into action. That’s why I do a kind of “shadow” NaNoWriMo, instead of participating in the “real thing”. It seems antisocial, but I feel it’s a better use of my time and effort. What’s important is the goal you set for yourself. Climbing into a straitjacket to achieve it just doesn’t appeal to me.

But I do see why others look at game jams differently. They are that nudge, that impetus that helps us overcome our writing (or coding) inertia. And that is a noble enough purpose. I doubt I’ll join the next Ludum Dare or whatever, but I won’t begrudge the existence of the game jam. It does what it needs to do: it gets people to express themselves. It gets them to write code when they otherwise wouldn’t dare. There’s nothing bad about that, even if it isn’t my thing.

Summer Reading List 2016: halfway home

We’re halfway through the official summer, about two-thirds of the way done with the unofficial season we’re using for our Summer Reading List. I don’t know about you, but I’ve got two out of three.


  • Title: Shadows of Self
  • Author: Brandon Sanderson
  • Genre: Fantasy
  • Year: 2015

This is the fifth book in Brandon Sanderson’s Mistborn series, the second of the second trilogy. It’s a pretty good one, though I feel it’s a bit weaker than some of the previous four. Compared to its predecessor, The Alloy of Law, it’s a bit lighter on the action, but far heavier on the worldbuilding. That’s fine by me. If you haven’t noticed, I love worldbuilding, and Sanderson is one of the best there is when it comes to it. I’ll definitely give this one high marks, and I can’t wait to read the trilogy’s finale, The Bands of Mourning. (It’s already out, by the way.)


  • Title: A Million Years in a Day: A Curious History of Everyday Life
  • Author: Greg Jenner
  • Genre: History
  • Year: 2016

I found this one not too long ago (somewhere…), and I’m glad I did. It’s a fun look back through the history of everyday things and activities, following and relating to one modern man’s Saturday. I love history, and I especially love those smaller, less popular bits of it. History is not all about wars and religion and politics and race. It’s about people living their lives, and those lives never really change that much. And that’s the message of this book. Definitely worth a look, especially from a worldbuilding perspective. (Funny how that works out, huh?)

And one more…

I haven’t decided what the final book on the list will be, but I’ve got another month, so I should be okay. I hope you’re playing along at home, and that you’re having fun doing it.