Governments in fiction

I’ll continue this ongoing not-quite-series for another week today by looking at the idea of government and how it is realized in fiction. This time, we almost have to lump science fiction and fantasy together, simply because they share so many similarities. Most important among those is the fact that they are often (but not always) set in worlds besides our own, in societies besides our own. And a society needs a government of some sort; even true anarchy is, in effect, a form of government.

The rules of rule

Government is as varied as anything in this world. In the modern world, we have representative democracies (as the US is intended to be); parliamentary republics (much of Europe); monarchies (Thailand, Saudi Arabia); juntas (Burma, aka Myanmar); theocracies (ISIS, if you consider them an actual government); and dysfunctional anarchies (Somalia). Go back through history, and you find even more possibilities.

In fiction, though, many of the finer distinctions are lost. Much fantasy tends to go with the best-known examples out of the Middle Ages: feudal monarchy, merchant republic, and a distant and inscrutable theocracy. Science fiction set in the future or on alien worlds prefers something more modern: democracy, corporate oligarchy, utopia (either libertarian or socialist), or a distant and inscrutable hive mind.

But there’s more to it than that.

Who rules?

That’s a simple question, but a profound one. Who’s in charge? We have options:

  • A single person: This is true of monarchies, dictatorships, and many theocracies. A single ruler is well-attested in history, from hunter-gatherer chieftains to the pharaohs of Egypt, the Chinese emperors, the French kings, and the Arabian emirs.

  • A small group of people: Not necessarily a council, but more of a cabal. They’re usually unelected, and they’re certainly not representative of the people as a whole. An example might be the two consuls of Rome, who had a sort of “power-sharing” system (though they, unlike most such groups, were elected).

  • A larger group of people: Republics and democracies generally have a large government. This isn’t always because of bureaucracy; a system where each representative has N constituents will obviously need more and more representatives as the population grows. Of course, this smaller segment of society can then choose its own government and even a leader. Most modern countries in the West use a system like this, with a president or prime minister leading a larger body.

Why do they rule?

The rule of law is very important, but the reason the rulers are there in the first place shouldn’t be forgotten, either. Again, there are plenty of possibilities.

  • Will of the people: Ideally, this is the goal of representative democracies and republics. People vote for those most likely to support them. We can argue forever about just how effective this is, but the intent is clear.

  • Will of God: Leaders can claim their position is the result of divine will. Theocracies, quite obviously, follow this method, but medieval kings and emperors in West and East claimed the same thing. The coins of many countries in the British Commonwealth still bear an inscription that translates to “By the grace of God, Queen”, even if nobody really believes that anymore.

  • Family connections: Inheritance of rule was (and is) common in the world. Thrones and seats can be passed from father to son, mother to daughter, or any other relation. And nepotism remains a factor in any position of power. (There’s a reason why Jeb Bush and Hillary Clinton are two of the current election favorites in the US as I write this.)

  • Cold, hard cash: If you can’t get elected, you don’t know the right people, and God won’t help you, you can always buy your way in. Rule by the rich is a hallmark of feudalism and merchant republics alike, but oligarchs are the secret power in many countries today.

How do they rule?

This one’s a lot harder to break down into bullet points. How do your rulers rule? Do they have a codified set of laws, like the US Constitution? Or do they turn to holy scripture (or some facsimile thereof) for laws, morality, and punishment? Or is it simply power, the idea that might makes right?

A “lawful” system like those of most republics, democracies, and similar governments of today gives us a way to change the system from within. The Constitution can be amended, for example. And the turnover inherent in an electoral government means that outmoded ideas eventually get cast aside.

On the other hand, a theocratic system is, by definition, conservative. I don’t mean the political notion of conservative, here, but a philosophical one. Holy books can’t be changed, only reinterpreted, but there are some passages in every scripture that are all but absolute. There aren’t too many ways to read “Thou shalt not kill,” after all. (I don’t think there are too many ways to interpret “the right to keep and bear arms shall not be infringed,” either, but some disagree.)

Similarly, a government ruled by the powerful will tend to be conservative, simply because those in charge don’t want to change things enough to put themselves on the outside. Military conquerors and coups don’t like to reinstitute elections, and corporate overlords aren’t going to allow a higher corporate tax. Power takes care of itself, and a more pessimistic person might say that’s the general tendency of all governments. But that’s a different post.


Really, like I’ve said in many other posts, the best way to make your fictional culture more realistic is to work it out. Using logic, common sense, and the knowledge at everyone’s fingertips, you can figure out just what kind of society you’re making and how it relates to the ones we know.

Government has its reasons for existing, no matter what you might think of it. Those reasons will be the subject of a future post, but you can probably think of a few of them right now. Think of the three main questions I’ve asked so far. Who rules? Why are they the ones in charge? And how do they stay at the top? Answer those, and you’re well on your way to a properly realistic solution.

Let’s make a language – Part 5a: Verbs (Intro)

Last time around, we talked about nouns, the words of people, places, and things. This post will be the counterpoint to that one, because we’re going to look at verbs.

Verbs are words of action. They tell us what is happening. We might walk to the bathroom or drive to the grocery store, and verbs are the words that get us there. But they can also help describe what we are (“to be”), what we possess (“to have”), and what we do (“to do”), along with many other possibilities. Where a noun is an object or an idea, a verb is an action or a state of being.

Just like nouns, every conlang is going to have verbs (except those specifically designed to avoid them, and they do exist). And just like nouns, they have a lot of grammatical baggage. In inflectional languages, verbs will likely have a variety of forms (think of Latin’s verb conjugations). Isolating languages, by contrast, might have verbs that are constant, but they may be able to string them together in such a way that they can create the same shades of meaning. As before, the type of conlang you want to make will influence your verbal structure, but the basic idea of “verb” will remain the same.

Parts of a verb

Where the different categories for nouns are largely concerned with identifying a specific instance of something, verbal categories are more focused on the circumstances of the action in question. The most widely recognized of these include transitivity, tense, aspect, mood, and voice. Below, we’ll look at each of these in turn.

First, though, we need to decide what kind of word the verb will be. This will depend on your conlang, and it will follow the same general pattern as the noun. Isolating languages won’t have a lot of verbal morphology, relying instead on a lot of adverbs, adjectives, and preposition-like phrases, or just more than one verb in a phrase (“serial” verbs). More polysynthetic languages, on the other hand, will tend to concentrate a lot of information in the verbal word itself; agglutinative conlangs will likely have a series of affixes, leading to long words, while inflectional types will instead have fewer affixes each with more permutations.

Second, we need to know a little bit about verbs in relation to nouns. A typical sentence in most languages will have a single verb that acts as the “head”. For our running example, we’ll use the ridiculously simple English sentence the man drives a car. Here, drives is the verb, and you can see why it’s considered the head. Change the verb, and the whole meaning of the action changes as a result. If we say pushes instead, then the man probably ran out of gas. Say steals, and now he’s a thief.

Verbs, like people, have arguments. Here, the term “argument” just means a phrase that’s directly connected to the verb in some way. Our example has two arguments: a subject (the man) and a direct object (a car). If you remember when we were talking about noun case, well, that’s what some of the cases are for. The nominative and accusative (or ergative and absolutive, if you swing that way) basically represent the two main arguments of a verb, subject and object, while the dative indicates the indirect object. (Other cases, like the ablative, aren’t for verbal arguments, so we’ll mostly ignore them here.)


Transitivity

The idea of transitivity isn’t one that most people think about after high school English classes, but it’s central to the construction of a verb. A transitive verb has two arguments (subject and direct object), while an intransitive verb has only one. That would be simple enough, except for the exceptions.

Few languages directly mark transitivity. Some, like English, almost ignore it. More often, though, there’s a special verb form to temporarily change transitive to intransitive, or vice versa. Something like this can be seen in Spanish, where a number of intransitive-looking verbs actually have a direct object, typically a reflexive pronoun like se.

If that wasn’t bad enough, there are a few verbs that don’t really fit the transitive/intransitive dichotomy. The most important of these is give, which (in many languages) takes not two but three arguments. (This is where the dative comes into play, if the language has one.)

And then there are the “impersonal” verbs, which effectively have zero arguments. Weather verbs are the most common of these. Where English uses a dummy subject (it’s raining), Romance languages can just say the verb itself (Spanish llueve).
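The whole range, from zero-argument weather verbs up to three-argument give, can be captured by the idea of valency: each verb expects a particular number of core arguments. A minimal sketch, with verbs and counts chosen purely for illustration:

```python
# Toy valency table: how many core arguments each verb expects,
# from zero-argument weather verbs up to three-argument "give".
VALENCY = {"rain": 0, "sleep": 1, "drive": 2, "give": 3}

def check_clause(verb: str, args: list) -> bool:
    """True if the clause supplies exactly the arguments the verb wants."""
    return VALENCY[verb] == len(args)

print(check_clause("drive", ["the man", "a car"]))  # True
print(check_clause("give", ["she", "him"]))         # False: give wants three
print(check_clause("rain", []))                     # True: impersonal
```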


Tense

Tense describes when an action takes place in relation to outside events. Obviously, there are three main possibilities: past, present, and future. Not all languages use these, though. English, technically speaking, only has a grammatical distinction between past and present; the future tense is just a present-tense verb preceded by the auxiliary will. And this is a fairly common arrangement. Others prefer having three explicit tenses, while a few (such as Chinese) don’t really mark tense at all on the verb.

So, when we have tense at all, past and present are usually in, and future slides in there occasionally. What else is possible? Well, a few languages have the opposite distinction from English, marking past and present the same, but future differently. Another option is to add tenses, splitting either the past or future into more than one. Plenty of real-life languages do this, although probably not any you’ve ever heard of:

  • Cubeo (an Amazonian language) is one that has a “historical” past tense used for events long ago.
  • The Bantu language Mwera has a tense specifically for “today”.
  • The language of the Western Torres Strait Islanders, known as Kala Lagaw Ya, is said to have six tenses, with a present, “near” and “far” versions of past and future, and a “today” past tense.
  • A few languages, mostly in Africa, have special verbal forms for “yesterday” and “tomorrow”.

In our example, we’re talking in the present tense, but we can change it to the past by saying the man drove a car. That doesn’t tell us when he drove it, only that he did at some point before now.


Aspect

Where tense is concerned with an absolute fixing in time of an event, aspect tells us more about the “internal” structure. Is the action complete? Is it still ongoing? Did it just start? These are the questions aspect answers, and it turns out that there can be a lot more of them than you might think.

The first distinction, the most basic and most common, is between events that are complete or ongoing. In linguistic terms, these are the perfective and imperfective, respectively. Taking our example sentence (we’ll need to switch it to the past tense for this, but bear with me), we have the perfective the man drove a car versus the imperfective the man was driving a car. As you can see, the latter fixes the “reference point” of the sentence inside the action, while the perfective version looks at the act of driving from the outside.

There are dozens of aspects, but most languages don’t directly mark more than a handful. Perfective and imperfective are common, but they’re sometimes mixed with tense, too. That’s the source of the English perfect and pluperfect, which are kind of like crossing the past tense and perfective aspect, but the result can be treated as any tense: the man has driven/had driven/will have driven a car.

Wikipedia has a long list of aspects seen in various languages, but remember that many of these are restricted to just a very few languages.


Mood

Mood (or “modality”, a more technically nuanced term) talks about how a speaker feels towards the event he’s talking about. Is it a statement of fact? A command? A wish?

Moods probably aren’t marked quite as much as tense and aspect, but a few of them cross paths with those two in some languages. The subjunctive mood (which can be used for hypotheticals, opinions, desires, etc.) shows up in English, although it’s starting to disappear in the spoken language. In Romance languages, though, it’s still going strong. Imperatives, marking commands, are found in most languages, and they often have their own morphology.

The other moods don’t show up on verbs quite as often. Some languages have an optative mood specifically for hopes and dreams, wishes and desires. Arabic has the jussive, which is a kind of catch-all mood like the subjunctive. A few languages have a special mood marker for questions, for conditions, and for events that the speaker thinks are likely to occur.

As English doesn’t really have morphology for moods, our only change to the example sentence is the subjunctive that the man drive a car, which sounds overly formal, maybe even archaic.


Voice

Voice is a way to describe the relation between the verb and its arguments. The active voice is the main one, and it means that the subject is the main “doer” or agent, while the direct object (if there is one) is the “target” or patient.

The passive voice is a common alteration. Here, the subject and object switch places. The object becomes the subject, but it’s still the patient. The former subject is demoted to a prepositional phrase (or the language’s equivalent), or it’s dropped altogether. In our English example, we would have a car was driven. (Passives in English, incidentally, have an air of formality to them. It’s popular in business specifically because it de-emphasizes the subject, which minimizes liability.)
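That role shuffle can be sketched as an operation on argument structure. This is only an illustration of the idea: a real language does the work with morphology or auxiliaries, not dictionaries.

```python
# Rough sketch of the passive as an operation on argument structure:
# the patient is promoted to subject, and the agent is demoted to an
# oblique phrase ("by the man") or dropped entirely.
def passivize(clause: dict) -> dict:
    return {
        "subject": clause["object"],       # patient promoted
        "oblique": clause.get("subject"),  # agent demoted or omitted
        "verb": clause["verb"],
    }

active = {"subject": "the man", "verb": "drive", "object": "a car"}
print(passivize(active))  # subject is now "a car"
```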

Some languages have a middle voice, where the subject is a little bit of both agent and patient. English doesn’t have this, but it can almost emulate it: the car drove. Obviously, in that sentence, the car isn’t driving something. In a sense, we’re saying that it’s driving itself, but that’s not exactly the middle voice, either. That would be the reflexive, which appears in a few languages.

Other voices include the antipassive (where it’s the object that gets dropped, instead of the subject), the applicative, and the causative. None of these are really present in the languages we’re most familiar with, but they pop up all over the world.

Odds and ends

All this, and we still haven’t touched on things like the infinitive, the gerund, and other miscellany. Well, this post is already getting pretty long, so we’ll look at those as they come up. They’re mostly concerned with larger phrases, anyway, and we haven’t even started on those.

Next time, we’ll look at how Isian and Ardari make their verbs. Along the way, we’ll cover some of the bits left out of this post, like grammatical concord. After that, our next topic will be word order, which means we can finally make a sentence in each of our conlangs.

Assembly: back to basics

When it comes to programming languages, there are limitless possibilities. We have hundreds of choices, filling just about every niche you can think of. Some languages are optimized for speed (C, C++), some for use in a particular environment (Lua), some to be highly readable (Python), some for a kind of purity (Haskell), and some for sheer perversity (Perl). Whatever you want to do, there’s a programming language out there for you.

But there is one language that underlies all the others. If you’ll pardon the cliché, there is one language to rule them all. And that’s assembly language. It’s the original, in a sense, as it existed even before the first compilers. It’s the native language of a computer, too. Like native languages, though, each computer has its own. So we can’t really talk about “assembly language” in the general sense, except to make a few statements. Rather, we have to talk about a specific kind of assembly, like x86, 6502, and so on. (We can also talk about “intermediate languages” as a form of assembly, like .NET’s IL or the LLVM instruction set. Here, I’ll mostly be focusing on processor-specific assembly.)

The reason for assembly

Of course, assembly is the lowest of low-level languages, at least of those a programmer can access. And that is the first hurdle, especially in these days of ever-increasing abstraction. When you have Python and JavaScript and Ruby, why would you ever want to do anything with assembly?

The main, overriding purpose of assembly today is speed. Nothing can be faster, because everything else gets turned into assembly anyway. Yes, compilers are good at optimizing. They’re great. On modern architectures, they’re almost always better than writing assembly by hand. But sometimes they aren’t. New processors have new features that older compiler versions might not know about, for example. And high-level languages, with their object systems and exceptions and lambdas, can get awfully slow. In a few cases, even the relatively tiny overhead of C might be too much.

So, for raw speed, you might need to throw out the trappings of the high level and drop down to the low. But size is also a factor, and for many of the same reasons. Twenty or thirty years ago, everybody worried about the size of a program. Memory was much smaller, hard drives less spacious (or absent altogether, in the earlier days), and networking horrendously slow or nonexistent. Size mattered.

At some point, size stopped mattering. Programs could be distributed on CDs, then DVDs, and those were both seen (in their own times) as near-infinite in capacity. And hard drives and installed memory were always growing. From about the mid 90s to the late 2000s, size was unimportant. (In a way, we’re seeing sizes balloon again, partly as an “anti-piracy” measure: look at PC games that now take tens of gigabytes of storage, including uncompressed audio and video.)

Then, just as suddenly, size became a big factor once again. The tipping point seems to be sometime around 2008 or so. While hard drives, network speeds, and program footprints kept on increasing, we started becoming more aware of the cost of size. That’s because of cache. Cache, if you don’t know, is nothing more than a very fast bit of memory tucked away inside your computer’s CPU. It’s way faster than regular RAM, but it’s much more limited. (That’s relative, of course. Today’s processors actually have more cache memory than my first PC had total RAM.) To get the most out of cache—to prevent the slowdowns that come from needing to fetch data from main memory—we do need to look at size. And there’s nothing smaller than assembly.

Finally, there are other reasons to study at least the basics of assembly language. It’s fun, in a bizarre sort of way. (So fun that there’s a game for it, which is just as bizarre.) It’s informative, in that you get an understanding of computers at the hardware level. And there are still a few places where it’s useful, like microcontrollers (such as the Arduino) and the code generators of compilers.

To be continued

If you made it this far, then you might even be a little interested. That’s great! Next week, we’ll look at assembly language in some of its different guises throughout history.

Writing inertia

It’s a well-known maxim that an object at rest tends to stay at rest, while an object in motion tends to stay in motion. This is such an important concept that it has its own name: inertia. But we usually think of it as a scientific idea. Objects have inertia, and they require outside forces to act on them if they are to start or stop moving.

Inertia, though, in a metaphorical sense, isn’t restricted to physical science. People have a kind of inertia, too. It takes an effort to get out of bed in the morning; for some people, it takes a lot more effort than for others. Athletic types have a hard time relaxing, especially after they’ve passed the apex of their athleticism, while those of us who are more…sedentary have a hard time improving ourselves, simply because it’s so much work.

Writers also have inertia. I know this from personal experience. It takes a big impetus to get me to start writing, whether a post like this, a short story, a novel, or some bit of software. But once I get going, I don’t want to stop. In a sense, it’s like writer’s block, but there’s a bit more to it.

Especially when writing a new piece of fiction (as opposed to a continuation of something I’ve already written), I’ve found it really hard to begin. Once I have the first few paragraphs, the first lines of dialogue, and the barest of setting and plot written down (or typed up), it feels like a dam bursting. The floodgates open, and I can just keep going until I get tired. It’s the same for posts like this. (“Let’s make a language” and the programming-related posts are a lot harder.)

At the start of a new story, I don’t think too much. The hardest part is the opening line, because that requires the most motivation. After that, it’s names. But the text itself, once I get over the first hurdles, seems to flow naturally. Sometimes it’s a trickle, others it’s a torrent, but it’s always there.

In a couple of months, I’ll once again take on the NaNoWriMo (National Novel Writing Month) challenge. Admittedly, I don’t keep to the letter of the rules, but I do keep the original spirit: write a novel of 50,000 words in the month of November. For me, that’s the important aspect. It doesn’t matter that it might be an idea I already had but never started because, as I said, writing inertia means it’s difficult for me to get over that hump and start the story. The timed challenge of NaNoWriMo is the impetus, the force that motivates me.

And I like that outside motivation. It’s why I’ve been “successful”, by my own definition, three out of the four times I’ve tried. In 2010, my first try, I gave up after 10 days and about 8,000 words. Real life interfered in 2011; my grandfather had a stroke on the 3rd of November, and nobody in my extended family got much done that month. Since then, though, I’m essentially 3-for-3: 50,000 words in 2012 (although that was only about a fifth of the whole novel); a complete story at 49,000 words in 2013 (I didn’t feel the need to pad it out); and 50,000 last year (that one’s actually getting released soon, if I have my way). Hopefully, I can make it four in a row.

So that’s really the idea of this post. Inertia is real, writing inertia doubly so. If you’re feeling it, and November seems too far away, find another way. There are a few sites out there with writing prompts, and you can always find a challenge to help focus you on your task. Whatever you do, it’s worth it to start writing. And once you start, you’ll keep going until you have to stop.

Irregularity in language

No natural language in the world is completely and totally regular. We think of English as an extreme of irregularity, and it really is, but all languages have at least some part of their grammar where things don’t always go as planned. And there’s nothing wrong with that. That’s a natural part of a language’s evolution.

Conlangs, on the other hand, are often far too regular. For an auxlang, intended for clear communication, that’s actually a good thing. There, you want regularity, predictability. You want the “clockwork morphology” of Esperanto or Lojban. The problem comes with the artistic conlangs. These, especially those made by novices, can be too predictable. It’s not exactly a big deal—every plural ending in -i isn’t going to break the immersion of a story for the vast majority of people—but it’s a little wart that you might want to do away with.

Count the ways

Irregularity comes in a few different varieties. Mostly, though, they’re all the same: a place where the normal rules of grammar don’t quite work. English is full of these, as everyone knows. Plurals are marked by -s, except when they’re not: geese, oxen, deer, people. Past tense is -ed, except that it sometimes isn’t: go and went. (“Strong” verbs like “get” that change vowels don’t really count, because they are regular, but in their own way.) And let’s not even get started on English orthography.
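This kind of irregular marking boils down to a regular rule plus an exception list that overrides it, which is easy to see in miniature:

```python
# Irregularity in miniature: a regular rule (-s) plus a small
# exception list that overrides it, much like English plurals.
# The exception list is the part a learner simply has to memorize.
IRREGULAR_PLURALS = {"goose": "geese", "ox": "oxen",
                     "deer": "deer", "person": "people"}

def pluralize(noun: str) -> str:
    return IRREGULAR_PLURALS.get(noun, noun + "s")

print(pluralize("car"))    # cars
print(pluralize("goose"))  # geese
```

For a conlang, sprinkling in a handful of entries like these, with a plausible historical excuse for each, goes a long way toward naturalism.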

Some other languages aren’t much better. French has a spelling system that matches its pronunciation in theory only, and Irish looks like a keyboard malfunction. Inflectional grammars are full of oddities; ask any Latin student. Arabic’s broken plurals are just that: broken. Chinese tone patterns change in complex and unpredictable ways, despite tone supposedly being an integral part of a morpheme.

On the other hand, there are a few languages out there that seem to strive for regularity. Turkish is always cited as an example here, the joke being that there’s one irregular verb, and it’s only there so that students will know what to expect when they study other languages.

Conlangs are a sharp contrast. Esperanto’s plurals are always -j. There’s no small class of words marked by -m or anything like that. Again, for the purposes of clarity, that’s a good thing. But it’s not natural.

Phonological irregularity

Irregularity in a language’s phonology happens for a few different reasons. However, because phonology is so central to the character of a language, it can be hard to spot. Here are a few places where it can show up:

  • Borrowing: Especially as English (American English in particular) suffuses every corner of the planet, languages can pick up new words and bring new sounds with them. This did happen in English’s history, as it brought the /ʒ/ sound (“pleasure”, etc.) from French, but a more extreme example is the number of Bantu languages that borrowed click sounds from their Khoisan neighbors.

  • Onomatopoeia: The sounds of nature can be emulated by speech, but there’s not always a perfect correspondence between the two. The “meow” of a cat, for instance, contains a sequence of sounds rare in the rest of English.

  • Register: Slang and colloquialism can create phonological irregularities, although this isn’t all that common. English has “yeah” and “nah”, both with a final /æ/, which appears in no other word.

Grammatical irregularity

This is what most people think of when they consider irregularity in a language. Examples include:

  • Irregular marking: We’ve already seen examples of English plurals and past tense. Pretty much every other natural language has something else to throw in here.

  • Gender differences: I’m not just talking about the weirdness of having the word for “girl” in the neuter gender. The Romance languages also have a curious oddity where some masculine-looking words take a feminine article, as in Spanish la mano.

  • Number differences: This includes all those English words where the plural is the same as the singular, like deer and fish, as well as plural-only nouns like scissors.

  • Borrowing: Loanwords can bring their own grammar with them. What’s the plural of manga or even rendezvous?

Lexical irregularity

Sometimes words just don’t fit. Look at the English verb to be. In the present, it’s is or are; in the past, was or were; and so on. Totally unpredictable. This can happen in any language, and in more ways than one.

  • Substitution: One word form can be swapped out for another. This is the case with to be and its varied forms.

  • Meaning changes: Most common in slang, like using “bad” to mean “good”.

  • Useless affixes: “Inflammable means flammable?” The same thing is happening now as “irregardless” becomes more widespread.

  • Archaisms: Old forms can be kept around in fixed phrases. In English, this is most commonly the case with the Bible and Shakespeare, but “to and fro” is still around, too.

Orthographic irregularity

There are spelling bees for English. How many other languages can say that? How many would want to? As a language evolves, its orthography doesn’t necessarily follow, especially in languages where the standard spelling was fixed long ago. Here are a few ways that spelling can drift from pronunciation:

  • Silent letters: English is full of these, French more so. And then there are all those extra silent letters added to make words look more like Latin. Case in point, debt didn’t always have the b; it was added to remind people of debitum. (Silent letters can even be dialectal in nature. I pronounce wh and w differently, but few other Americans do.)

  • Missing letters: Nowhere in English can you have dg followed by a consonant except in the American spelling of words like judgment, where the e that would soften the g is implied. (I lost a spelling bee on this very word, in fact, but that was a long time ago.)

  • Sound changes: These can come from evolution or what seems like sheer perversity. (English gh is a case of the latter, I think.)

  • Borrowing: As phonological understanding has grown, we’ve adopted a kind of “standard” orthography for loanwords, roughly equivalent to Latin, Spanish, or Italian. Problem is, this is nothing at all like the standard orthography already present in English. And don’t even get me started on the attempts at rendering Arabic words into English letters.

In closing

All this is not to say that you should run off and add hundreds of irregular forms to your conlang. Again, if it’s an auxlang, you don’t want that. Even conlangs made for a story should use irregular words only sparingly. But artistic conlangs can gain a lot of flavor and “realism” from having a weird word here and there. It makes things harder to learn, obviously, but it’s the natural thing to do.

Transparent terrain with Tiled

Tiled is a great application for game developers. One of its niftiest features is the Terrain tool, which makes it pretty easy to draw a tilemap that looks good with minimal effort.

Unfortunately, the Terrain tool does have its limitations. One of those is a big one: it doesn’t work across layers. Layers are essential for any drawing but the simplest MS Paint sketches, and it’s a shame that such a valuable development tool can’t use them to their fullest potential.

Well, here’s a quick and dirty way to work around that inability in a specific case that I ran into recently.

The problem

A lot of the “indie” tile sets out there use transparency (or a color key, which has the same effect) to make nice-looking borders. The one I’m using here, Kenney’s excellent Roguelike/RPG pack, is one such set.

The problem comes when you want to use it in Tiled. Because of the transparency, you get an effect like this:

Transparent terrain

Normally, you’d just use layers to work around this, maybe by making separate “grass” and “road” layers. If you’re using the Terrain tool, though, you can’t do this. The tool relies on “transitions” between tile types. Drawing on a new layer means you’re starting with a blank slate. And that means no transitions.

The solution

The solution is simple, and it’s pretty much what you’d expect. In a normal tilemap, you might have the following layers (from the bottom up):

  1. The bare ground (grass, sand, water, whatever),
  2. Roads, paths, and other terrain modifications,
  3. Buildings, trees, and other placeable objects.

My solution to the Terrain tool’s limitation is to draw all the “terrain” effects on a single layer. Below that layer would be a “base”, which only contains the ground tiles needed to fill in the gaps. So our list would look more like this:

  1. Base (only needs to be filled in under tiles with transparency),
  2. Terrain, including roads and other mods,
  3. Placeable objects, as before.

For our road on grassland above, we can use the Terrain tool just as described in the official tutorial. After we’re done, we can create a new layer underneath that one. On it, we would draw the base grass tiles where we have the transparent gaps on our road. (Of course, we can just bucket fill the whole thing, too. That’s quicker, but this way is more flexible.) The end result? Something like this:
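If you’re curious how the layer stack resolves when the map is drawn, here’s a minimal sketch in Python, with integer tile IDs standing in for tiles. (This is tile-level compositing only, not Tiled’s actual per-pixel alpha blending, and the IDs are made up for the example.)

```python
def composite(layers, empty=0):
    """Flatten a stack of tile layers, bottom to top.

    Each layer is a 2D grid of tile IDs; `empty` marks a transparent
    cell. The topmost non-empty tile in each cell wins.
    """
    rows, cols = len(layers[0]), len(layers[0][0])
    out = [[empty] * cols for _ in range(rows)]
    for layer in layers:                 # bottom layer first
        for r in range(rows):
            for c in range(cols):
                if layer[r][c] != empty:
                    out[r][c] = layer[r][c]
    return out

GRASS, ROAD = 1, 2                            # illustrative tile IDs
base    = [[GRASS, GRASS], [GRASS, GRASS]]    # fills in the gaps
terrain = [[0, ROAD], [ROAD, 0]]              # Terrain-tool layer, with holes
composite([base, terrain])   # [[1, 2], [2, 1]] -- road over grass
```

The base layer only matters wherever the terrain layer left an empty cell, which is why filling it in only under the transparent gaps works just as well as bucket-filling the whole thing.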

Filling in the gaps

It’s a little more work, but it ends up being worth it. And you were going to have to do it anyway.

Death and remembrance

Early in the morning of August 16 (the day I’m writing this), my stepdad’s mother passed away after a lengthy and increasingly tiresome battle with Alzheimer’s. This post isn’t a eulogy; for various reasons, I don’t feel like I’m the right person for such a job. Instead, I’m using it as a learning experience, as I have the past few years during her slow decline. So this post is about death, a morbid topic in any event. It’s not about the simple fact of death, however, but how a culture perceives that fact.

Weight of history

Burial ceremonies are some of the oldest evidence of true culture and civilization that we have. The idea of burying the dead with mementos even extends across species boundaries: Neanderthal remains have been found with tools. And the dead, our dead, are numerous, as the rising ground levels in parts of Europe (caused by increasing numbers of burials throughout the ages) can attest. Death’s traditions are evident from the mummies of Egypt and Peru, the mausoleums of medieval Europe or the classical world, and the Terracotta Army of China. All societies have death, and they all must confront it, so let’s see how they do it.

The role of religion

Religion, in a very real sense, is ultimately an attempt to make sense of death’s finality. The most ancient religious practices we know deal with two main topics: the creation of the world, and the existence and form of an afterlife. Every faith has its own way of answering those two core mysteries. Once you wade through all the commandments and prohibitions and stories and revelations, that’s really all you’re left with.

One of the oldest and most enduring ideas is the return to the earth. This one is common in “pagan” beliefs, but it’s also a central concept in the Abrahamic religions of the modern West. “Ashes to ashes, dust to dust,” is one popular variation of the statement. And it fits the biological “circle of life”, too. The body of the deceased does return to the earth (whether in whole or as ashes), and that provides sustenance, allowing new life to bloom.

More organized religion, though, needs more, and that is where we get into the murky waters of the soul. What that is, nobody truly knows, and that’s not even a metaphor: the notion of “soul” is different for different peoples. Is it the essence of humanity that separates us from lower animals? Is it intelligence and self-awareness? A spark of the divine?

In truth, it doesn’t really matter. Once religion offers the idea of a soul that is separate from the body, it must then explain what happens to that soul once the body can no longer support it. Thousands of years’ worth of theologians have argued that point, up to—and including—starting wars in the name of their own interpretation. The reason they can do that is simple: all the ideas are variations on the same basic theme.

That basic theme is this: people die. That much can’t be argued. What happens next is the realm of God or gods, but it usually follows a general pattern. Souls are judged based on some subset of their actions in life, such as good deeds versus bad, adherence to custom or precept, or general faithfulness. Their form of afterlife then depends on the outcome. “Good” souls (whatever that is decided to mean) are rewarded in some way, while “bad” souls are condemned. The harsher faiths make this condemnation last forever, but it’s most often (and more justly, in my opinion) for a period of time proportional to the evils committed in life.

The reward, in general, is a second, usually eternal life spent in a utopia, however that would be defined by the religion in question. Christianity, for example, really only specifies that souls in heaven are in the presence of God, but popular thought has transformed that to the life of delights among the clouds that we see portrayed in media; early Church thought held to an earthly heaven instead. Islam, popularly, has the “72 eternal virgins” presented to the faithful in heaven. In Norse mythology, valiant souls are allowed to dine with the gods and heroes in Valhalla, but they must then fight the final battle, Ragnarök (which they are destined to lose, strangely enough). In even these three disparate cases, you can see the similarities: the good receive an idyllic life, something they could only dream of in the confines of their body.

Ceremonies of death

Religion, then, tells us what happens to the soul, but there is still the matter of the body. It must be disposed of, and even early cultures understood this. But how do we dispose of something that was once human while retaining the dignity of the person who once inhabited it?

Ceremonial burial is the oldest trick in the book, so to speak. It’s one of the markers of intelligence and organization in the archaeological record, and it dates back to long before our idea of civilization. And it’s still practiced on a wide scale today; my stepdad’s mother, the ultimate cause of this post, will be buried in the coming days.

Burial takes different forms for different peoples, but it’s always a ceremony. The dead are often buried with some of their possessions, and this may be the result of some primal belief that they’ll need them in the hereafter. We don’t know for sure about the rites and rituals of ancient cultures, but we can easily imagine that they were not much different from our own. We in the modern world say a few words, remember the deeds of the deceased, lower the body into the ground, leave a marker, and promise to come back soon. Some people have more elaborate shrines, others have only a bare stone inscribed with their name. Some families plant flowers or leave baubles (my cousin, who passed away at the beginning of last year, has a large and frankly gaudy array of such things adorning his grave, including solar-powered lights, wind chimes, and pictures).

Anywhere the dead are buried, it’s pretty much the same. They’re placed in the ground in a special, reserved place (a cemetery). The graves are marked, both for ease of remembrance and as a helpful reminder of where not to bury another. The body is left in some enclosure to protect it from prying eyes, and keepsakes are typically beside it.

Burial isn’t the only option, though, not even in the modern world. Cremation, where the body is burned and rendered into ash, is still popular. (A local scandal some years ago involved a crematorium whose owner was, in fact, dumping the bodies in a pond behind the place and filling the urns with things like cement or ground bones.) Today, cremation is seen as an alternative to burial, but some cultures did (and do) see it or something similar as the primary method of disposing of a person’s earthly remains. The Viking pyre is fixed in our imagination, and television sitcoms almost always have a dead relative’s ashes sitting somewhere vulnerable.

I’ll admit that I don’t see the purpose of cremation. If you believe in the resurrection of souls into their reformed earthly bodies, as in some varieties of Christianity and Judaism, then you’d have to view the idea of burning the body to ash as something akin to blasphemy. On the other hand, I can see the allure. The key component of a cremation is fire, and fire is the ultimate in human tools. The story of human civilization, in a very real sense, is the story of how we have tamed fire. So it’s easy to see how powerful a statement cremation or a funeral pyre can make.

Burying and burning were the two main ways of disposing of remains for the vast majority of humanity’s history. Nowadays, we have a few other options: donating the body to science, organ donation, cryogenic freezing, etc. Notice, though, that these all have a “technological” connotation. Cryogenics is the realm of sci-fi; organ donation is modern medicine. There’s still a ceremony, but the final result is much different.

Closing thoughts

Death in a culture brings together a lot of things: religion, ritual, the idea of family. Even the legal system gets involved these days, because of things like life insurance, death certificates, and the like. It’s more than just the end of life, and there’s a reason why the most powerful, most immersive stories are often those that deal with death in a realistic way. People mourn, they weep, they celebrate the life and times of the deceased.

We have funerals and wakes and obituaries because no man is an island. Everyone is connected, everyone has family and friends. The living are affected by death, and far more than the deceased. We’re the ones who feel it, who have to carry on, and the elaborate ceremonies of death are our oldest, most human way of coping.

We honor the fallen because we knew them in life, and we hope to know them again in an afterlife, whatever form that may take. But, curiously, death has a dichotomy. Religion clashes with ancient tradition, and the two have become nearly inseparable. A couple of days from now, my stepdad might be sitting in the local funeral home’s chapel, listening to a service for his mother that invokes Christ and resurrection and other theology, but he’ll be looking at a casket that is filled with tiny treasures, a way of honoring the dead that has continued, unbroken, for tens of thousands of years. And that is the truth of culture.

Let’s make a language – Part 4c: Nouns (Ardari)

For nouns in Ardari, we can afford to be a little more daring. As we’ve decided, Ardari is an agglutinative language with fusional (or inflectional) aspects, and now we’ll get to see a bit of what that entails.

Three types of nouns

Ardari has three genders of nouns: masculine, feminine, and neuter. Like languages such as Spanish or German, these don’t necessarily correspond to the notions of “male”, “female”, and “everything else”. Instead, they’re a little bit arbitrary, but we won’t make the same mistakes as natural languages when it comes to assigning nouns to genders. (Actually, we will make the same mistakes, but on purpose, not through the vagaries of linguistic evolution.)

Each noun is inflected not only for gender, but also for number and case. Number can be either singular or plural, just like with Isian. As for case, well, we have five of them:

  • Nominative, used mostly for subjects of sentences,
  • Accusative, used mainly for the direct objects,
  • Dative, occasionally seen for indirect objects, but mostly used for the Ardari equivalent of prepositional phrases,
  • Genitive, indicating possession, composition, and most places where English uses “of”,
  • Vocative, only used when addressing someone; as a result, it only makes sense with names and certain nouns.

So we have three genders, two numbers, and five cases. Multiply those together, and you get 30 possibilities for declension. (If you took Latin in school, that word might have made you shudder. Sorry.) It’s not quite that bad, since some of these will overlap, but it’s still a lot to take in. That’s the difficulty—and the beauty, for some—of fusional languages.


Masculine nouns in Ardari all have stems that end in -a. One example is kona “man”, and this table shows its declensions:

kona Singular Plural
Nominative kona kono
Accusative konan konon
Genitive kone konoj
Dative konak konon
Vocative konaj konaj

Roughly speaking, you can translate kono as “men”, kone as “of a man”, etc. We run into a bit of a problem with konon, since it could be either accusative or dative. That’s okay; things like this happen often in fusional languages. We’ll say it was caused by sound changes. We just have to remember that translating will need a bit more context.
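For the computationally inclined, a paradigm like this is just a lookup table. Here’s the kona table as a Python dict (the data structure is mine; the forms are straight from the table above):

```python
# Declension of kona "man"; keys are (case, number) pairs.
KONA = {
    ("nominative", "singular"): "kona",  ("nominative", "plural"): "kono",
    ("accusative", "singular"): "konan", ("accusative", "plural"): "konon",
    ("genitive",   "singular"): "kone",  ("genitive",   "plural"): "konoj",
    ("dative",     "singular"): "konak", ("dative",     "plural"): "konon",
    ("vocative",   "singular"): "konaj", ("vocative",   "plural"): "konaj",
}

# The accusative/dative overlap is visible right in the data:
KONA[("accusative", "plural")] == KONA[("dative", "plural")]   # True
```

A table like this is also a handy sanity check while designing: any two keys mapping to the same form is a case of syncretism, whether you intended it or not.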

Also, many of these declensions will change the stress of a word to the final syllable, following our phonological rules from Part 1.


Feminine noun stems end in -i, and they have these declensions (using chi “sun” as our example):

chi Singular Plural
Nominative chi chir
Accusative chis chell
Genitive chini chisèn
Dative chise chiti
Vocative chi chi

The same translation guides apply here, except we don’t have the problem of “syncretism”, where two cases share the same form.


Neuter nouns have stems that can end in any consonant. Using the example of tyèk “house”, we have:

tyèk Singular Plural
Nominative tyèk tyèkar
Accusative tyèke tyèkòn
Genitive tyèkin tyèkoj
Dative tyèkèt tyèkoda
Vocative tyèkaj tyèkaj

A couple of these (genitive plural, vocative) are recycled from the masculine table. Again, that’s fairly common in languages of this type, so I added it for naturalism.


Unlike Isian, Ardari doesn’t use separate words for its articles. Instead, it has a “definiteness” marker that can be added to the end of a noun. It changes form based on the gender and number of the noun you’re attaching it to, coming in one of a few forms:

  • -tö is the general singular marker, used on all three genders in all cases except the neuter dative.
  • -dys is used on masculine and most neuter plurals (except, again, the dative).
  • -tös is for feminine plurals.
  • Neuter nouns in the dative use -ö for the singular and -s for the plural.

The neuter dative is weird, partly because of a phonological process called “haplology”, where consecutive sounds or syllables that are very close in sound merge into one. Take our example above of tyèk. You’d expect the datives to be tyèkèttö and tyèkodadys. For the singular, the case marker already ends in -t, so it’s just a matter of dropping that sound from the “article” suffix. The plural would have two syllables da and dys next to each other. Speakers of languages are lazy, so they’d likely combine those into something a bit less time-consuming, thus we have tyèkodas “to the houses”.
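The marker rules are regular enough to write down as a function. Here’s a sketch in Python; it takes a noun already declined for case, and the konatö example at the end is my own extrapolation from the general -tö rule rather than a form given in the tables:

```python
def definite(declined_noun, gender, number, case):
    """Attach the Ardari definiteness marker to an already-declined noun."""
    if gender == "neuter" and case == "dative":
        # Haplology: the suffix loses material next to the case ending.
        return declined_noun + ("ö" if number == "singular" else "s")
    if number == "plural":
        return declined_noun + ("tös" if gender == "feminine" else "dys")
    return declined_noun + "tö"   # -tö is the general singular marker

definite("tyèkoda", "neuter", "plural", "dative")        # "tyèkodas"
definite("kona", "masculine", "singular", "nominative")  # "konatö"
```

Writing the rules out this way makes the neuter dative’s special-case status obvious: it’s the one branch that can’t be expressed as “pick a suffix by gender and number”.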

New words

Even though I didn’t actually introduce any new vocabulary in this post, here’s the same word list from last week’s Isian post, now with Ardari equivalents. Two words are a little different. “Child” appears in three gendered forms (masculine, feminine, and a neuter version for “unknown” or “unimportant”). “Friend”, on the other hand, is a simple substitution of stem vowels for masculine or feminine, but you have to pick one, although a word like ast (a “neutered” formation) might be common in some dialects of spoken Ardari.

  • sword: èngla
  • cup: kykad
  • mother: emi
  • father: aba
  • woman: näli
  • child: pwa (boy) / gli (girl) / sèd (any or unknown)
  • friend: asta (male) / asti (female)
  • head: chäf
  • eye: agya
  • mouth: mim
  • hand: kyur
  • foot: allga
  • cat: avbi
  • flower: afli
  • shirt: tèwar

Fractal rivers with Inkscape

I’m not good with graphics. I’m awful at drawing. Maps, however, are one of the many areas where a non-artist like myself can make up for a lack of skill by using computers. Inkscape is one of those tools that can really help with map-making (along with about a thousand other graphical tasks). It’s free, it works on just about any computer you can imagine, and it’s very much becoming a standard for vector graphics for the 99% of people that can’t afford Adobe products or an art team.

For a map of a nation or world, rivers are an important yet difficult part of the construction process. They weave, meander, and never follow a straight line. They’re annoying, to put it mildly. But Inkscape has a tool that can give us decent-looking rivers with only a small amount of effort. To use it, we must harness the power of fractals.

Fractals in nature

Fractals, as you may know (and if you don’t, a quick search should net you more information than you ever wanted to know), are a mathematical construct, but they’re also incredibly good at modeling nature. Trees follow a fractal pattern, as do coastlines. Rivers aren’t exactly fractal, but they can look like it from a great enough distance, with their networks of tributaries.

The key idea is self-similarity; basically, a fractal is an object that looks pretty much the same no matter how much you zoom in. Trees have large branches, and those have smaller branches, and then those have the little twigs that sometimes branch themselves. Rivers are fed by smaller rivers, which are fed by streams and creeks and springs. The only difference is the scale.

Inkscape fractals

Inkscape’s fractals are a lot simpler than most mathematical versions. The built-in extension, from what I can tell, uses an algorithm called midpoint displacement. Roughly speaking, it does the following:

  • Find the midpoint of a line segment,
  • Move that point in a direction perpendicular to the line segment by a random amount,
  • Create two new segments that run from either endpoint to the new, displaced midpoint,
  • Start over with each of the new line segments.

The algorithm subdivides the segment a number of times. Each new stage has segments that are half the length of the old ones, meaning that, after n subdivisions, you end up with 2^n segments. How much the midpoint can be moved is another parameter, called smoothness. The higher the smoothness, the less the algorithm can move the midpoint, resulting in a smoother subdivision. (In most implementations of this algorithm, the amount of displacement is scaled, so each further stage can move a smaller absolute distance, though still the same relative to the size of the segment.)
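Here’s what that algorithm might look like in code, as a minimal Python sketch. The parameter names are my own, and Inkscape’s actual extension is written differently, but the idea is the same:

```python
import math
import random

def fractalize(p0, p1, subdivisions=4, smoothness=4.0, rng=None):
    """Midpoint-displacement subdivision of the segment p0-p1.

    Returns a list of (x, y) points. Higher `smoothness` allows less
    displacement, so the result looks smoother. Displacement is
    proportional to segment length, so each stage moves points a
    smaller absolute distance -- the scaling described above.
    """
    rng = rng or random.Random()

    def subdivide(a, b, depth):
        if depth == 0:
            return [a]
        # 1. Find the midpoint of the segment.
        mx, my = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
        # 2. Move it perpendicular to the segment by a random amount.
        dx, dy = b[0] - a[0], b[1] - a[1]
        length = math.hypot(dx, dy)
        px, py = -dy / length, dx / length
        amount = rng.uniform(-length, length) / smoothness
        mid = (mx + px * amount, my + py * amount)
        # 3-4. Recurse on the two new half-segments.
        return subdivide(a, mid, depth - 1) + subdivide(mid, b, depth - 1)

    return subdivide(p0, p1, subdivisions) + [p1]

points = fractalize((0.0, 0.0), (100.0, 0.0), subdivisions=5)
len(points)   # 2**5 + 1 = 33 points, i.e. 32 segments
```

Note that the two original endpoints never move, only the midpoints between them, which is what makes it safe to snap other paths to a fractalized river’s ends.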

The method

  1. First things first, we need to start drawing an outline of the shape of our river. It doesn’t have to be perfect. Besides, this sketch is going to be completely modified. Here, you can see what I’ve started; this was all done with the Line tool (Shift+F6):

    Designing the path

  2. Once you’ve got a rough outline, press Enter to end the path:

    Finishing the outline

  3. If you want to have curved segments, that’s okay, too. The fractal extension works just fine with them. Here, I’ve dragged some nodes and handles around using the path editor (F2):

    Adding some curves

  4. Now it’s time to really shake things up. Make sure your path is selected, and go to Extensions -> Modify Path -> Fractalize:

    Fractalize in the menus

  5. This displays a dialog box with two text inputs and a checkbox. This is the interface to the Fractalize extension. You have the option of changing the number of subdivisions (more subdivisions gives a more detailed path, at the expense of more memory) and the smoothness (as above, a higher smoothness means that each displacement has less room to maneuver, which makes the final result look smoother). “Live preview” shows you the result of the Fractalize algorithm before you commit to it, changing it as you change the parameters. Unless your computer seems to be struggling, there’s no reason not to have it on.

    The Fractalize extension

  6. When you’re happy with the settings, click Apply. Your outlined path will now be replaced by the fractalized result. I set mine to be blue. (Shift+click on the color swatch to set the stroke color.)

    The finished product

And that’s all there is to it! Now, you can go on from here if you like. A proper, natural river is a system, so you’ll want to add the smaller rivers that feed into this one. Inkscape has the option to snap to nodes, which lets you start a path from any point in your river. Since Fractalize keeps the endpoints the same, you can build your river outwards as much as you need.

Exoplanets: an introduction for worldbuilders

With the recent discovery of Kepler-452b, planets beyond our solar system—called extrasolar planets or exoplanets—have come into the news again. This has already happened a few times: the Gliese 581 system in 2007 (and again a couple of years ago); the early discoveries of 51 Pegasi b and 70 Virginis b in the mid 1990s; and Alpha Centauri, our nearest known celestial neighbor, in 2012.

For an author of science fiction, it’s a great time to be alive, reminiscent of the forties and fifties, when the whole solar system was all but unknown and writers were only limited by their imaginations. Planets, we now know, are just about everywhere you look. We haven’t found an identical match to Earth (yet), and there’s still no conclusive evidence of habitation on any of these newfound worlds, but we can say for certain that other planets are out there. So, as we continue the interminable wait for the new planet-hunters like TESS, the James Webb Space Telescope, Gaia, and all those that have yet to leave the drawing board, let’s take a quick look at what we know, and how we got here.

Before it all began: the 1980s

I was born in 1983, so I’ve lived in four different decades now, and I’ve been able to witness the birth and maturity of the study of exoplanets. But younger people, those who have lived their whole lives knowing that other solar systems exist beyond our own, don’t realize how little we actually knew not that long ago.

Thirty years ago, there were nine known planets. (I’ll completely sidestep the Pluto argument in this post.) Obviously, we know Earth quite well. Mars was a frontier, and there was still talk about near-term manned missions to go there. Venus had been uncovered as the pressure cooker that it is. Jupiter was on the radar, but largely unknown. Mercury was the target of flybys, but no orbiter—it was just too hard, too expensive. The Voyager mission gave us our first up-close looks at Saturn and Uranus, and Neptune would join them by the end of the decade.

Every star besides the Sun, though, was a blank slate. Peter van de Kamp claimed he had detected planets around Barnard’s Star in the 1960s, but his results weren’t repeatable. In any case, the instruments of three decades past simply weren’t precise enough or powerful enough to give us data we could trust.

What this meant, though, was that the field was fertile ground for science fiction. Want to put an Earthlike planet around Vega or Arcturus? Nobody could prove it didn’t exist, so nobody could say you were wrong. Solar systems were assumed to be there, if below our detection threshold, and they were assumed to be like ours: terrestrial planets on the inside, gas giant in the outer reaches, with one or more asteroid belts here or there.

The discoveries: the 1990s

As the 80s gave way to the 90s, technology progressed. Computers got faster, instruments better. Telescopes got bigger or got put into space. And this opened the door for a new find: the extrasolar planet. The first one, a huge gas giant (or small brown dwarf, in which case it doesn’t count), was detected in 1989 around the star HD 114762, but it took two years to be confirmed.

And then it gets weird. In 1992, Aleksander Wolszczan and Dale Frail discovered irregularities in the emissions of a pulsar designated PSR B1257+12. There’s not much out there that can mess up a pulsar’s, well, pulsing, but planets could do it, and that is indeed what they found. Two of them, in fact, with a third following a couple of years later, and the innermost is still the smallest exoplanet known. (I hope that will be changed in the not-too-distant future.) Of course, the creation of a pulsar is a wild, crazy, and deadly event, and the pulsar planets brought about a ton of questions, but that need not concern us here. The important point is that they were found, and this was concrete proof that other planets existed beyond our solar system.

Then, in the middle of the decade, the floodgates opened a crack. Planets began to be discovered around stars on the main sequence, stars like our sun. These were all gas giants, most of them far larger than Jupiter, and many of them were in odd orbits, either highly eccentric or much too close to their star. Either way, our solar system clearly wasn’t a model for those.

As these “hot Jupiters” became more and more numerous, the old model had to be updated. Sure, our solar system’s progression of terrestrial, gaseous, and icy (with occasional asteroids thrown in) could still work. Maybe other stars had familiar systems. After all, the hot Jupiters were an artifact of selection bias: the best method we had to detect planets—radial velocity, which relies on the Doppler effect—was most sensitive to large planets orbiting close to a star. But the fact that we had so many of them, with almost no evidence of anything resembling our own, meant that they had to be accounted for in fiction. Thus, the idea of a gas giant having habitable moons began to grow in popularity. Again, there’s no way to disprove it.

Acceptance: the 2000s

With the turn of the millennium, extrasolar planets—soon to be shortened to the “exoplanet” moniker in popular use today—continued to come in. Advances in technology, along with the longer observation times, brought the “floor” of size further and further down. Jupiter analogues became fairly common, then Saturn-alikes. Soon, Uranus and Neptune had their clones in distant systems.

And Earth 2 was in sight, as the major space agencies had a plan. NASA had a series of three instruments, all space-based, each increasingly larger, that would usher in a new era of planetary research. Kepler would be launched around 2005-2007, and it would give us hard statistics on the population of planets in our galaxy. The Space Interferometry Mission (SIM) would follow a few years later, and it would find the first true Earthlike planets. Later, in the early to mid 2010s, the Terrestrial Planet Finder (TPF) would locate and characterize planets like Earth, showing us their atmospheres and maybe even ocean coverage. In Europe, ESA had a similar path, with CoRoT, Gaia, and Darwin.

And we know how that turned out. Kepler was delayed until 2009, and it stopped working a couple of years ago. SIM was defunded, then canceled. TPF never got out of the planning stages. Across the ocean, CoRoT launched, but it was nowhere near as precise as they thought; it’s given us a steady stream of gas giants, but not much else. Gaia is currently working, but also at a reduced capacity. Darwin met the same sad fate as TPF.

But after all that doom and gloom had passed, something incredible happened. The smallest of the new discoveries were smaller than Neptune, but still larger than Earth. That gap in mass (a factor of about 17) is an area with no known representatives in our solar system. Logically, this new category of planet quickly got the name “super-Earth”. And some of these super-Earths turned up in interesting places: Gliese 581 c was possibly within its star’s habitable zone, as was its sister planet, Gliese 581 d. Sure, Gliese 581 itself was a red dwarf, and “c” has a year that lasts less than one of our months, but it was a rocky planet in an orbit where liquid water was possible. And that’s huge.
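That “factor of about 17” is easy to check yourself with rough textbook masses (values in kilograms, rounded):

```python
earth_mass   = 5.97e24   # kg, approximate
neptune_mass = 1.02e26   # kg, approximate

ratio = neptune_mass / earth_mass
round(ratio, 1)   # about 17.1 -- the gap super-Earths fall into
```

Anything between those two masses has no analogue orbiting our sun, which is exactly why the super-Earths came as such a surprise.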

By the end of 2009, super-Earths were starting to come into their own, and Kepler finally launched, promising to give us even more of them. Hot Jupiters suddenly became oddballs again. And science fiction has adapted. Now there were inhabited red dwarf planets, some five to ten times Earth’s mass, with double the gravity. New theories gave rise to imagined “carbon planets”— bigger, warmer versions of Titan, with lakes of oil and mountains of diamond—or “ocean worlds” of superheated water, atmospheric hydrogen and helium, and the occasional bit of rocky land.

Worldbuilding became an art of imagining something as different from the known as possible, as all evidence now pointed to Earth, and indeed the whole solar system, as being an outlier. For starters, our star is a yellow dwarf, a curious part of the main sequence: long-lived enough for planets to form and life to evolve, yet uncommon enough that worlds like ours should be rare. Red dwarfs, by contrast, are everywhere, they live effectively forever, and we know a lot of them have planets.

Here and now: the 2010s

Through the first half of this decade, that’s pretty much the status quo. Super-Earths seem to be ubiquitous, “gas dwarfs” like Neptune numerous, and hot Jupiters comparatively rare. There’s still a lot of Kepler data to sift through, however.

But now we’ve almost come full circle. At the start of my lifetime, planets could be anything. They could be anywhere. And planetary systems probably looked a lot like ours.

Then, we started finding them, and that began to constrain our vision. The solar system was now rare, statistically improbable or even impossible. Super-Earths, though, were ascendant, and they offered a new inspiration.

And, finally, we come to Kepler-452b. It’s still a super-Earth. There’s no doubt about that; even the smallest estimates put it at about 1.6 times the size of Earth. But it’s orbiting a star like ours, in a spot like ours, and it joins a very select group by doing that. In the coming years, that group should expand, hopefully by leaps and bounds. But it’s what 452b states that’s important: Earthlike planets are out there, in Earthlike orbits around Sunlike stars.

For worldbuilders, that means we can go back to the good old days. We can make our fictional worlds match our own, and nobody can tell us that they’re unlikely to occur. Thirty years ago, we could write whatever we wanted because there was no way to disprove it. Now, we can write what we want because it just might be proven.

What a time to build a world.