Assembly: back to basics

When it comes to programming languages, there are limitless possibilities. We have hundreds of choices, filling just about every niche you can think of. Some languages are optimized for speed (C, C++), some for use in a particular environment (Lua), some to be highly readable (Python), some for a kind of purity (Haskell), and some for sheer perversity (Perl). Whatever you want to do, there’s a programming language out there for you.

But there is one language that underlies all the others. If you’ll pardon the cliché, there is one language to rule them all. And that’s assembly language. It’s the original, in a sense, as it existed even before the first compilers. It’s the native language of a computer, too. Like native languages, though, each computer has its own. So we can’t really talk about “assembly language” in the general sense, except to make a few statements. Rather, we have to talk about a specific kind of assembly, like x86, 6502, and so on. (We can also talk about “intermediate languages” as a form of assembly, like .NET’s IL or the LLVM instruction set. Here, I’ll mostly be focusing on processor-specific assembly.)

The reason for assembly

Of course, assembly is the lowest of low-level languages, at least of those a programmer can access. And that is the first hurdle, especially in these days of ever-increasing abstraction. When you have Python and JavaScript and Ruby, why would you ever want to do anything with assembly?

The main, overriding purpose of assembly today is speed. There’s literally nothing faster. Nothing can possibly be faster, because everything basically gets turned into assembly, anyway. Yes, compilers are good at optimizing. They’re great. On modern architectures, they’re almost always better than writing assembly by hand. But sometimes they aren’t. New processors have new features that older compiler versions might not know about, for example. And high-level languages, with their object systems and exceptions and lambdas, can get awfully slow. In a few cases, even the relatively tiny overhead of C might be too much.

So, for raw speed, you might need to throw out the trappings of the high level and drop down to the low. But size is also a factor, and for many of the same reasons. Twenty or thirty years ago, everybody worried about the size of a program. Memory was much smaller, hard drives less spacious (or absent altogether, in the earlier days), and networking horrendously slow or nonexistent. Size mattered.

At some point, size stopped mattering. Programs could be distributed on CDs, then DVDs, and those were both seen (in their own times) as near-infinite in capacity. And hard drives and installed memory were always growing. From about the mid 90s to the late 2000s, size was unimportant. (In a way, we’re starting to see that attitude again: look at PC games that now take tens of gigabytes of storage, including uncompressed audio and video, partly as an “anti-piracy” measure.)

Then, just as suddenly, size became a big factor once again. The tipping point seems to be sometime around 2008 or so. While hard drives, network speeds, and program footprints kept on increasing, we started becoming more aware of the cost of size. That’s because of cache. Cache, if you don’t know, is nothing more than a very fast bit of memory tucked away inside your computer’s CPU. It’s way faster than regular RAM, but it’s much more limited. (That’s relative, of course. Today’s processors actually have more cache memory than my first PC had total RAM.) To get the most out of cache—to prevent the slowdowns that come from needing to fetch data from main memory—we do need to look at size. And there’s nothing smaller than assembly.

Finally, there are other reasons to study at least the basics of assembly language. It’s fun, in a bizarre sort of way. (So fun that there’s a game for it, which is just as bizarre.) It’s informative, in that you get an understanding of computers at the hardware level. And there are still a few places where it’s useful, like microcontrollers (such as the Arduino) and the code generators of compilers.

To be continued

If you made it this far, then you might even be a little interested. That’s great! Next week, we’ll look at assembly language in some of its different guises throughout history.

Writing inertia

It’s a well-known maxim that an object at rest tends to stay at rest, while an object in motion tends to stay in motion. This is such an important concept that it has its own name: inertia. But we usually think of it as a scientific idea. Objects have inertia, and they require outside forces to act on them if they are to start or stop moving.

Inertia, though, in a metaphorical sense, isn’t restricted to physical science. People have a kind of inertia, too. It takes an effort to get out of bed in the morning; for some people, that takes a lot more effort than for others. Athletic types have a hard time relaxing, especially after they’ve passed the apex of their athleticism, while those of us who are more…sedentary have a hard time improving ourselves, simply because it’s so much work.

Writers also have inertia. I know this from personal experience. It takes a big impetus to get me to start writing, whether a post like this, a short story, a novel, or some bit of software. But once I get going, I don’t want to stop. In a sense, it’s like writer’s block, but there’s a bit more to it.

Especially when writing a new piece of fiction (as opposed to a continuation of something I’ve already written), I’ve found it really hard to begin. Once I have the first few paragraphs, the first lines of dialogue, and the barest of setting and plot written down (or typed up), it feels like a dam bursting. The floodgates open, and I can just keep going until I get tired. It’s the same for posts like this. (“Let’s make a language” and the programming-related posts are a lot harder.)

At the start of a new story, I don’t think too much. The hardest part is the opening line, because that requires the most motivation. After that, it’s names. But the text itself, once I get over the first hurdles, seems to flow naturally. Sometimes it’s a trickle, others it’s a torrent, but it’s always there.

In a couple of months, I’ll once again take on the NaNoWriMo (National Novel Writing Month) challenge. Admittedly, I don’t keep to the letter of the rules, but I do keep the original spirit: write a novel of 50,000 words in the month of November. For me, that’s the important aspect. It doesn’t matter that it might be an idea I already had but never started because, as I said, writing inertia means it’s difficult for me to get over that hump and start the story. The timed challenge of NaNoWriMo is the impetus, the force that motivates me.

And I like that outside motivation. It’s why I’ve been “successful”, by my own definition, three out of the four times I’ve tried. In 2010, my first try, I gave up after 10 days and about 8,000 words. Real life interfered in 2011; my grandfather had a stroke on the 3rd of November, and nobody in my extended family got much done that month. Since then, though, I’m essentially 3-for-3: 50,000 words in 2012 (although that was only about a fifth of the whole novel); a complete story at 49,000 words in 2013 (I didn’t feel the need to pad it out); and 50,000 last year (that one’s actually getting released soon, if I have my way). Hopefully, I can make it four in a row.

So that’s really the idea of this post. Inertia is real, writing inertia doubly so. If you’re feeling it, and November seems too far away, find another way. There are a few sites out there with writing prompts, and you can always find a challenge to help focus you on your task. Whatever you do, it’s worth it to start writing. And once you start, you’ll keep going until you have to stop.

Irregularity in language

No natural language in the world is completely and totally regular. We think of English as an extreme of irregularity, and it really is, but all languages have at least some part of their grammar where things don’t always go as planned. And there’s nothing wrong with that. That’s a natural part of a language’s evolution.

Conlangs, on the other hand, are often far too regular. For an auxlang, intended for clear communication, that’s actually a good thing. There, you want regularity, predictability. You want the “clockwork morphology” of Esperanto or Lojban. The problem comes with the artistic conlangs. These, especially those made by novices, can be too predictable. It’s not exactly a big deal—every plural ending in -i isn’t going to break the immersion of a story for the vast majority of people—but it’s a little wart that you might want to do away with.

Count the ways

Irregularity comes in a few different varieties. Mostly, though, they’re all the same: a place where the normal rules of grammar don’t quite work. English is full of these, as everyone knows. Plurals are marked by -s, except when they’re not: geese, oxen, deer, people. Past tense is -ed, except that it sometimes isn’t: go and went. (“Strong” verbs like “get” that change vowels don’t really count, because they are regular, but in their own way.) And let’s not even get started on English orthography.

Some other languages aren’t much better. French has a spelling system that matches its pronunciation in theory only, and Irish looks like a keyboard malfunction. Inflectional grammars are full of oddities; ask any Latin student. Arabic’s broken plurals are just that: broken. Chinese tone patterns change in complex and unpredictable ways, despite tone supposedly being an integral part of a morpheme.

On the other hand, there are a few languages out there that seem to strive for regularity. Turkish is always cited as an example here, the joke being that there’s one irregular verb, and it’s only there so that students will know what to expect when they study other languages.

Conlangs are a sharp contrast. Esperanto’s plurals are always -j. There’s no small class of words marked by -m or anything like that. Again, for the purposes of clarity, that’s a good thing. But it’s not natural.

Phonological irregularity

Irregularity in a language’s phonology happens for a few different reasons. However, because phonology is so central to the character of a language, it can be hard to spot. Here are a few places where it can show up:

  • Borrowing: Especially as English (American English in particular) suffuses every corner of the planet, languages can pick up new words and bring new sounds with them. This did happen in English’s history, as it brought the /ʒ/ sound (“pleasure”, etc.) from French, but a more extreme example is the number of Bantu languages that borrowed click sounds from their Khoisan neighbors.

  • Onomatopoeia: The sounds of nature can be emulated by speech, but there’s not always a perfect correspondence between the two. The “meow” of a cat, for instance, contains a sequence of sounds rare in the rest of English.

  • Register: Slang and colloquialism can create phonological irregularities, although this isn’t all that common. English has “yeah” and “nah”, both with a final /æ/, which appears in no other word.

Grammatical irregularity

This is what most people think of when they consider irregularity in a language. Examples include:

  • Irregular marking: We’ve already seen examples of English plurals and past tense. Pretty much every other natural language has something else to throw in here.

  • Gender differences: I’m not just talking about the weirdness of having the word for “girl” in the neuter gender. The Romance languages also have a curious oddity where some masculine-looking words take a feminine article, as in Spanish la mano.

  • Number differences: This includes all those English words where the plural is the same as the singular, like deer and fish, as well as plural-only nouns like scissors.

  • Borrowing: Loanwords can bring their own grammar with them. What’s the plural of manga or even rendezvous?

Lexical irregularity

Sometimes words just don’t fit. Look at the English verb to be. In the present, it’s is or are; in the past, was or were; and so on. Totally unpredictable. This can happen in any language, and one way it arises is through drift in a word’s meaning.

  • Substitution: One word form can be swapped out for another. This is the case with to be and its varied forms.

  • Meaning changes: Most common in slang, like using “bad” to mean “good”.

  • Useless affixes: “Inflammable means flammable?” The same thing is happening today, as “irregardless” becomes more widespread.

  • Archaisms: Old forms can be kept around in fixed phrases. In English, this is most commonly the case with the Bible and Shakespeare, but “to and fro” is still around, too.

Orthographic irregularity

There are spelling bees for English. How many other languages can say that? How many would want to? As a language evolves, its orthography doesn’t necessarily follow, especially in languages where the standard spelling was fixed long ago. Here are a few ways that spelling can drift from pronunciation:

  • Silent letters: English is full of these, French more so. And then there are all those extra silent letters added to make words look more like Latin. Case in point, debt didn’t always have the b; it was added to remind people of debitus. (Silent letters can even be dialectal in nature. I pronounce wh and w differently, but few other Americans do.)

  • Missing letters: Nowhere in English can you have dg followed by a consonant except in the American spelling of words like judgment, where the e that would soften the g is implied. (I lost a spelling bee on this very word, in fact, but that was a long time ago.)

  • Sound changes: These can come from evolution or what seems like sheer perversity. (English gh is a case of the latter, I think.)

  • Borrowing: As phonological understanding has grown, we’ve adopted a kind of “standard” orthography for loanwords, roughly equivalent to Latin, Spanish, or Italian. Problem is, this is nothing at all like the standard orthography already present in English. And don’t even get me started on the attempts at rendering Arabic words into English letters.

In closing

All this is not to say that you should run off and add hundreds of irregular forms to your conlang. Again, if it’s an auxlang, you don’t want that. Even conlangs made for a story should use irregular words only sparingly. But artistic conlangs can gain a lot of flavor and “realism” from having a weird word here and there. It makes things harder to learn, obviously, but it’s the natural thing to do.

Transparent terrain with Tiled

Tiled is a great application for game developers. One of its niftiest features is the Terrain tool, which makes it pretty easy to draw a tilemap that looks good with minimal effort.

Unfortunately, the Terrain tool does have its limitations. One of those is a big one: it doesn’t work across layers. Layers are essential for any drawing but the simplest MS Paint sketches, and it’s a shame that such a valuable development tool can’t use them to their fullest potential.

Well, here’s a quick and dirty way to work around that inability in a specific case that I ran into recently.

The problem

A lot of the “indie” tile sets out there use transparency (or a color key, which has the same effect) to make nice-looking borders. The one I’m using here, Kenney’s excellent Roguelike/RPG pack, is one such set.

The problem comes when you want to use it in Tiled. Because of the transparency, you get an effect like this:

Transparent terrain

Normally, you’d just use layers to work around this, maybe by making separate “grass” and “road” layers. If you’re using the Terrain tool, though, you can’t do this. The tool relies on “transitions” between tile types. Drawing on a new layer means you’re starting with a blank slate. And that means no transitions.

The solution

The solution is simple, and it’s pretty much what you’d expect. In a normal tilemap, you might have the following layers (from the bottom up):

  1. The bare ground (grass, sand, water, whatever),
  2. Roads, paths, and other terrain modifications,
  3. Buildings, trees, and other placeable objects.

My solution to the Terrain tool’s limitation is to draw all the “terrain” effects on a single layer. Below that layer would be a “base”, which only contains the ground tiles needed to fill in the gaps. So our list would look more like this:

  1. Base (only needs to be filled in under tiles with transparency),
  2. Terrain, including roads and other mods,
  3. Placeable objects, as before.

For our road on grassland above, we can use the Terrain tool just as described in the official tutorial. After we’re done, we can create a new layer underneath that one. On it, we would draw the base grass tiles where we have the transparent gaps on our road. (Of course, we can just bucket fill the whole thing, too. That’s quicker, but this way is more flexible.) The end result? Something like this:

Filling in the gaps

It’s a little more work, but it ends up being worth it. And you were going to have to do it anyway.

Death and remembrance

Early in the morning of August 16 (the day I’m writing this), my stepdad’s mother passed away after a lengthy and increasingly tiresome battle with Alzheimer’s. This post isn’t a eulogy; for various reasons, I don’t feel like I’m the right person for such a job. Instead, I’m using it as a learning experience, as I have the past few years during her slow decline. So this post is about death, a morbid topic in any event. It’s not about the simple fact of death, however, but how a culture perceives that fact.

Weight of history

Burial ceremonies are some of the oldest evidence of true culture and civilization that we have. The idea of burying the dead with mementos even extends across species boundaries: Neanderthal remains have been found with tools. And the dead, our dead, are numerous, as the rising terrain levels in parts of Europe (caused by increasing numbers of burials throughout the ages) can attest. Death’s traditions are evident from the mummies of Egypt and Peru, the mausoleums of medieval Europe or the classical world, and the Terracotta Army of China. All societies have death, and they all must confront it, so let’s see how they do it.

The role of religion

Religion, in a very real sense, is ultimately an attempt to make sense of death’s finality. The most ancient religious practices we know deal with two main topics: the creation of the world, and the existence and form of an afterlife. Every faith has its own way of answering those two core mysteries. Once you wade through all the commandments and prohibitions and stories and revelations, that’s really all you’re left with.

One of the oldest and most enduring ideas is the return to the earth. This one is common in “pagan” beliefs, but it’s also a central concept in the Abrahamic religions of the modern West. “Ashes to ashes, dust to dust,” is one popular variation of the statement. And it fits the biological “circle of life”, too. The body of the deceased does return to the earth (whether in whole or as ashes), and that provides sustenance, allowing new life to bloom.

More organized religion, though, needs more, and that is where we get into the murky waters of the soul. What that is, nobody truly knows, and that’s not even a metaphor: the notion of “soul” is different for different peoples. Is it the essence of humanity that separates us from lower animals? Is it intelligence and self-awareness? A spark of the divine?

In truth, it doesn’t really matter. Once religion offers the idea of a soul that is separate from the body, it must then explain what happens to that soul once the body can no longer support it. Thousands of years worth of theologians have argued that point, up to—and including—starting wars in the name of their own interpretation. The reason they can do that is simple: all the ideas are variations on the same basic theme.

That basic theme is this: people die. That much can’t be argued. What happens next is the realm of God or gods, but it usually follows a general pattern. Souls are judged based on some subset of their actions in life, such as good deeds versus bad, adherence to custom or precept, or general faithfulness. Their form of afterlife then depends on the outcome. “Good” souls (whatever that is decided to mean) are rewarded in some way, while “bad” souls are condemned. The harsher faiths make this condemnation last forever, but it’s most often (and more justly, in my opinion) for a period of time proportional to the evils committed in life.

The reward, in general, is a second, usually eternal, life spent in a utopia, however that is defined by the religion in question. Christianity, for example, really only specifies that souls in heaven are in the presence of God, but popular thought has transformed that to the life of delights among the clouds that we see portrayed in media; early Church thought instead favored an earthly heaven. Islam, popularly, has the “72 eternal virgins” presented to the faithful in heaven. In Norse mythology, valiant souls are allowed to dine with the gods and heroes in Valhalla, but they must then fight the final battle, Ragnarök (which they are destined to lose, strangely enough). In even these three disparate cases, you can see the similarities: the good receive an idyllic life, something they could only dream of in the confines of their body.

Ceremonies of death

Religion, then, tells us what happens to the soul, but there is still the matter of the body. It must be disposed of, and even early cultures understood this. But how do we dispose of something that was once human while retaining the dignity of the person who once inhabited it?

Ceremonial burial is the oldest trick in the book, so to speak. It’s one of the markers of intelligence and organization in the archaeological record, and it dates back to long before our idea of civilization. And it’s still practiced on a wide scale today; my stepdad’s mother, the ultimate cause of this post, will be buried in the coming days.

Burial takes different forms for different peoples, but it’s always a ceremony. The dead are often buried with some of their possessions, and this may be the result of some primal belief that they’ll need them in the hereafter. We don’t know for sure about the rites and rituals of ancient cultures, but we can easily imagine that they were not much different from our own. We in the modern world say a few words, remember the deeds of the deceased, lower the body into the ground, leave a marker, and promise to come back soon. Some people have more elaborate shrines, others have only a bare stone inscribed with their name. Some families plant flowers or leave baubles (my cousin, who passed away at the beginning of last year, has a large and frankly gaudy array of such things adorning his grave, including solar-powered lights, wind chimes, and pictures).

Anywhere the dead are buried, it’s pretty much the same. They’re placed in the ground in a special, reserved place (a cemetery). The graves are marked, both for ease of remembrance and as a helpful reminder of where not to bury another. The body is left in some enclosure to protect it from prying eyes, and keepsakes are typically beside it.

Burial isn’t the only option, though, not even in the modern world. Cremation, where the body is burned and rendered into ash, is still popular. (A local scandal some years ago involved a crematorium whose owner was, in fact, dumping the bodies in a pond behind the place and filling the urns with things like cement or ground bones.) Today, cremation is seen as an alternative to burial, but some cultures did (and do) see it or something similar as the primary method of disposing of a person’s earthly remains. The Viking pyre is fixed in our imagination, and television sitcoms almost always have a dead relative’s ashes sitting somewhere vulnerable.

I’ll admit that I don’t see the purpose of cremation. If you believe in the resurrection of souls into their reformed earthly bodies, as in some varieties of Christianity and Judaism, then you’d have to view the idea of burning the body to ash as something akin to blasphemy. On the other hand, I can see the allure. The key component of a cremation is fire, and fire is the ultimate in human tools. The story of human civilization, in a very real sense, is the story of how we have tamed fire. So it’s easy to see how powerful a statement cremation or a funeral pyre can make.

Burying and burning were the two main ways of disposing of remains for the vast majority of humanity’s history. Nowadays, we have a few other options: donating to science, dissection for organs, cryogenic freezing, etc. Notice, though, that these all have a “technological” connotation. Cryogenics is the realm of sci-fi; organ donation is modern medicine. There’s still a ceremony, but the final result is much different.

Closing thoughts

Death in a culture brings together a lot of things: religion, ritual, the idea of family. Even the legal system gets involved these days, because of things like life insurance, death certificates, and the like. It’s more than just the end of life, and there’s a reason why the most powerful, most immersive stories are often those that deal with death in a realistic way. People mourn, they weep, they celebrate the life and times of the deceased.

We have funerals and wakes and obituaries because no man is an island. Everyone is connected, everyone has family and friends. The living are affected by death, and far more than the deceased. We’re the ones who feel it, who have to carry on, and the elaborate ceremonies of death are our oldest, most human way of coping.

We honor the fallen because we knew them in life, and we hope to know them again in an afterlife, whatever form that may take. But, curiously, death has a dichotomy. Religion clashes with ancient tradition, and the two have become nearly inseparable. A couple of days from now, my stepdad might be sitting in the local funeral home’s chapel, listening to a service for his mother that invokes Christ and resurrection and other theology, but he’ll be looking at a casket that is filled with tiny treasures, a way of honoring the dead that has continued, unbroken, for tens of thousands of years. And that is the truth of culture.

Let’s make a language – Part 4c: Nouns (Ardari)

For nouns in Ardari, we can afford to be a little more daring. As we’ve decided, Ardari is an agglutinative language with fusional (or inflectional) aspects, and now we’ll get to see a bit of what that entails.

Three types of nouns

Ardari has three genders of nouns: masculine, feminine, and neuter. Like languages such as Spanish or German, these don’t necessarily correspond to the notions of “male”, “female”, and “everything else”. Instead, they’re a little bit arbitrary, but we won’t make the same mistakes as natural languages when it comes to assigning nouns to genders. (Actually, we will make the same mistakes, but on purpose, not through the vagaries of linguistic evolution.)

Each noun is inflected not only for gender, but also for number and case. Number can be either singular or plural, just like with Isian. As for case, well, we have five of them:

  • Nominative, used mostly for subjects of sentences,
  • Accusative, used mainly for the direct objects,
  • Dative, occasionally seen for indirect objects, but mostly used for the Ardari equivalent of prepositional phrases,
  • Genitive, indicating possession, composition, and most places where English uses “of”,
  • Vocative, only used when addressing someone; as a result, it only makes sense with names and certain nouns.

So we have three genders, two numbers, and five cases. Multiply those together, and you get 30 possibilities for declension. (If you took Latin in school, that word might have made you shudder. Sorry.) It’s not quite that bad, since some of these will overlap, but it’s still a lot to take in. That’s the difficulty—and the beauty, for some—of fusional languages.
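If you’d like a sanity check on that count, the combinations are easy to enumerate (the labels below are just for illustration):

```python
from itertools import product

genders = ["masculine", "feminine", "neuter"]
numbers = ["singular", "plural"]
cases = ["nominative", "accusative", "dative", "genitive", "vocative"]

# Every (gender, number, case) triple is one slot in the declension paradigm.
forms = list(product(genders, numbers, cases))
print(len(forms))  # 30
```

Each of those 30 triples is one slot in a declension table, though, as we’ll see, several slots end up sharing a form.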


Masculine nouns in Ardari all have stems that end in -a. One example is kona “man”, and this table shows its declensions:

kona         Singular   Plural
Nominative   kona       kono
Accusative   konan      konon
Genitive     kone       konoj
Dative       konak      konon
Vocative     konaj      konaj

Roughly speaking, you can translate kono as “men”, kone as “of a man”, etc. We run into a bit of a problem with konon, since it could be either accusative or dative. That’s okay; things like this happen often in fusional languages. We’ll say it was caused by sound changes. We just have to remember that translating will need a bit more context.

Also, many of these declensions will change the stress of a word to the final syllable, following our phonological rules from Part 1.


Feminine noun stems end in -i, and they have these declensions (using chi “sun” as our example):

chi          Singular   Plural
Nominative   chi        chir
Accusative   chis       chell
Genitive     chini      chisèn
Dative       chise      chiti
Vocative     chi        chi

The same translation guides apply here, except we don’t have the problem of “syncretism”, where two cases share the same form.


Neuter nouns have stems that can end in any consonant. Using the example of tyèk “house”, we have:

tyèk         Singular   Plural
Nominative   tyèk       tyèkar
Accusative   tyèke      tyèkòn
Genitive     tyèkin     tyèkoj
Dative       tyèkèt     tyèkoda
Vocative     tyèkaj     tyèkaj

A couple of these (genitive plural, vocative) are recycled from the masculine table. Again, that’s fairly common in languages of this type, so I added it for naturalism.


Unlike Isian, Ardari doesn’t use separate words for its articles. Instead, it has a “definiteness” marker that can be added to the end of a noun. It changes form based on the gender and number of the noun you’re attaching it to, coming in one of a few forms:

  • -tö is the general singular marker, used on all three genders in all cases except the neuter dative.
  • -dys is used on masculine and most neuter plurals (except, again, the dative).
  • -tös is for feminine plurals.
  • Neuter nouns in the dative use -ö for the singular and -s for the plural.

The neuter dative is weird, partly because of a phonological process called “haplology”, where consecutive sounds or syllables that are very close in sound merge into one. Take our example above of tyèk. You’d expect the datives to be tyèkèttö and tyèkodadys. For the singular, the case marker already ends in -t, so it’s just a matter of dropping that sound from the “article” suffix. The plural would have two syllables da and dys next to each other. Speakers of languages are lazy, so they’d likely combine those into something a bit less time-consuming; thus we have tyèkodas “to the houses”.
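The marker rules above are regular enough to sketch as a small function. This is just my own illustration (the function name is mine, and the bare -ö for the neuter dative singular is my reading of the haplology just described, not a form stated outright):

```python
def add_definite(noun, gender, number, case):
    """Attach the Ardari definiteness suffix to an already-declined noun."""
    if gender == "neuter" and case == "dative":
        # Haplology: the expected -tö / -dys forms simplify.
        return noun + ("ö" if number == "singular" else "s")
    if number == "singular":
        return noun + "tö"          # general singular marker, all genders
    if gender == "feminine":
        return noun + "tös"         # feminine plurals
    return noun + "dys"             # masculine and neuter plurals

print(add_definite("tyèkèt", "neuter", "singular", "dative"))  # tyèkètö
print(add_definite("tyèkoda", "neuter", "plural", "dative"))   # tyèkodas
```

The nouns passed in are the declined forms from the tables above; the function only handles the “article” suffix.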

New words

Even though I didn’t actually introduce any new vocabulary in this post, here’s the same word list from last week’s Isian post, now with Ardari equivalents. Two words are a little different. “Child” appears in three gendered forms (masculine, feminine, and a neuter version for “unknown” or “unimportant”). “Friend”, on the other hand, is a simple substitution of stem vowels for masculine or feminine, but you have to pick one, although a word like ast (a “neutered” formation) might be common in some dialects of spoken Ardari.

  • sword: èngla
  • cup: kykad
  • mother: emi
  • father: aba
  • woman: näli
  • child: pwa (boy) / gli (girl) / sèd (any or unknown)
  • friend: asta (male) / asti (female)
  • head: chäf
  • eye: agya
  • mouth: mim
  • hand: kyur
  • foot: allga
  • cat: avbi
  • flower: afli
  • shirt: tèwar

Fractal rivers with Inkscape

I’m not good with graphics. I’m awful at drawing. Maps, however, are one of the many areas where a non-artist like myself can make up for a lack of skill by using computers. Inkscape is one of those tools that can really help with map-making (along with about a thousand other graphical tasks). It’s free, it works on just about any computer you can imagine, and it’s very much becoming a standard for vector graphics for the 99% of people that can’t afford Adobe products or an art team.

For a map of a nation or world, rivers are an important yet difficult part of the construction process. They weave, meander, and never follow a straight line. They’re annoying, to put it mildly. But Inkscape has a tool that can give us decent-looking rivers with only a small amount of effort. To use it, we must harness the power of fractals.

Fractals in nature

Fractals, as you may know (and if you don’t, a quick search should net you more information than you ever wanted to know), are a mathematical construct, but they’re also incredibly good at modeling nature. Trees follow a fractal pattern, as do coastlines. Rivers aren’t exactly fractal, but they can look like it from a great enough distance, with their networks of tributaries.

The key idea is self-similarity; basically, a fractal is an object that looks pretty much the same no matter how much you zoom in. Trees have large branches, and those have smaller branches, and then those have the little twigs that sometimes branch themselves. Rivers are fed by smaller rivers, which are fed by streams and creeks and springs. The only difference is the scale.

Inkscape fractals

Inkscape’s fractals are a lot simpler than most mathematical versions. The built-in extension, from what I can tell, uses an algorithm called midpoint displacement. Roughly speaking, it does the following:

  • Find the midpoint of a line segment,
  • Move that point in a direction perpendicular to the line segment by a random amount,
  • Create two new segments that run from either endpoint to the new, displaced midpoint,
  • Start over with each of the new line segments.

The algorithm subdivides the segment a number of times. Each new stage has segments that are half the length of the old ones, meaning that, after n subdivisions, you end up with 2^n segments. How much the midpoint can be moved is another parameter, called smoothness. The higher the smoothness, the less the algorithm can move the midpoint, resulting in a smoother subdivision. (In most implementations of this algorithm, the amount of displacement is scaled, so each further stage can move a smaller absolute distance, though still the same relative to the size of the segment.)
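
To make the steps concrete, here’s a rough sketch of midpoint displacement in JavaScript. This is my own illustration, not the actual code of Inkscape’s Fractalize extension; in particular, the exact way the extension scales displacement by “smoothness” may differ from the simple division used here.

```javascript
// Midpoint displacement on a 2D line segment. Points are [x, y] pairs.
// Higher "smoothness" shrinks the maximum displacement, giving a calmer curve.
function fractalize(p1, p2, subdivs, smoothness) {
    if (subdivs === 0) {
        return [p1, p2];
    }
    // Midpoint of the segment.
    var mx = (p1[0] + p2[0]) / 2;
    var my = (p1[1] + p2[1]) / 2;
    // Unit vector perpendicular to the segment.
    var dx = p2[0] - p1[0];
    var dy = p2[1] - p1[1];
    var len = Math.sqrt(dx * dx + dy * dy);
    var px = -dy / len;
    var py = dx / len;
    // Move the midpoint a random amount along the perpendicular,
    // scaled down by the smoothness factor.
    var amount = (Math.random() - 0.5) * len / smoothness;
    var mid = [mx + px * amount, my + py * amount];
    // Recurse on both halves and join them, dropping the duplicated midpoint.
    var left = fractalize(p1, mid, subdivs - 1, smoothness);
    var right = fractalize(mid, p2, subdivs - 1, smoothness);
    return left.concat(right.slice(1));
}

var points = fractalize([0, 0], [100, 0], 4, 8);
console.log(points.length); // 2^4 = 16 segments, so 17 points
```

Note that the endpoints are never touched, which is exactly why Fractalize lets you snap tributaries to a river’s ends and build the network outward.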

The method

  1. First things first, we need to start drawing an outline of the shape of our river. It doesn’t have to be perfect. Besides, this sketch is going to be completely modified. Here, you can see what I’ve started; this was all done with the Line tool (Shift+F6):

    Designing the path

  2. Once you’ve got a rough outline, press Enter to end the path:

    Finishing the outline

  3. If you want to have curved segments, that’s okay, too. The fractal extension works just fine with them. Here, I’ve dragged some nodes and handles around using the path editor (F2):

    Adding some curves

  4. Now it’s time to really shake things up. Make sure your path is selected, and go to Extensions -> Modify Path -> Fractalize:

    Fractalize in the menus

  5. This displays a dialog box with two text inputs and a checkbox. This is the interface to the Fractalize extension. You have the option of changing the number of subdivisions (more subdivisions gives a more detailed path, at the expense of more memory) and the smoothness (as above, a higher smoothness means that each displacement has less room to maneuver, which makes the final result look smoother). “Live preview” shows you the result of the Fractalize algorithm before you commit to it, changing it as you change the parameters. Unless your computer seems to be struggling, there’s no reason not to have it on.

    The Fractalize extension

  6. When you’re happy with the settings, click Apply. Your outlined path will now be replaced by the fractalized result. I set mine to be blue. (Shift+click on the color swatch to set the stroke color.)

    The finished product

And that’s all there is to it! Now, you can go on from here if you like. A proper, natural river is a system, so you’ll want to add the smaller rivers that feed into this one. Inkscape has the option to snap to nodes, which lets you start a path from any point in your river. Since Fractalize keeps the endpoints the same, you can build your river outwards as much as you need.

Exoplanets: an introduction for worldbuilders

With the recent discovery of Kepler-452b, planets beyond our solar system—called extrasolar planets or exoplanets—have come into the news again. This has already happened a few times: the Gliese 581 system in 2007 (and again a couple of years ago); the early discoveries of 51 Pegasi b and 70 Virginis b in the mid 1990s; and Alpha Centauri, our nearest known celestial neighbor, in 2012.

For an author of science fiction, it’s a great time to be alive, reminiscent of the forties and fifties, when the whole solar system was all but unknown and writers were only limited by their imaginations. Planets, we now know, are just about everywhere you look. We haven’t found an identical match to Earth (yet), and there’s still no conclusive evidence of habitation on any of these newfound worlds, but we can say for certain that other planets are out there. So, as we continue the interminable wait for the new planet-hunters like TESS, the James Webb Space Telescope, Gaia, and all those that have yet to leave the drawing board, let’s take a quick look at what we know, and how we got here.

Before it all began: the 1980s

I was born in 1983, so I’ve lived in four different decades now, and I’ve been able to witness the birth and maturity of the study of exoplanets. But younger people, those who have lived their whole lives knowing that other solar systems exist beyond our own, don’t realize how little we actually knew not that long ago.

Thirty years ago, there were nine known planets. (I’ll completely sidestep the Pluto argument in this post.) Obviously, we know Earth quite well. Mars was a frontier, and there was still talk about near-term manned missions to go there. Venus had been uncovered as the pressure cooker that it is. Jupiter was on the radar, but largely unknown. Mercury was the target of flybys, but no orbiter—it was just too hard, too expensive. The Voyager mission gave us our first up-close looks at Saturn and Uranus, and Neptune would join them by the end of the decade.

Every star besides the Sun, though, was a blank slate. Peter van de Kamp claimed he had detected planets around Barnard’s Star in the 1960s, but his results weren’t repeatable. In any case, the instruments of three decades past simply weren’t precise enough or powerful enough to give us data we could trust.

What this meant, though, was that the field was fertile ground for science fiction. Want to put an Earthlike planet around Vega or Arcturus? Nobody could prove it didn’t exist, so nobody could say you were wrong. Solar systems were assumed to be there, if below our detection threshold, and they were assumed to be like ours: terrestrial planets on the inside, gas giants in the outer reaches, with one or more asteroid belts here or there.

The discoveries: the 1990s

As the 80s gave way to the 90s, technology progressed. Computers got faster, instruments better. Telescopes got bigger or got put into space. And this opened the door for a new find: the extrasolar planet. The first one, a huge gas giant (or small brown dwarf, in which case it doesn’t count), was detected in 1989 around the star HD 114762, but it took two years to be confirmed.

And then it gets weird. In 1992, Aleksander Wolszczan and Dale Frail discovered irregularities in the emissions of a pulsar designated PSR B1257+12. There’s not much out there that can mess up a pulsar’s, well, pulsing, but planets could do it, and that is indeed what they found. Two of them, in fact, with a third following a couple of years later, and the innermost is still the smallest exoplanet known. (I hope that will be changed in the not-too-distant future.) Of course, the creation of a pulsar is a wild, crazy, and deadly event, and the pulsar planets brought about a ton of questions, but that need not concern us here. The important point is that they were found, and this was concrete proof that other planets existed beyond our solar system.

Then, in the middle of the decade, the floodgates opened a crack. Planets began to be discovered around stars on the main sequence, stars like our sun. These were all gas giants, most of them far larger than Jupiter, and many of them were in odd orbits, either highly eccentric or much too close to their star. Either way, our solar system clearly wasn’t a model for those.

As these “hot Jupiters” became more and more numerous, the old model had to be updated. Sure, our solar system’s progression of terrestrial, gaseous, and icy (with occasional asteroids thrown in) could still work. Maybe other stars had familiar systems. After all, the hot Jupiters were an artifact of selection bias: the best method we had to detect planets—radial velocity, which relies on the Doppler effect—was most sensitive to large planets orbiting close to a star. But the fact that we had so many of them, with almost no evidence of anything resembling our own, meant that they had to be accounted for in fiction. Thus, the idea of gas giants with habitable moons began to grow in popularity. Again, there was no way to disprove it.

Acceptance: the 2000s

With the turn of the millennium, extrasolar planets—soon to be shortened to the “exoplanet” moniker in popular use today—continued to come in. Advances in technology, along with the longer observation times, brought the “floor” of size further and further down. Jupiter analogues became fairly common, then Saturn-alikes. Soon, Uranus and Neptune had their clones in distant systems.

And Earth 2 was in sight, as the major space agencies had a plan. NASA had a series of three instruments, all space-based, each increasingly larger, that would usher in a new era of planetary research. Kepler would be launched around 2005-2007, and it would give us hard statistics on the population of planets in our galaxy. The Space Interferometry Mission (SIM) would follow a few years later, and it would find the first true Earthlike planets. Later, in the early to mid 2010s, the Terrestrial Planet Finder (TPF) would locate and characterize planets like Earth, showing us their atmospheres and maybe even ocean coverage. In Europe, ESA had a similar path, with CoRoT, Gaia, and Darwin.

And we know how that turned out. Kepler was delayed until 2009, and it stopped working a couple of years ago. SIM was defunded, then canceled. TPF never got out of the planning stages. Across the ocean, CoRoT launched, but it was nowhere near as precise as they thought; it’s given us a steady stream of gas giants, but not much else. Gaia is currently working, but also at a reduced capacity. Darwin met the same sad fate as TPF.

But after all that doom and gloom had passed, something incredible happened. The smallest of the new discoveries were smaller than Neptune, but still larger than Earth. That gap in mass (a factor of about 17) is an area with no known representatives in our solar system. Logically, this new category of planet quickly got the name “super-Earth”. And some of these super-Earths turned up in interesting places: Gliese 581 c was possibly within its star’s habitable zone, as was its sister planet, Gliese 581 d. Sure, Gliese 581 itself was a red dwarf, and “c” has a year that lasts less than one of our months, but it was a rocky planet in an orbit where liquid water was possible. And that’s huge.

By the end of 2009, super-Earths were starting to come into their own, and Kepler finally launched, promising to give us even more of them. Hot Jupiters suddenly became oddballs again. And science fiction adapted. Now there were inhabited red dwarf planets, some five to ten times Earth’s mass, with double the gravity. New theories gave rise to imagined “carbon planets”— bigger, warmer versions of Titan, with lakes of oil and mountains of diamond—or “ocean worlds” of superheated water, atmospheric hydrogen and helium, and the occasional bit of rocky land.

Worldbuilding became an art of imagining something as different from the known as possible, as all evidence now pointed to Earth, and indeed the whole solar system, as being an outlier. For starters, the Sun is a yellow dwarf, a curious part of the main sequence: just long-lived enough for planets to form and life to evolve, yet rare enough that they probably shouldn’t. Red dwarfs, by contrast, are everywhere, they live effectively forever, and we know a lot of them have planets.

Here and now: the 2010s

Through the first half of this decade, that’s pretty much the status quo. Super-Earths seem to be ubiquitous, “gas dwarfs” like Neptune numerous, and hot Jupiters comparatively rare. There’s still a lot of Kepler data to sift through, however.

But now we’ve almost come full circle. At the start of my lifetime, planets could be anything. They could be anywhere. And planetary systems probably looked a lot like ours.

Then, we started finding them, and that began to constrain our vision. The solar system was now rare, statistically improbable or even impossible. Super-Earths, though, were ascendant, and they offered a new inspiration.

And, finally, we come to Kepler-452b. It’s still a super-Earth. There’s no doubt about that, as even the smallest estimate puts it at 1.6 times the size of Earth. But it’s orbiting a star like ours, in a spot like ours, and it joins a very select group by doing that. In the coming years, that group should expand, hopefully by leaps and bounds. But it’s what 452b tells us that’s important: Earthlike planets are out there, in Earthlike orbits around Sunlike stars.

For worldbuilders, that means we can go back to the good old days. We can make our fictional worlds match our own, and nobody can tell us that they’re unlikely to occur. Thirty years ago, we could write whatever we wanted because there was no way to disprove it. Now, we can write what we want because it just might be proven.

What a time to build a world.

Let’s make a language – Part 4b: Nouns (Isian)

Keeping in our pattern of making Isian a fairly simple language, there’s not going to be a lot here about the conlang’s simple nouns. Of course, when we start constructing longer phrases (with adjectives and the like), things will get a little hairier.

Noun roots

Isian nouns can look like just about anything. They don’t have a set form, much like their English counterparts. But we can divide them into two broad classes based on the last letter of their root morphemes: vowel-stems and consonant-stems. There’s no difference in meaning between the two, and they really only differ in how plural forms are constructed, as we shall see.

Case

For all intents and purposes, Isian nouns don’t mark case. We’ll get to pronouns in a later post, and they will have different case forms (again, similar to English), but the basic nouns themselves don’t change when they take different roles in a sentence.

The plural (with added gender)

The plural is where most of Isian’s noun morphology comes in. For consonant-stems, it’s pretty simple: the plural is always -i. From last week, we have the nouns sam “man” and talar “house”. The plurals, then, are sami “men” and talari “houses”. Not much else to it.

For vowel-stems, I’ve added a little complexity and “naturalism”. We have three different choices for a plural suffix. (This shouldn’t be too strange for English speakers, as we’ve got “-s”, “-es”, and oddities like “-en” in “oxen”.) So the possibilities are:

  • -t: This will be the most common marker. If all else fails, we’ll use it. An example might be seca “sword”; plural secat.

  • -s: For vowel-stems whose last consonant is a t or d, the plural becomes -s. (We’ll say it’s from some historical sound change.) Example: deta “cup”; plural detas.

  • -r: This one is almost totally irregular. Mostly, it’ll be on “feminine” nouns; we’ll justify this by saying it’s the remnant of a proper gender distinction in Ancient Isian. An example: mati “mother”; matir “mothers”.

As we go along, I’ll point out any nouns that deviate from the usual -i or -t.
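
Just to make the rules mechanical, here’s a toy pluralizer in JavaScript. This is my own sketch, not part of the conlang itself: I’m guessing that y counts as a vowel for stem purposes, and the irregular -r nouns simply live in an exception list.

```javascript
// A toy implementation of the Isian plural rules described above.
// Assumptions (mine, not the post's): y is a vowel, digraphs are ignored,
// and irregular "feminine" -r plurals are listed explicitly.
var R_PLURALS = ['mati'];       // remnants of the Ancient Isian gender system
var VOWELS = 'aeiouy';

function pluralize(noun) {
    if (R_PLURALS.indexOf(noun) !== -1) {
        return noun + 'r';      // irregular class: mati -> matir
    }
    if (VOWELS.indexOf(noun[noun.length - 1]) === -1) {
        return noun + 'i';      // consonant-stems always take -i
    }
    // Vowel-stems: -s if the last consonant is t or d, otherwise -t.
    var lastConsonant = noun.replace(/[aeiouy]/g, '').slice(-1);
    return (lastConsonant === 't' || lastConsonant === 'd')
        ? noun + 's'
        : noun + 't';
}

console.log(pluralize('talar')); // "talari"
console.log(pluralize('seca'));  // "secat"
console.log(pluralize('deta'));  // "detas"
console.log(pluralize('mati'));  // "matir"
```

Anything that deviates from these patterns would just be another entry in the exception list.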

Articles

Like English, Isian has an indefinite article, similar to “a/an”, that appears before a noun. Unlike the one in English, Isian’s is always the same: ta. It’s never stressed, so the vowel isn’t really distinct; it would sound more like “tuh”.

We can use the indefinite when we’re talking about one or more of a noun, but not any specific instances: ta sam “a man”; ta hut “some dogs”. (Note that we can also use it with plurals, which is something “a/an” can’t do.)

The counterpart is the definite article, like English the. Isian has not one but two of these, a singular and a plural. The singular form is e, and the plural is es; both are always stressed.

These are used when we’re talking about specific, identifiable nouns: e sam “the man”; es sami “the men”.

More words

That’s all there really is to it, at least as far as the basic noun structure goes. Sure, it’ll get a lot more complicated once we throw in adjectives and relative clauses and such, but we’ve got a good start here. So, here are a few more nouns, all of which follow the rules set out in this post:

  • madi “mother” (pl. madir)
  • pado “father” (pl. pados)
  • shes “woman”
  • tay “child” (pl. tays)
  • chaley “friend”
  • gol “head”
  • bis “eye”
  • ula “mouth”
  • fesh “hand”
  • pusca “foot”
  • her “cat”
  • atul “flower”
  • seca “sword”
  • deta “cup” (pl. detas)
  • jeda “shirt” (pl. jedas)

ES6 iterators and generators

With ES6, JavaScript now has much better support for iteration. Before, all we had to work with was the usual for loop, either as a C-style loop or the property-based for...in variant. Then we got the functional programming tools for arrays: forEach, map, reduce, and so on. Now we have even more options that can save us from needing an error-prone C-style loop or the cumbersome for...in.

The new loop

ES6 adds a new subtype of for loop: for...of. At first glance, it looks almost exactly the same as for...in, but it has one very important difference: for...in works on property names, while for...of loops over property values. Prettified JavaScript variants (like CoffeeScript) have had this for years, but now it comes to the base language, and we get to do things like this:

var vowels = ['a','e','i','o','u','y'];

for (var v of vowels) {
    if (v != 'y') {
        console.log(v);
    } else {
        if (Math.random() < 0.5) {
            console.log("and sometimes " + v);
        }
    }
}
Most developers will, at first, use for...of to iterate through arrays, and it excels at that. Just giving the value in each iteration, instead of the index, will save the sanity of thousands of JavaScript programmers. And it’s a good substitute for Array.forEach() for those of you that don’t like the FP style of coding.

But for...of isn’t just for arrays. It works on other objects, too. For strings, it gives you each character (properly supporting Unicode, thanks to other ES6 updates), and the new Map and Set objects work in the way you’d expect (i.e., each entry, in insertion order). Even better, you can write your own classes to support the new loop, because it can work with anything that uses the new iterable protocol.
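
For example (assuming an ES6 environment with Map and Set available; the names here are just placeholders):

```javascript
// Over a string, for...of yields each character.
for (var ch of "hi") {
    console.log(ch);            // "h", then "i"
}

// Over a Map, each iteration yields a [key, value] pair.
var ages = new Map([['Alice', 30], ['Bob', 25]]);
for (var entry of ages) {
    console.log(entry[0] + ' is ' + entry[1]);
}

// Over a Set, each unique value; duplicates are dropped.
var letters = new Set(['a', 'b', 'a']);
for (var l of letters) {
    console.log(l);             // "a", then "b"
}
```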

The iterable protocol

Protocols are technically a new addition to ES6. They lurked behind the scenes in ES5, but they were out of sight of programmers. Now, though, they’re front and center, and the iterable protocol is one such example.

If you’ve ever written code in Java, C#, C++, Python, or even TypeScript, then you already have a good idea of what a protocol entails. It’s an interface. An object conforms to a protocol if it (or some other object up its prototype chain) properly implements the protocol’s methods. That’s all there is to it.

The iterable protocol is almost too easy. For a custom iterable object, all you need to do is implement a method called @@iterator that returns an object that meets the iterator protocol.

Okay, I know you’re thinking, “How do I make a @@iterator method? I can’t use at-signs in names!” And you’re right. You can’t, and they don’t even want you to try. @@iterator is a special method name that basically means “a symbol with the name iterator”.

So now we need to know what a symbol is. In ES6, it’s a new data type that we can use as a property identifier. There’s a lot of info about creating your own symbols out there, but we don’t actually need that for the iterable protocol. Instead, we can use a special symbol that comes built-in: Symbol.iterator. We can use it like this:

var myIterable = {
    [Symbol.iterator]: function() {
        // return an iterator object
    }
};

The square brackets mean we’re using a symbol as the name of the property, and Symbol.iterator internally converts to @@iterator, which is exactly what we need.

The iterator protocol

That gets us halfway to a proper iterable, but now we need to create an object that conforms to the iterator protocol. That’s not that hard. The protocol only requires one method, next(), which must be callable without arguments. It returns another object that has two properties:

  • value: Whatever value the iterator wants to return. This can be a string, number, object, or anything you like. Internally, String returns each character, Array each successive value, and so on.

  • done: A boolean that states whether the iterator has reached the end of its sequence. If it’s true, then value becomes the return value of the whole iterator. Setting it to false is saying that you can keep getting more out of the iterator.

So, by implementing a single method, we can make any kind of sequence, like this:

var evens = function(limit) {
    return {
        [Symbol.iterator]: function() {
            var nextValue = 0;
            return {
                next: function() {
                    nextValue += 2;
                    return { done: nextValue > limit, value: nextValue };
                }
            };
        }
    };
};

for (var e of evens(20)) {
    console.log(e);
} // prints 2, 4, 6..., each on its own line

This is a toy example, but it shows the general layout of an iterable. It’s a great idea, and it’s very reminiscent of Python’s iteration support, but it’s not without its flaws. Mainly, just look at it. We have to go three objects deep to actually get to a return value. With ES6’s shorthand object literals, that’s a bit simplified, but it’s still unnecessary clutter.

Generators

Enter the generator. Another new addition to ES6, generators are special functions that give us most of the power of iterators with much cleaner syntax. To make a function that’s a generator, in fact, we only need to make two changes.

First, generators are defined as function*, not the usual function. The added star indicates a generator definition, and it can be used for function statements and expressions.

Second, generators don’t return like normal functions. Instead, they yield a value. The new yield keyword works just like its Python equivalent, immediately returning a value but “saving” the function’s position. The next time the generator’s next() method is called, it picks up right where it left off, immediately after the yield that ended it. You can have multiple yield statements, and they will be executed in order, one for each call to next():

function* threeYields() {
    yield "foo";
    yield "bar";
    yield "The End";
}

var gen = threeYields();
gen.next(); // returns "foo"
gen.next(); // returns "bar"
gen.next(); // returns "The End"
gen.next(); // undefined

You can also use a loop in your generators, giving us an easier way of writing our evens function above:

var evens = function(limit) {
    return {
        [Symbol.iterator]: function*() {
            var nextValue = 0;
            while (nextValue < limit) {
                nextValue += 2;
                yield nextValue;
            }
        }
    };
};
It’s still a little too deep, but it’s better than writing it all yourself.


Generators, iterators, and the for...of loop all have a common goal: to make it easier to work with sequences. With these new tools, we can treat a sequence as a stream of values, consuming them as we need them instead of loading the whole thing into memory at once. This lazy loading is common in FP languages like Haskell, and it’s found its way into others like Python, but it’s new to JavaScript, and it will take some getting used to. But it allows a new way of programming. We can even have infinite sequences, which would have been impossible before now.

Iterators encapsulate state, meaning that generators can replace the old pattern of defining state variables and returning an IIFE that closes over them. (For a game-specific example, think of random number generators. These can now actually be generators.) Coroutines and async programming are two other areas where generators come into play, and a lot of people are already working on this kind of stuff. Looking ahead, there’s a very early ES7 proposal to add comprehensions, and these would be able to use generators, too.
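
As a sketch of that idea, here’s a seeded random number generator written as an ES6 generator. The multiplier and increment are the classic 32-bit linear congruential constants; this is a toy for illustration (fine for a game world, useless for cryptography), not a recommendation of any particular PRNG.

```javascript
// A seeded PRNG as an infinite generator: the state variable lives inside
// the function, no IIFE-and-closure boilerplate required.
function* lcg(seed) {
    var state = seed >>> 0;
    while (true) {
        // 32-bit linear congruential step: state = (a * state + c) mod 2^32
        state = (Math.imul(1664525, state) + 1013904223) >>> 0;
        yield state / 4294967296;   // scale into [0, 1)
    }
}

// Same seed, same sequence -- handy for reproducible game worlds.
var rng = lcg(42);
console.log(rng.next().value);
console.log(rng.next().value);
```

Because the sequence is infinite, you only ever compute the values you actually pull out with next().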

Like most other ES6 features, these aren’t usable by everyone, at least not yet. Firefox and Chrome currently have most of the spec, while the others pretty much have nothing at all. For now, you’ll need to use something like Babel if you need to support all browsers, but it’s almost worth it.