Leap Day special

Do you know what today is? Yesterday was February 28, so some badly-written programs might think it’s the 1st of March, but it’s not. Yep, it’s every programmer’s worst nightmare: Leap Day. You’re not getting a “code” post on Monday because I can’t read a calendar. Today’s special, so this is a special post.

Dates are hard. There’s no getting around that. Every part of our calendar seems like it was made specifically to drive programmers insane. Most years have 365 days, but every fourth one has 366. Well, unless it’s a century year, in which case it’s back to 365. Except every 400 years, like in the year 2000—those are leap years again. Y2K wasn’t as bad as it could’ve been, true, but there were quite a few hiccups. (Thanks to JavaScript weirdness, those never really stopped. Long after millennial fever died down, I saw a website reporting the current year as “19108”!) But 2000 was a leap year, and that surprised older software almost as much as actually being in the year 2000.
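
The full rule is easy to get wrong if you’ve never had to write it. Here’s a minimal sketch in C++-like code; the order of the checks is the important part:

bool isLeapYear(int year) {
    // every 400th year is a leap year (like 2000)
    if (year % 400 == 0) return true;
    // other century years are not (1900, 2100)
    if (year % 100 == 0) return false;
    // otherwise, every fourth year is
    return year % 4 == 0;
}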

It gets worse. How long is a month? The answer: it depends. Weeks are always 7 days, thankfully, but you can’t divide 365 or 366 by 7 without a remainder. You’ll always have one or two days left over. And a day isn’t necessarily 24 hours, thanks to DST. Topping it all off, you can’t even assume that there are 60 seconds in a minute, because leap seconds. (That one is subject to change, supposedly. I’ll believe it when I see it.)
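
Month lengths, at least, can be tabulated. A quick sketch building on the leap-year check above, with months numbered 1–12:

int daysInMonth(int month, int year) {
    // January through December
    static const int days[12] = {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};
    if (month == 2 && isLeapYear(year)) return 29;  // February gets its extra day
    return days[month - 1];
}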

That’s just for the calendar we use today in the Western world. Add in everything else involving dates, and you have a recipe for disaster. The days in a month are numbered consecutively, right? Wrong! If you’re using, say, the Jewish calendar, you can’t even guarantee that two years have the same number of months. The Islamic calendar once depended on the sighting of the moon. The Maya had a calendar with 5 or 6 days that weren’t part of any month. Even our own Gregorian calendar doesn’t have a year zero. At some point, you’re left wondering how society itself has made it this far.

I’ll leave it to sites like The Daily WTF for illustrated examples of date handling gone wrong. God knows there are enough to choose from. (Dates in Java are especially horrendous, I can say from experience.) Honestly, I’d have to say that it’s more amazing to see date handling done right. By “right”, I mean handling all the corner cases: leap seconds, leap years, week numbers, time zones, DST adjustments, etc. ISO has a standard for dates (ISO 8601), but…good luck with that.

So don’t be surprised to see a few things break today. And if you’re not a programmer, don’t feel bad. Sometimes, it feels like we can’t figure this out any better than you. That’s never more true than on this day.

Let’s make a language – Part 13a: Numerals (Intro)

After learning how to speak, counting is one of the first things children tend to figure out, for obvious reasons. And language is set up to facilitate learning how to count, simply because it’s such an important part of our existence as human beings. The familiar “one, two, three” of English has its counterparts around the world, though each language has its own way of using them.

These numerals will be our focus today. (Note that we can’t really call them numbers in a linguistic context, because we’re already using the term “number” for the singular/plural distinction.) Specifically, we’ll look at how different languages count with their numerals; in math terms, these will be the cardinal numbers. In a later post, we can add in the ordinal numbers (like “first” and “third”), fractions, quantities, measurements, and all that other good stuff. For now, let’s talk about counting.

Oh, and since numerals lie at a kind of intersection of linguistics and mathematics, it’ll help if you’re familiar with a few concepts from math. While we won’t be going into things like positional number systems—I’ll save that for a post about writing systems, far into the future—the concept of powers will be important. More information shouldn’t be that hard to find on the Internet, so I’ll leave that in your capable hands.

Count the ways

How a language counts is highly dependent on its culture. Remember that counting and numeral words far predate the invention of writing. Now think about how you can count if you can’t write. One of the best ways is by using parts of your body. After all, it’s always with you, unlike a collection of stones or some other preliterate method. Thus, bodily terms often pop up in the context of numerals.

In fact, that’s one of the simplest methods of creating numerals: just start numbering parts of your body. A few languages from Pacific islands still use this today, and it’s entirely possible that the ancestors of today’s languages all did the same. Words for the fingers of one hand usually cover 1-4, with the thumb standing for 5. After that, it depends on the language. Six could be represented by the word for the palm or wrist, and larger numbers by points further up the arm. In this way, you can continue down the opposite arm, to its hand, and then on to the rest of the body.

Once you need to work with larger numbers, however, you’ll want a better way of creating them. The “pointing” method is inefficient—you need to remember each point on the body in order—and there are only so many body parts. This is fine for a hunter-gatherer society, and many of those have a very small selection of numerals (anywhere from one to five), using a word for “many” for anything higher. But we “advanced” peoples do need to refer to greater quantities. The solution, then, is to use a smaller set of numerals and construct larger ones from that. That’s how we do it in English: “twenty-five” is nothing more than “twenty” plus “five”.

For our language, the key number is 10. Every number up to this one has its own numeral, while larger ones are mostly derived. The only exceptions are words like “hundred” and “thousand”, which, incidentally enough, represent higher powers of 10. Thus, we can say that English uses base-10 counting—or decimal, if you prefer fancier words.

At the base

Every language with a system of numeral words is going to have a numerical base for that system. Which number is used as the base really has a lot to do with the history of the language and how its people traditionally counted. Not every number is appropriate as a base; Douglas Adams once said that nobody makes jokes in base-13, and I can state with confidence that nobody counts in it, either. Why? Because 13 is awkward. It’s a prime number with essentially no connection to any part of the body. Since counting probably originated with body parts, there’s no reason for a culture to ever develop base-13 counting. Other numbers, though, are quite suitable.

  • Decimal (base-10) counting is, far and away, the most common in the world. Look at your hands, and you’ll see why. (Unless, of course, you don’t have ten fingers.) Counting in decimal is just the finger counting most of us grew up with, and decimal systems tend to have new words for higher powers of 10. In English, we’ve got “hundred” and “thousand”, and these are pretty common in other decimal languages. For “ten thousand”, we don’t have a specific native word, but Japanese (man) and Ancient Greek (myrioi) do; the latter is where we get the word “myriad”.

  • Vigesimal (base-20) is not quite as widespread as decimal, but it has plenty of supporters. A few European languages use something like base-20 up to a certain point—one hundred, in fact—where they switch to full decimal. But a “true” vigesimal system, using powers of 20 instead of 10 (and thus having separate words for 400, 8,000, etc.), can be found in Nahuatl (Aztec) and Maya, as well as Dzongkha, in Bhutan. Like decimal, vigesimal most likely derives from counting, but here it would be the fingers and the toes.

  • Quinary (base-5) turns up here and there, particularly in the Pacific and Australia. Again, it comes from counting, but this time with only one hand. It’s far more common for 5 to be a “sub-base” in a greater decimal system; in other words, 10 can be “two fives”, but 20 is more likely to be “two tens”. The alternative, where the core terms are for 5, 25, 125, and so on, doesn’t seem to occur, but there’s no reason why it can’t.

  • Duodecimal (base-12) doesn’t appear to have an obvious body correlation, but it actually does. Using the thumb of one hand, count the finger bones on that hand. Each finger has three of them, and you’ve got four non-thumb fingers: 3 × 4 = 12. There are a few languages out there that use duodecimal numerals (including Tolkien’s Quenya), but base-12 is more common in arithmetic contexts, where its multiple factors sometimes make it easier to use than decimal. Even in English, though, we have the “dozen” (12) and “gross” (144).

  • Other numbers are almost never used as the “primary” base in a language, but a few can be found as “auxiliary” bases. Base-60 (sexagesimal), like our minutes and seconds, is entirely possible, but it will likely be accompanied by decimal or duodecimal sub-bases. Some languages of Papua New Guinea and thereabouts use a quaternary (base-4) system or, far more rarely, a senary or base-6 system. Octal (base-8) can work with finger counting if you use the spaces between your fingers, and a couple of cultures do this. And, of course, it’s easy to imagine an AI using octal, hexadecimal (base-16), or plain binary (base-2).

Word problems

In general, numerals up to the primary base are all going to be different, as in English “one” through “ten”. A few powers of the base will also have their own words, but this will be dependent on how often the speakers of a language need those higher numbers. “Hundred” and “thousand” suffice for many older cultures, but the Mayans could count up to the alau, 20^6 or 64 million, China has native terms up to 10^14 (a hundred trillion), and the Vedas have lots of terms for absurdly large numbers.

No matter where the scale ends, most of the numbers in between will be somehow derived. Again, the more often numbers are used, the more likely they’ll acquire specific terms, but special forms are common for multiples of the base up to its square (100 in decimal, 400 in vigesimal, and so on), like our “twenty” or “eighty”. Intermediate numbers will tend to be made from these building blocks: multiples and powers of the base. How they’re combined is up to the language, but the English phrasing, for once, is a pretty good guide.
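
To make the arithmetic concrete, here’s a sketch of that decomposition in C++-like code; the function is my own invention, but the idea is nothing more than repeated division by the base:

#include <vector>

// Break a number into multiples of powers of a base, least significant first.
// In base 10, 25 comes out as {5, 2}: five ones and two tens ("twenty-five").
std::vector<int> decompose(int n, int base) {
    std::vector<int> parts;
    while (n > 0) {
        parts.push_back(n % base);
        n /= base;
    }
    return parts;
}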

Some languages work with a secondary base, which can affect the way numeral words are built. Twelve and twenty can almost be considered sub-bases for English, with words like “dozen” and the peculiar method of constructing numbers in the teens. Twenty is a stronger force in other European languages, though. French is an example here, with 80 being quatre-vingts, literally “four twenties”. In contrast, a full vigesimal system can function just fine with the numeral for twelve derived as “ten and two”, using 10 as a sub-base, although I’m not aware of an example. Any factor can also work as a sub-base, especially in base-20, where 4 and 5 both work, or base-60, where you can use 6 and 10.

Irregularity is everywhere in natural languages, and that includes numerals. There always seem to be a few outliers that don’t fit the pattern. English has “eleven” and “twelve”, of course; it gets them from Germanic, as do many of its cousins. Spanish, among others, has veinte for 20, whereas other multiples of ten are constructed fairly regularly from their “ones” (treinta, etc.). Other examples abound.

Fitting in

How numeral words fit into a language is also a major variable. Sometimes, they’re a separate part of speech. Or they can be adjectives. Or nouns. Or some combination of all three. If they’re adjectives or nouns, then they may or may not participate in the usual grammar. Latin, for instance, requires small numerals (up to three) to be inflected, but everything higher is essentially fixed in form. English lets numerals act as adjectives or nouns, as needed, and some dialects allow nouns following adjectival numerals to ignore grammatical number (“two foot of rope”, “eight head of cattle”). It’s really a mess most everywhere.

For a conlang, it’s going to come down to the necessities. Auxlangs, as always, need to be simple, logical, and reasonable, so it’s best not to get too crazy, and this extends to all aspects of numerals. You’re not going to get many followers if you make them start counting by dozens! (Confession time. I did this for a non-auxlang over ten years ago, and I still forget it uses duodecimal sometimes! Imagine how that would be for a language intended to be spoken.)

Fictional languages get a little bit of a pass. Here, it’s okay to go wild, as long as you know what you’re doing. Non-decimal bases are everywhere in conlangs, even in “professional” ones like Tolkien’s. With non-humans, you get that much more rope to hang yourself with. Four-fingered aliens (or cartoon characters) would be more likely to reckon in an octal system than a decimal one. Depending on how their digits are made, you could also make a case for base-6 or base-9, by analogy with Earthly octal and duodecimal finger counting. Advanced races will be more likely to have a sophisticated system of higher powers, like our billion, trillion, etc. And so on.

More than any other part of this series, numerals are a part of a culture. If you’re making a conlang without a culture—as in an auxlang—then think of who the speakers will be, and copy them. Otherwise, you might need to consider some of the aspects of your fictional speakers. How would they count? How would they think of numbers? Then you can start making your own.

Godot Engine 2.0 released

Finally!

I’ve been saying for a while now that I think Godot is one of the best game engines around for indie developers. It’s open source, it’s free, you never have to worry about royalties—all it really needed was a bit more polish. Well, version 2.0 is out, and that brings some of that much-needed polish. Downloads and changelogs are at the link above, but I’ll pick a few of the improvements that stand out to me.

Scenes

Godot is, for lack of a better term, a scene-based game engine. Scenes are the core construct, and the engine has always been built around making them easy yet powerful. With 2.0, that’s now even more true.

Thanks to the new additions to scene instancing, Godot scenes got even better. Now, every scene in a Godot game is, to put it in Unity terms, a prefab. If you’ve used Unity, you know how helpful prefabs can be; Godot gives them to you for free. Basically, every instance of a scene can be edited in any way. All of its child nodes, including sub-scenes, are there for the changing.

It gets better, because now scenes can be inherited, too. The obvious use for this is a “base” object that is slightly altered to quickly create others. Enemies with subtle AI or animation changes, for example, or palette-swapped pickups. But I’m sure you can find plenty of other ways inheritance can help you. I mean, it wouldn’t be used so much in programming if it didn’t.

The editor

Without the editor, Godot would be nothing more than Yet Another Engine. But it does have the editor, and that’s one of its biggest draws. The new version gives the editor a major overhaul, adding tons of new features. It’ll take some time to work out how—and how much—they help, but it’s hard to imagine that they won’t.

The most important, in my view, are multiple scene editing and the new Script view. Working with Godot, one of the biggest pains was the constant need to switch between scenes. They’re the central component of your game, but you could only have one of them open at a time? No more, and that change alone will probably double your productivity.

Separating the script editor from the scene editor turns Godot into more of an IDE. That will make it seem more familiar to people coming from code-heavy engines, for one thing. But it also means that we can keep multiple scripts open across scene changes. Again, the time-consuming context switch when editing was one of my main gripes with Godot’s editor. Now it’s gone.

Live editing

This one deserves its own section. Live editing is, simply put, the ability to edit your game while it’s running. I’ll have to try it out to see how well it works, but if it does, this is pretty huge. Especially in the later stages of development, fine-tuning can take forever if you’re constantly going through the edit-compile-run cycle. If Godot can take even some of that pain away…wow.

Combine this with the improvements to the debugger, including a video RAM view and collision/navigation debugging, and it gets even better. Oh, and if you’re working on a newer Android game, you can even have live editing on the device.

The announcement at the Godot homepage has a video of live editing in action. I suggest watching it.

The rest

Godot version 2.0 is a massive update. Those features I mentioned are only the best parts, and there are a lot of minor additions and changes. Some of them are of…questionable benefit, in my opinion (I’m not sold on heatmaps in the list of open scripts, for instance, and why not use JSON for your scene’s text format, like everyone else?), but those are far outweighed by the undeniable improvements.

I’ve said it before, and I’ll say it again. If you’re an indie game dev, especially if you’re focusing on 2D games, you owe it to yourself to check out Godot. It really is one of the best around for that niche. And it’s not like it’ll cost you anything.

Out of the dark: building the Dark Ages

We have an awful lot of fiction out there set in something not entirely unlike our Middle Ages. Almost every cookie-cutter fantasy world is faux-medieval, and that’s just the ones that aren’t even trying to be. The Renaissance and early Industrial Era also get plenty of love, and Roman antiquity even comes up from time to time. But there’s one time period in our history that seems a bit…left out. I’m talking about those centuries after Rome fell to the barbarian hordes, but before William crossed the Channel to give England the same fate. I’m talking about the Dark Ages.

A brighter shade of dark

Now, as we know today, what previous generations called the Dark Ages weren’t really all that dark. Sure, there were Vikings and Vandals, barbarians and Britons, Goths and Gauls, but it wasn’t a complete disaster. The reason we speak of the “Dark Ages”, though, is contrast. Rome was a magnificent empire by any account, and the first to pin the “Dark Age” moniker on its fallen children were living in the equally “shining” Enlightenment. By comparison, the time between wasn’t exactly grand.

Even with our modern knowledge, the notion of a Dark Age is still useful, even if it doesn’t quite mean what we think it means. In general, we can use it to refer to any period of technological, social, and political stagnation and regression. That’s not to say there wasn’t progress in the Dark Ages. One great book about the period is titled Cathedral, Forge, and Waterwheel, and that’s a pretty good indication of some of the advancement that did happen.

Compared to what came before—the Roman empire, with its Colosseum and aqueducts and roads—there’s a huge difference, especially at the start of the Dark Ages. In some parts of Europe, particularly those farthest from the imperial center, general conditions fell to their lowest levels in hundreds of years. While the Empire itself actually did survive in the east in the form of the Byzantines (who were even considered the “true” emperors by the first generations of barbarian kings), the west was shattered, and it showed. But they dug themselves out of that hole, as we know.

Dying light

So, even granting our more limited definition of “Dark Ages”, what caused them? Well, there are a lot of theories. Rome fell in 476, of course, and that’s usually considered a primary cause. A serious cold snap starting around 536 couldn’t have helped matters. Plagues around the same time combined with the war and famine to cause even greater death, completing the quartet of the Horsemen.

But all that together shouldn’t have been enough to devastate the society of western Europe, should it? If it happened today, it wouldn’t, because our world is so connected, so small, relative to Roman times. If the whole host of apocalyptic horror visited the EU today, hundreds of millions of people would die, but we wouldn’t have a new Dark Age. The reason can be summed up in one word: continuity.

Yes, half of the Roman Empire survived. In a way, it was the stronger half, but it was also the more distant half. When Rome fell, when all the other catastrophes visited its remnants, the effect was to cause a cultural break. Many parts of the empire were already more or less autonomous, growing ever more apart, and the loss of the “center of gravity” that was Rome merely hastened the process.

A look at Britain illustrates this. After Rome all but gave up on its island colony, Britain soon returned the favor. Outside of the monasteries, Rome was practically forgotten within a few generations, once the Saxons and their other Germanic friends rolled in. The Danes that started vacationing there in the ninth century cared even less for news from four hundred years ago. By the time William came conquering, Anglo-Saxon England was a far cry from Roman Britannia. This is an extreme example, though, because there was almost no continuity in Britain to start with, so there wasn’t much to lose. However, similar stories appear throughout Europe.

Recurring nightmare

Although Europe’s Dark Ages are a thousand years past, they aren’t the only example of the discontinuity that defines a Dark Age. Something of the same sort happened in Greece around 1100 BC. The native peoples of the Americas can be said to have entered a Dark Age circa 1500, as the mighty empires of Mexico and Peru fell to Spanish invaders.

In every case, though, it’s more than just the fall of a civilization. A Dark Age needs a prolonged period of destruction, probably at least two generations long. To make an age go Dark requires severe population loss, a total breakdown of government, and the forcing of a kind of “siege mentality” on a society. Climatic shifts are just a bonus. In all, a Dark Age results from a perfect storm of causes, all of which combine to break the people. Eventually, due to the death, destruction, and constant need to be on guard, everything else falls by the wayside. There simply aren’t enough people to keep things going. Once those that are left start dying off, the noose closes. The circle is broken, and darkness settles in.

That naturally leads to another question: could we have a new Dark Age? It’s hard to imagine, in our present time of progress, something ever causing it to stop, but that doesn’t make it impossible. Indeed, almost the entire sub-genre of post-apocalyptic fiction hinges on this very event. It can happen, but—thankfully—it won’t be easy.

What would it take, then? Well, like the Dark Ages that have come before, it would be a combination of factors. Something causing death on a massive, unprecedented scale. Something to put humanity on the back foot, to disrupt the flow of society so completely that it would take more than a lifetime to recover. In that case, it would never recover, because there would be no one left who remembered the “old days”. There would be no more continuity.

I can think of a few ways that could work. The ever-popular asteroid or comet impact is an easy one, and it even has the knock-on effect of a severe climate shock. Nuclear war never really seemed likely in my lifetime, but I was born in 1983, so I missed the darker days of the Cold War. I did watch WarGames, though, and I remember seeing those world maps lighting up at the end. Two hundred years after that, and I don’t think we’d be looking at a Fallout game.

Other options all have their problems. An incredibly virulent outbreak (Plague, Inc. or your favorite zombie movie) might work, but it would have to be so bad that it makes the 1918 flu look like the common cold. Zika is in the news right now, but it simply won’t cut it, nor would Ebola. You need something highly infectious, but with a long incubation period and a massive mortality rate. It’s hard to find a virus that fits all three of those, for evolutionary reasons. The other forms of infectious agents—bacteria, fungi, prions—all have their own disadvantages.

Climate change is the watchword of the day, but it won’t cause a Dark Age by itself. It’s too slow, and even the most alarming predictions don’t take us to temperatures much higher than a few thousand years ago, and that’s assuming that nobody ever does anything about it. No matter what you believe about global warming, you can’t make it enough to break us without some help.

Terminator-style AI is another possibility, one looking increasingly likely these days. It has some potential for catastrophe, but I’m not sure about using it as the continuity-breaker. The same goes for nanotech bots and the like. Maybe they’ll enslave us, but they won’t beat us down so badly that we lose everything.

And then there’s aliens. (Insert History Channel guy here.) An alien-imposed destruction of civilization would be the logical extension of the barbarian hordes into the global future. Their attacks would likely be massive enough to influence the planet’s climate. They would cause us to huddle together for mutual defense, assuming they left any of us alive and alone. Yeah, that could work. It needs a lot of ifs, but it’s plausible enough to make for a good story.

The light returns

The Dark Age has to come to an end. It can’t last forever. But there’s no easy signal that it’s over. Instead, it’s a gradual thing. The key point here, though, is that what comes out of the Dark Age won’t be the same as what went in. Look again at Europe. After Rome fell, some of its advances—concrete is a good example—were lost to its descendants for a thousand years. Yet the continent did finally surpass the empire.

Over time, the natural course of progress will lift the Dark Age area to a level near enough to where it left off, and things can proceed from there. It will be a different place, and that’s because of the discontinuity that caused the darkness in the first place. The old ways become lost, yes, but once we discover the new ways, they’ll be even better.

We stand on the shoulders of giants, as Newton said. Those giants are our ancestors, whether physically or culturally. Sometimes they fall, and sometimes the fall is bad enough that it breaks them. Then we must stand on our own and become our own giants. The Dark Age is that time when we’re standing alone.

Naming languages: personal names

Everyone has a name. Most people have more than one. Every year, thousands of expectant mothers buy books listing baby names, their meanings, and their origins. Entire websites (my favorite is Behind the Name) are dedicated to the same thing. Unlike place names, people’s names truly are personal.

Authors of fantasy and other fiction have a few options in their quest for distinctive names. A lot of them take the easy route of using real-world names, and that’s fine. Equally valid is the Tolkien method of constructing an elaborate cultural and linguistic framework, and making names out of that. But we can also take a middle approach with a naming language.

Making a name for yourself

Given names (“first” names, for Westerners) are the oldest. For a long time, most people were known only by their given names. Surnames (“last” names) probably originated as a way to distinguish between people with the same given name.

How parents name their children depends very much on their culture and their language. Surnames can be passed down from father—or mother, in a matriarchal society—to child, or they can be derived from a parent’s name, as in Iceland. Given names can come from just about anywhere, and many of their origins are lost to time. But plenty of them are traceable, as the baby-book authors well know.

The last shall be first

Let’s start with surnames, for the same reason I focused on English place names last week: they’re easier to analyze. Quite a few surnames, in fact, are place names. On my mother’s side are the Hatfields—yes, them—whose ancestors, at some point in history, lived in a place called Hatfield. In general, that’s going to be the case with “toponymic” surnames. Somebody took (or was given) the name of his home town/village/kingdom as his own.

Occupations are another common way of getting a surname. My last name, Potter, surely means that someone in my family tree made pottery for a living. He then passed the name, but not the occupation, to his son, and thus a family name was born. The same is true for a hundred other common surnames, from Smith (any kind will do) to Cooper (a barrel maker) to Fuller (a wool worker) to Shoemaker (that one’s easy). A great many of these come from fields long obsolete, which gives you an idea of how old they are.

Some cultures create a surname from a parent’s given name. That’s closer to the norm in Iceland, but it occurs in other places, too. Even in English, we have names like Johnson, Danielson, and so on.

Other possibilities include simply using first names as last names, reusing historical or religious names (St. John), taking names of associated animals or plants, and almost anything else you can think of.

What’s your name?

For given names, occupations and places don’t crop up nearly as much. Instead, these names were originally intended to reflect things like qualities and deeds. When given to a child, they were a kind of hopeful association. You don’t name a boy “high lord” because he is one, but because you want him to be one.

Again, cultural factors play a huge role. Many English names come from old Anglo-Saxon ones, but just as many derive from the Bible, the most important book in England for about a millennium. Biblical influences changed the name game all over Europe, in fact. (Christianity didn’t wipe out the old names, though. Variants of Thor are still popular.)

Other parts of the world have their own naming conventions. In Japan, for instance, Ichiro is a name given to firstborn sons, and that’s essentially its meaning: “first”. And many of those Bible names, from Michael (mine!) and Mary to Hezekiah and Ezekiel, have connotations that don’t translate nicely into our terms. Some of them, thanks to Semitic morphology, encompass what would be whole sentences in English.

Foreign names are often imported, usually as people move around. In modern times, with the greater mobility of the average person, names are leaving their native regions and spreading everywhere. They move as their host cultures do; colonization brought European names to indigenous people—when it didn’t wipe those people out.

All for you

The culture is going to play a big role in what names you make. How do your people think? What is important to them? A very pious people will have a lot more names containing religious elements (e.g., Godwin, Christopher). A subjugated culture will import names from its oppressors, whether on its own or by decree.

Language plays a part, as well. Look at the difference between Chinese names (Guan, Lu, Chiang) and Japanese (Fujiwara, Shinzo, Nagano). There’s a lot of cultural overlap due to history, but the names are completely different.

Also, the phonology and syllable structure of a language will affect the names it creates. With a restricted set of potential syllables, it’s more natural to make names longer, so they’ll be more distinct. (Chinese, obviously, is an exception, but polysyllabic Chinese names are a lot more common in modern times.) Names can be short or long in any language, however. That part’s up to you.

As with place names, you’ll want a good stock of “building blocks”. These will include more adjectives than the place-name set, especially positive traits (“strong”, “high”, “beautiful”). The noun set will also represent those same qualities, especially the selection of animals: “wolf” and “bear” are common in Anglo-Saxon names, for example. Occupational terms (agent nouns) will come in handy for surnames, as will your collection of place names.

Finally, personal names will change over time. They’ll evolve with their languages. And they’ll adapt when they’re borrowed. That’s how we go from old Greek Petros to English Peter, French Pierre, Spanish Pedro, and Russian Pyotr.

To finish this post off, here are some Isian names. First, the surnames:

  • Modafo “of the hill” (modas “hill” + fo “from”)
  • Ostanas “hunter” (ostani “to hunt” + -nas)
  • Samajo “man of the west” (sam “man” + jo “west”)
  • Raysencat “red stone” (ray “red” + sencat “stone”)

Now, some given names:

  • Lukadomo “bright lord” (luka “bright” + domo “lord”)
  • Iche “beautiful girl” (reduced ichi “beautiful” + eshe “girl”)
  • Tonseca “sword arm” (ton “arm” + seca “sword”)
  • Otasida “bearer of the sun” (otasi “to hold” + sida “sun”)

In Isian, names follow the Western ordering, so one can imagine speakers named Tonseca Samajo or Iche Modafo. What names will you make?

Thoughts on Haxe

Haxe is one of those languages that I’ve followed for a long time. Not only that, but it’s the rare programming language that I actually like. There aren’t too many on that list: C++, Scala, Haxe, Python 2 (but not 3!), and…that’s just about it.

(As much as I write about JavaScript, I only tolerate it because of its popularity and general usefulness. I don’t like Java for a number of reasons—I’ll do a “languages I hate” post one of these days—but it’s the only language I’ve written professionally. I like the idea of C# and TypeScript, but they both have the problem of being Microsoft-controlled. And so on.)

About the language

Anyway, back to Haxe, because I genuinely feel that it’s a good programming language. First of all, it’s strongly-typed, and you know my opinion on that. But it’s also not so strict with typing that you can’t get things done. Haxe also has type inference, and that really, really helps you work with a strongly-typed language. Save time while keeping type safety? Why not?

In essence, the Haxe language itself looks like a very fancy JavaScript. It’s got all the bells and whistles you expect from a modern language: classes, generics, object literals, array comprehensions, iterators, and so on. You know, the usual. Just like everybody else.

But there are also a few interesting features that aren’t quite as common. Pattern matching, for instance, which is one of my favorite things from “functional” languages. Haxe also has the idea of “static extensions”, something like C#’s extension methods, which allow you to add extra functionality to classes. Really, most of the bullet points in the Haxe manual’s “Language Features” section are pretty nifty, and most of them are in some way connected to the type system. Of all the languages I’ve used, only Scala comes close to Haxe in teaching me the power and necessity of types.

The platform

But wait, there’s more. Haxe is cross-platform, in its own special way. Strictly speaking, there’s no native output. Instead, you have a choice of compilation targets, and some of these can then be turned into native binaries. Most of these let you “transpile” Haxe code to another language: JavaScript, PHP, C++, C#, Java, and Python. There’s also the Neko VM, made by Haxe’s creator but not really used much, and you can even have the Haxe compiler spit out ActionScript code or a Flash SWF. (Why you would want to is a question I can’t answer.)

The standard library provides most of what you need for app development, and haxelib is the Haxe-specific answer to NPM, CPAN, et al. A few of the available libraries are very good, like OpenFL (basically a reimplementation of the Flash API). Of course, depending on your target platform, you might also be able to use libraries from NPM, the JVM, or .NET directly. It’s not as easy as it could be—you need an extern interface class, a bit like TypeScript’s declaration files—but it’s there, and plenty of major libraries are already wrapped for you.

The verdict

Honestly, I do like Haxe. It has its warts, but it’s a solid language that takes an idea (types as the central focus) and runs with it. And it draws in features from languages like ML and Haskell that are inscrutable to us mere mortals, allowing people some of the power of those languages without the pain that comes in trying to write something usable in a functional style. Even if you only use it as a “better” JavaScript, though, it’s worth a look, especially if you’re a game developer. The Haxe world is chock full of code-based 2D game engines and libraries: HaxePunk, HaxeFlixel, and Kha are just a few.

I won’t say that Haxe is the language to use. There’s no such thing. But it’s far better than a lot of the alternatives for cross-platform development. I like it, and that’s saying a lot.

Magic and tech: information technology

In our modern era, we are well and truly blessed when it comes to information. We have the Internet, of course, with its wealth of knowledge. In only a few seconds, any of us can call up even the most obscure facts. Sure, it’s far from perfect, but it’s more than people from just a hundred years ago could dream of. To someone from the Renaissance or earlier, it really would be magic.

Information

Since the written record is often all we have of older cultures, it’s fairly easy to trace the development of information technology. The Internet is only a few decades old, as we know. Telephones, television, and telegraphs (notice a theme there?) preceded that. Radio transmission goes back only a hundred years or so; before its invention, your choices for communication were mostly limited to the written word.

Writing dates back millennia. It’s the oldest and most stable method of storing information that we have. From clay tablets and inscriptions, we can follow its trail through the ages. Papyrus and parchment have been replaced by paper, which is now giving way to LEDs and flash memory, but the idea remains the same. Although the form modern writing takes would astound anyone from earlier times, its function would be familiar in an instant.

In those older days, what options did you have for information and communication? If you were literate—not everyone was—you could write, obviously, but that only got you so far. The Chinese invented a printing press about a thousand years ago, but they didn’t really find it useful; if you look at the Chinese script, you’ll probably see why. The Western, alphabetic, world loved it when they got it four centuries later. Copying by hand was your only option for most things before that. (Seals and stamps had limited use, and block printing didn’t show up in Europe until a couple of generations before Gutenberg.)

The form of a written text also changed through history. That’s mostly a matter of materials and convenience. Scrolls work better for some materials, but the codex (books like ours) is more compact, and it’s a more natural fit for paper. And letters can be written on anything handy, even bits of other works!

Add the magic

So, in the era we’re covering, the printing press hasn’t been invented. Woodblocks are a new innovation just now trickling in. Most work is done on parchment, some on paper, and it’s done almost exclusively by hand. Scribing and copying are important professions, and their services are always in high demand. And, thanks to the relative lack of supply, the written word is expensive. Can our magical society improve on this state of affairs? If so, how?

A general copying spell (like D&D’s Amanuensis) is too much to ask for, but that hasn’t stopped some mages from trying. But our magic kingdom does have a few information innovations that have become commonplace. One isn’t connected to writing at all, but to speaking: a spell that increases the volume and clarity of a speaker’s voice. In other words, it’s a PA system. In real life, before the invention of electrical amplification, you had to use natural means, mostly in the form of architecture; amphitheaters aren’t built that way just for looks. In this magical land, though, a good acoustic setting is no longer so vital. Anyone can make his voice heard, anywhere, no matter how large the crowd.

Long-distance communication also isn’t as big a problem. Historically, conversing with someone in another city was hard, involving a back-and-forth series of letters. With the upgraded travel abilities of this society, mail delivery gets a boost, too, but that’s not the only option. Through use of a hand-sized glass ball (essentially the same as a crystal ball or Tolkien’s palantír), direct communication can be achieved. It’s highly limited, however. For one, there’s the expense of creating and imbuing the spheres. Then, it’s only a one-to-one system, as speech is transmitted in something like telepathy. No conference calls or broadcasts, unfortunately.

But even this is a huge step up from couriers. Every town of more than a few hundred people has at least one dedicated connection, usually staffed by junior or washed-up mages. For a small fee, short messages can be sent over the spheres to loved ones, acquaintances, or tradesmen in nearby cities. Longer distances can be covered by a relay system, and the biggest cities are set up as centralized “hubs”, with dozens of connections to their neighbors and the most important places.

The overall effect is a society where people are more likely to be aware of what’s outside their locale. Like the telegraph systems of the 1800s (which directly influenced this idea), communication in this world has become more “real-time”. Unlike telegraphs, the magic spheres are wireless, so they can also be taken aboard ships and to foreign lands. No more waiting two years to hear from sailors at sea, not when they can give you daily updates. True, they may only be a few words in length, but Twitter only gives you 140 characters, and people love it.

More magic

So that’s communication improved by magic. What about the storage of information? We can’t do too much better than printed books without some serious technological improvement, and I’ve already said that these guys don’t even have printing. Can we do better than hand-copied manuscripts?

By using the same endurance spells as before, scribes can work longer and faster, increasing their output. Memory-aiding spells, which have near-infinite uses, can give a true photographic memory, meaning fewer books are necessary; high wizards are their own libraries. (That also cuts down on spell thievery and protects the secrets of the arcane from outsiders.)

A path recently explored involves an enchanted plate of glass. That’s already a hard sell, due to the higher cost of plate glass—magic helps this somewhat, as we’ll see later on—and the further expense of the enchantment. But this particular spell “freezes” an image in the glass for a time. The mage holds the pane between himself and the scene he wishes to capture, and he invokes the spell. Almost instantly, the image is frozen. It’s not permanent (it lasts a few years at most), but it does record in clear color. The downside is that one piece of this glass can only “hold” a single picture. The first use of this particular advance in magic has been in art, strangely enough, capturing images that painters can then use as models.

The wizards do have a few other minor aids to information technology. Invisible ink is known in our world, but they have a variant that really is invisible to anyone other than another mage. Short-distance voice transmission spells are easy enough that they’re mostly used by young adepts for pranks. Writing materials are not limited to parchment and paper; “burning” pens allow one to write on wood, metal, or just about anything else. But the more traditional materials are also easier to make, thanks to spells that speed the fabrication processes. And when printing does come, magical propulsion will quickly make it as fast as Industrial-era presses.

What do you know?

In the end, the magical society doesn’t have much that can top handwriting…yet. That doesn’t mean they’re stuck with medieval-era information tech, though. The magic-based telegraph and photograph are some 500 years ahead of their natural counterparts, and they both help to create a populace more aware of its surroundings, of its setting. On top of that, scribes can work harder and faster (and with better eyesight!) than their Earthly kin, meaning that they make more books. More books means more opportunity to read, which encourages a higher literacy rate. The final result: a well-read, well-informed people.

It’s far from modern, granted. It’s not even that close to Victorian, except for our magical answer to the telegraph. But the larger amount of information available is going to have a ripple effect, as we’ll see in coming posts. Everything from espionage to economics changes when people know what’s going on.

Naming languages: place names

Once you have the bare skeleton of a conlang necessary for making names, you’ll probably want to start making them. In my view, most names can be divided into two broad categories: place names and personal names. Sure, these aren’t the only ones out there, but they’re the two most important kinds. Historically, however, they follow different rules, so we’ll treat them separately. Place names are, in my opinion, easier to study, so they’ll come first.

Building blocks

The absolute best part of the world for the study of place names has to be England. Most conlangers speak English, most conlanging materials are in English, and most places in England are named in English. Even better, many English places have names that are wonderfully transparent in their formation, and that gives us a leg up on our own efforts. Thus, I’ll be using examples from England in this post. (A lot of American names tend to copy English ones in style and form, but there are also plenty that come from other languages, and not all of them Indo-European. That makes things much harder, so we’ll stick to English simplicity.)

The first thing to realize when looking at place names, or toponyms, is that they reflect a place’s history. As I’m writing this, I have Google Maps opened up to show southern England, and I can already find a few easy examples: Oxford, Newport, Ashford, Cambridge, and Bournemouth. For most of these, it should be obvious how they got their names (“ford of the oxen”, “the new port”, “ford near ash trees”), while others need a little bit of puzzling out (“bridge at the Cam river” and “mouth of the bourne”—a bourne was a small stream or brook).

These few examples show the basic method of making place names. First, you need a number of words in a few classes. Geographical features (“river”, “sea”, “forest”, etc.) are one of the main ones. Another covers human constructs (“town”, “hamlet”, “village”, “fort”, “mill”, “bridge”, and a thousand others). Animal names can come into play, too, as in “Oxford”. Also, a few descriptive adjectives, such as color terms, are immensely helpful, and you can even throw in some prepositions, too.

Just putting these together in the English style—but using the words and rules of your naming language—nets you a large number of place names. For example, here are some place names in Isian, an ongoing conlang of my Let’s make a language series:

  • Raymodas, “red hill” (ray “red” + modas “hill”)
  • Ekheblon, “new city” (ekho “new” + eblon “city”)
  • Jadalod, “on the sea” (jadal “sea” + od “on”)
  • Lishos, “sweet water” (lishe “sweet” + shos “water”)
  • Omislakho, “king’s island” (omis “island” + lakh “king” + o “of”)

Notice that a few of these have had their constituent parts modified slightly. This can be for reasons of euphony (e.g., vowels merging) or evolution. Also, places with names meaning the exact same thing can be found in the real world. The historical city of Carthage derives its name from the Phoenician for “new city”, and there’s a Sweetwater not too far from where I live.
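
If you’d like to generate candidates in bulk and prune by hand, a couple of nested loops over your building blocks will do it. Here’s a toy sketch using the Isian roots above; the seam-merging rule is an invented stand-in, since real euphony adjustments (like Ekheblon from ekho + eblon) are better made by hand:

#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Crude euphony: collapse a doubled letter where two roots meet.
std::string combine(const std::string& a, const std::string& b) {
    if (!a.empty() && !b.empty() && a.back() == b.front()) {
        return a + b.substr(1);
    }
    return a + b;
}

int main() {
    std::vector<std::pair<std::string, std::string>> adjectives = {
        {"ray", "red"}, {"ekho", "new"}, {"lishe", "sweet"}};
    std::vector<std::pair<std::string, std::string>> features = {
        {"modas", "hill"}, {"eblon", "city"}, {"shos", "water"}};

    for (const auto& adj : adjectives) {
        for (const auto& noun : features) {
            std::cout << combine(adj.first, noun.first) << " \""
                      << adj.second << ' ' << noun.second << "\"\n";
        }
    }
}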

Changing the names

While most place names are derived in the above fashion, some of them don’t seem to be. But if you look closer, you can find their roots. Those roots often paint a picture of the life of a place, and they can even be a tool in the archaeologist’s toolbox. The way some English place names changed, for instance, illustrates the pattern of invasions across that country. Viking invasions gave York its name, as they did with a number of towns ending in -by. Celtic influences can be found if you look hard enough; “Thames” most likely comes from that family. And don’t forget the Romans.

Of course, names are words or combinations of words, and they are just as susceptible to linguistic evolution. That’s how we get to Lyon from Lugdunum and Marseilles from Massalia, but it works on smaller scales, too. One of the most common changes that affects names is a reduction in unstressed syllables, as in the popular element -ton, derived from town. (The English, admittedly, take this a little too far. If you didn’t know how Worcester and Leicester were pronounced, could you ever guess?)

Names can also be borrowed from languages, just like any other word. This happened extensively in North America, where native names were picked up (and mangled) by European settlers. This is especially noticeable to me, given where I live. Sale Creek, my current home, is purely English and obvious. But I moved here from nearby Soddy, and no one can seem to agree on an etymology for that name. The nearest “big city” of Chattanooga derives from the Muskogean language, while the state’s name, Tennessee, comes from a Cherokee name that they borrowed from earlier inhabitants.

What this means is that some of your names don’t have to be analyzable. If you find a sequence of sounds you like, but you can’t find a way to fit it into your naming language, no problem. Say it’s a foreign or ancient name, and nobody will complain. That’s basically how our world works: some names can be broken down, others are black boxes. This can even give you a bit of a hook for worldbuilding. Why is there an oddball name there? Is it a regional thing, maybe from some barbarian invasion a thousand years ago? Or was it named after a forgotten emperor?

Onward

Next week, we’ll close out this miniseries of posts by looking at the names of people. These are intimately related to the names of places, but they deserve their own time in the spotlight. Until then, draw a map and put some names on it!

Cooldowns

A lot of games these days have embraced a real-time style of fighting involving powers or other special abilities that, once activated, can’t be used again for a specific amount of time. They have to “cool down”, so to speak, leading the waiting period to be called a cooldown. FPS, RTS, MOBA…this particular style of play transcends genres. It’s not only for battles, either. Some mobile games have taken it to the extreme, putting even basic gameplay on a cooldown timer. Of course, if you don’t mind dropping a little cash, they’ll gladly let you cut out the waiting.

A bit of history

The whole idea of cooldowns in gaming probably goes back to role-playing games. In RPGs, combat typically works by rounds. Newer editions of D&D, for example, use rounds of 6 seconds. A few longer actions can be done, resulting in your “turn” being skipped in the following round, but the general ratio is one action to one round. This creates a turn-based style of play that usually isn’t time-sensitive. (It can be, though. Games such as Baldur’s Gate turn this system into one supporting real-time action.)

A more fast-paced, interactive style comes from the “Active Time” battles in some of the Final Fantasy games. This might be considered the beginning of cooldowns, at least in popular gaming. Here, a character’s turn comes around after a set period of time, which can change based on items, spells, or a speed stat. Slower characters take longer to fill up their “charge time”, and Haste spells make it fill faster.

Over the past couple of decades, developers have refined and evolved this system into the one we have today. Along the way, some of them have largely discarded the suspension of disbelief and story reasoning for cooldowns. Especially in competitive gaming, they’re nothing more than another mechanic like DPS or area of effect. But they are pretty much everywhere these days, in whatever guise, because they serve a useful purpose: forcing resource management based on time.

Using cooldowns

At the most basic level, that’s what cooldowns are all about. They’re there for game balance. Requiring you to wait between uses of your “ultimate” ability means you have to learn to use the smaller ones. Limiting healing powers to one use every X seconds gives players a reason to back off from a bigger foe; it also frees you from the need to place (and plan for) disposable items like potions. Conversely, if you use cooldowns extensively in your game, you have to make sure that the scenarios where they come into play are written for them.

On the programming side, cooldown timers are fairly easy to implement. Most game engines have some sort of timer functionality, and that’s a good base to build from. When an ability is used, set the timer to the cooldown period and start it. When it signals that it’s finished, that means that the ability is ready to go again.
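
If your engine provides one-shot timers, the code can be nearly that direct. A rough sketch; the Timer type here is a made-up stand-in for whatever your engine actually gives you:

#include <functional>

// Hypothetical one-shot timer, standing in for an engine's built-in version.
struct Timer {
    int remaining = 0;
    std::function<void()> onFinished;

    void start(int duration, std::function<void()> callback) {
        remaining = duration;
        onFinished = std::move(callback);
    }

    // call once per frame with the elapsed time
    void tick(int timeDelta) {
        if (remaining <= 0) return;
        remaining -= timeDelta;
        if (remaining <= 0 && onFinished) onFinished();
    }
};

// usage: timer.start(ability.cooldown, [&]{ ability.ready = true; });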

But to better illustrate how they work—and because not every game engine likes having dozens or hundreds of timers running at once—here’s a different approach. We’ll start with a kind of “cooldown object”:

class CooldownAbility {
    // ...

    // use the ability and begin the cooldown period
    void activateAbility();

    // advance the cooldown clock by the elapsed time
    void updateTimer(int timeDelta);

    int defaultCooldown;  // the ability's normal cooldown period
    int cooldown;         // the current period, after any modifiers
    int coolingTime;      // time elapsed since the last activation
    bool isCoolingDown;   // true while the ability is unusable
};

(This is C++-like pseudocode made to illustrate the point. I wouldn’t write a real game like this.)

activateAbility should be self-explanatory. It would probably have a structure like this:

void activateAbility() {
    // do flashy stuff
    // ...

    // start the cooldown period
    coolingTime = 0;
    isCoolingDown = true;
}

The updateTimer method here does just that. Each time it’s called, it adds the timeDelta value (this should be the time since the last update) to the coolingTime, and checks to see whether it has reached the cooldown limit:

void updateTimer(int timeDelta) {
    coolingTime += timeDelta;

    // still cooling down until the elapsed time reaches the full period
    isCoolingDown = (coolingTime < cooldown);
}

Most games have a nice timer built right in: the game loop. And there’s likely already code in there for keeping track of the time since the last run of the loop. It’s simple enough to hook that into a kind of “cooldown manager”, which runs through all of the “unusable” abilities and updates the time since last use. That might look something like this:

for (auto&& cd : allCooldowns) {
    if (!cd.isCoolingDown) {
        continue;  // already ready; nothing to update
    }

    cd.updateTimer(timeThisFrame);

    if (!cd.isCoolingDown) {
        // tell the game that the ability is ready
    }
}

(Also, the reason I gave this object both a cooldown and a defaultCooldown is so that, if we wanted, we could implement power-ups that reduce cooldown or penalties that increase it.)

Implementing this same thing in an entity-component engine can work almost the same way. Abilities could be entities with cooldown components, and you could add in a system that does the updating, cooldown reduction/increase, etc.
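
For instance, here’s a minimal sketch of such a system, assuming a plain array of components rather than any particular engine’s API:

#include <vector>

struct CooldownComponent {
    int cooldown;      // full cooldown period
    int coolingTime;   // time elapsed since activation
    bool isCoolingDown;
};

// run once per frame over every entity's cooldown component
void cooldownSystem(std::vector<CooldownComponent>& components, int timeDelta) {
    for (auto& cd : components) {
        if (!cd.isCoolingDown) continue;
        cd.coolingTime += timeDelta;
        if (cd.coolingTime >= cd.cooldown) {
            cd.isCoolingDown = false;
            // signal the owning entity that its ability is ready
        }
    }
}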

For a certain style of game, timed resource use makes sense. It makes gameplay better. It opens up new tactics, new strategies, especially in multiplayer gaming. And while it takes a lot of design effort to keep a cooldown-based game balanced and fun, the code isn’t that hard at all. That’s especially good news for indie devs, because they get more time to spend on the balancing part.

Race in writing

Race is a hot topic in our generation. Racism, equality, diversity, affirmative action…you can’t get away from it. Even the very month we’re in has long been declared Black History Month. Scientifically, we are told that race doesn’t really matter. Socially, we’re told it shouldn’t matter. And yet human nature, our predisposition towards clannish, us-against-them xenophobia, keeps race constantly in the news. Whether it’s a white cop shooting a black teenager or the Academy Awards being called out as “too white”, racial tension is a fact of life as much in 2016 as in 1966.

But that’s the real world. In fiction, race has historically been somewhat neglected. In most cases, there’s a very good reason for that: it’s not important to the story. Many genres of fiction achieved the Holy Grail of colorblindness years ago, when such a thing was all but inconceivable to the rest of the world. Indeed, for a great many works, it doesn’t matter what color a character’s skin is. If you’re pointing it out, then, like Chekhov’s gun, it’s probably important. A story where racial tension plays a direct role in character development is going to be very dependent on character race. A lot of others simply won’t.

That’s not to say that it should be entirely ignored. After all, real-world humans have race, and they identify more with people of their own race. And, of course, a mass-media work needs to be very careful these days. One need only look at Exodus: Gods and Kings and the accusations of “whitewashing” it received. Also, when moving stories from the page to the screen, a lack of racial characterization in the book can lead to some…interesting choices by the studio. (I’ll gladly admit that I was surprised to see who was cast as Shadow in American Gods.)

Does it matter?

When you’re planning out a story—if you’re the type to plan that far ahead—you should probably already have an idea what role race will play in the grand scheme of things. Something set in the American South in the 60s (1960s or 1860s, either one works) will require more attention to detail. Feudal Japan, not so much.

Futuristic science fiction deserves special mention here. It’s common for this type of story, when it involves a team of characters, to have a certain ratio of men to women, of white to non-white, as if the author had a checklist of political correctness. But why? Surely, for an important mission like first contact or the first manned Mars mission, the job would go to the most qualified, whoever they were. That assumes rational decision-making, though, and that’s something in short supply today. There’s not much reason to assume that will get any better in the coming decades.

For other genres and settings, race should play second fiddle to story concerns. Yes, it can make for an interesting subplot if handled well, but it’s too easy to make a minor detail too important. Ask yourself, “If I changed this character’s race, what effect would that have on the rest of the story?” If you can’t think of anything, then it might not be quite as pertinent as you first thought.

When it does matter

Very often, though, the answer to the bigger question (does race matter here?) will be a resounding “yes”. And that’s where you need to delve into the bottomless pit of psychology and sociology and the other social sciences. Lucky you.

If you’re fortunate enough to be working with a specific period and location in history, then most of the work is already done for you. Just look at what race relations were like in that time and place. You’ve always got a little bit of leeway, too. People are not all alike. You can be a pre-Civil War southerner against slavery, or a 1940s German sympathetic to the Jews.

Writing for the future is a lot tougher. A common assumption, especially for stories set more than a century or so ahead of our time, is the end of racism. In the future, they argue, nobody will care what color your skin is. The Expanse series handles this in a great way, in my opinion. The whole solar system is full of a mishmash of Earth cultures, but nobody says a word about it. It’s not white against black, it’s Earth against Mars.

You can also go the other way and say that race will become more of a factor. The current political climate actually points this way on topics like immigration. But other factors can lead to a “re-segregation”. Nationalist tendencies, waves of refugees, backlashes against “cultural appropriation”, and simple closed-mindedness could all do the trick. Even social media can play a role. While it’s true that there aren’t many paths back to the old days of separate water fountains, we’re not too far from strictly separated racial ghettoes already.

The worldbuilding process should be your guide here. What made the world—more specifically, the story’s setting—the way it is?

When it’s different

All of the above, of course, presumes you’re dealing with human races. Alien races are completely different, and I hope to one day write a series on them. For now, just know that the differences between humans and aliens utterly dwarf any difference between human races. Aliens might not perceive a distinction between white and black; conversely, an alien appearance can hide a number of racial distinctions. For fantasy, substitute “elves” or whatever for “aliens”, because the principle is exactly the same.

In fact, this whole post I’ve been using “race” as a broad term that encompasses more than just traditional notions of skin pigmentation. In the context of this post, any social subgroup that is largely self-contained can be considered a race, as can a larger element that shows the behavior of a race. Jews and Muslims can be treated as races, as can fantasy-world dark elves. As long as the potential for discrimination based on a group’s appearance exists, then the race card is on the board.

As always, think about what you’re creating. Where does race fit into the story? Try to make it a natural fit; don’t shove it in there. And this is one of those cases where a lot of popular fiction can’t really help you. White authors tend to write white characters by default, because it’s easiest to write what you know. (A counterexample is Steven Erikson’s Malazan Book of the Fallen series, where half the main characters are black, and you’d never know it except from the occasional hint dropped in narration.)

It’s also all too easy to go to the other extreme, to fill a story with a racial rainbow and put that particular difference front and center when it doesn’t help the story. Honestly, that’s just as bad as saying, “Everybody’s white, deal with it.” If it doesn’t matter, don’t even bring it up. If it does matter, make it matter. Make me care about the struggle of the minority when I’m reading that kind of story, but don’t put it in my face when I’m trying to enjoy a sword-and-sorcery romp where everybody is against the Dark Lord.

In the end, the best advice I can give is twofold. First, learn about your setting. How does it affect racial relations? Second, think about your characters. Put yourself in their shoes. How do they see members of other races, or their own? How are they affected by the society they live in? It’s hard, but writing always is, and this is a case where the payoff is a lot harder to see. But keep at it, because it really is worth it.