On learning to code

Coding is becoming a big thing right now, particularly as an educational tool. Some schools are promoting programming and computer science classes, even a full curriculum that lasts through the entirety of education. And then there are the commercial and political movements such as Code.org and the Hour of Code. It seems that everyone wants children to learn something about computers, beyond just how to use them.

On the other side of the debate are the detractors of the “learn to code” push, who argue that it’s a boondoggle at best. Not everybody can learn how to code, they argue, nor should they. We’re past the point where anyone who wants to use a computer must learn to program it, too.

Both camps have a point, and I can see merit on both sides of the debate. I was one of the lucky few who had the chance to learn about programming early in school, so I can speak from experience in a way that most others cannot. So here are my thoughts on the matter.

The beauty of the machine

Programming, in my opinion, is an exercise that brings together a number of disparate elements. You need math, obviously, because computer science—the basis for programming—is all math. You also need logic and reason, talents that are in increasingly short supply among our youth. But computer programming is more than these. It’s math, it’s reasoning, it’s problem solving. But it’s also art. Some problems have more than one solution, and some of those are more elegant than others.

At first glance, it seems unreasonable to try to teach coding to children before its prerequisites. True, there are kid-friendly programming environments, like MIT’s Scratch. But these can only take you so far. I started learning BASIC in 3rd grade, at the age of 8, but that was little more than copying snippets of code out of a book and running them, maybe changing a few variables here and there for different effects. And I won’t pretend that my experience was anywhere near the norm, or that I was a typical student. (Incidentally, I was the only one who complained when the teacher—this was a gifted class, so we had the same teacher each year—took programming out of the curriculum.)

My point is, kids need a firm grasp of at least some math before they can hope to understand the intricacies of code. Arithmetic and some concept of algebra are the bare minimum. General computer skills (typing, “computer literacy”, that sort of thing) are also a must. And I’d want some sort of introduction to critical thinking, too, but that should be a mandatory part of schooling, anyway.

I don’t think that very young students (kindergarten through 2nd grade) should be fooling around with anything more than a simple interface to code like Scratch. (Unless they show promise or actively seek the challenge, that is. I’m firmly in favor of more educational freedom.) Actually writing code requires, well, writing. And any sort of abstraction—assembly on a fictitious processor or something like that—probably should wait until middle school.

Nor do I think that coding should be a fixed part of the curriculum. Again, I must agree somewhat with the learn-to-code detractors. Not everyone is going to take to programming, and we shouldn’t force them to. It certainly doesn’t need to be a required course for advancement. The prerequisites of math, critical thinking, writing, etc., however, do need to be taught to—and understood by—every student. Learning to code isn’t the ultimate goal, in my mind. It’s a nice destination, but we need to focus on the journey. We should be striving to make kids smarter, more well-rounded, more rational.

Broad strokes

So, if I had my way, what would I do? That’s hard to say. These posts don’t exactly have a lot of thought put into them. But I’ll give it a shot. This will just be a few ideas, nothing like an integrated, coherent plan. Also, for those outside the US, this is geared towards the American educational system. I’ll leave it to you to convert it to something more familiar.

  • Early years (K-2): The first years of school don’t need coding, per se. Here, we should be teaching the fundamentals of math, writing, science, computer use, typing, and so on. Add in a bit of an introduction to electronics (nothing too detailed, but enough to plant the seed of interest). Near the end, we can introduce the idea of programming, the notion that computers and other digital devices are not black boxes, but machines that we can control.

  • Late elementary (3-5): Starting in 3rd grade (about age 8-9), we can begin actual coding, probably starting with Scratch or something similar. But don’t neglect the other subjects. Use simple games as the main programming projects—kids like games—but also teach how programs can solve problems. And don’t punish students that figure out how to get the computer to do their math homework.

  • Middle school (6-8): Here, as students begin to learn algebra and geometry (in my imaginary educational system, this starts earlier, too), programming can move from the graphical, point-and-click environments to something involving actual code. Python, JavaScript, and C# are some of the better bets, in my opinion. Games should still be an important hook, but more real-world applications can creep in. You can even throw in an introduction to robotics. This is the point where we can introduce programming as a discipline. Computer science then naturally follows, but at a slower pace. Also, design needs to be incorporated sometime around here.

  • High school (9-12): High school should be the culmination of the coding curriculum. The graphical environments are gone, but the games remain. With the higher math taught in these grades, 3D can become an important part of the subject. Computer science also needs to be a major focus, with programming paradigms (object-oriented, functional, and so on) and patterns (Visitor, Factory, etc.) coming into their own. Also, we can begin to teach students more about hardware, robotics, program design, and other aspects beyond just code.
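To give a concrete sense of the kind of “actual code” a middle-schooler might write once past the graphical environments, here’s a sketch of the classic number-guessing game in Python (the range, the prompts, and the function names are all my own arbitrary choices, not part of any particular curriculum):

```python
import random

def check_guess(guess, secret):
    """Return a hint string comparing the guess to the secret number."""
    if guess < secret:
        return "too low"
    if guess > secret:
        return "too high"
    return "correct"

def play():
    """Run one round of the guessing game on the console."""
    secret = random.randint(1, 100)
    while True:
        guess = int(input("Guess a number between 1 and 100: "))
        hint = check_guess(guess, secret)
        print(hint)
        if hint == "correct":
            break

if __name__ == "__main__":
    play()
```

It’s a game, so it holds a kid’s attention, but it quietly teaches variables, loops, conditionals, and functions, which is exactly the hook-first approach described above.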

We can’t do it alone

Besides educators, the private sector needs to do its part if ubiquitous programming knowledge is going to be the future. There’s simply no point in teaching everyone how to code if they’ll never be able to use such a skill. Open source code, open hardware, free or low-cost tools, all these are vital to this effort. But the computing world is moving away from all of them. Developing for Apple’s iOS costs hundreds of dollars just to get started. Android is cheaper, but the wide variety of devices means either expensive testing or compromises. Even desktop platforms are moving towards the walled garden.

This platform lockdown is incompatible with the idea of coding as a school subject. After all, what’s the point? Why would I want to learn to code, if the only way I could use that knowledge is by getting a job for a corporation that can afford it? Every other part of education has some reflection in the real world. If we want programming to join that group, then we must make sure it has a place.

Dragons in fantasy

If there is one thing, one creature, one being that we can point to as the symbol of the fantasy genre, it has to be the dragon. They’re everywhere in fantasy literature. The Hobbit, of course, is an old fantasy story that has come back into vogue in the last few years. More recent books involve dragons as major characters (Steven Erikson’s Malazan series) or as plot points (Daniel Abraham’s appropriately-titled The Dragon’s Path). Movies go through cycles, and dragons are sometimes the “in” subject (the movies based on The Hobbit, but also less recent films like Reign of Fire). Television likes dragons, too, when it has the budget to do them (Game of Thrones, of course). And we can also find these magnificent creatures represented in video games (Drakengard, Skyrim), tabletop RPGs (Dungeons & Dragons—it’s even in the name!), and music (DragonForce).

So what makes dragons so…interesting? It’s not a recent phenomenon; dragon legends go back centuries. They feature in Arthurian legend, Chinese mythology, and Greek epics. They’re everywhere, all throughout history. Something about them fires the imagination, so what is it?

The birth of the dragon

Every ancient culture, it seems, has a mythology involving giant beasts of a kind unknown to modern science. We think of the Greek myth of the Hydra, of course, but it’s only one of many. Even in the Bible, monsters are found: the leviathan and behemoth of the book of Job, for example. But something like a dragon seems to be found in almost every mythos.

How did this happen? For things like this, there are usually a few possible explanations. One, it could be a borrowing, something that arose in one culture, then spread to its neighbors. That seems plausible, except that New World peoples also have dragon-like supernatural beings, and they had them before Columbus. Another possibility is that the first idea of the dragon was invented in the deep past, before humanity spread to every corner of the globe. But that’s a bit far-fetched. You’d then have to explain how something like that stuck around for 30,000 or so years with so little change, using only art and oral transmission for most of that time.

The third option is, in my opinion, the most reasonable: the idea of dragons arose in a few different places independently, in something like convergent evolution. Each “region” would have its own dragon mythology, where the concept of “dragon” is about the same, while different regions might have wildly different ideas of what they should be.

I would also say that the same should be true for other fantastical creatures—giants, for instance—that pop up around the world. And, in my mind, there’s a perfectly good reason why these same tropes appear everywhere: fossils. We know that there used to be huge animals roaming the earth. Dinosaurs could be enormous, and you could imagine a Bronze Age hunter stumbling upon the fossilized bones of one of them and jumping to conclusions.

Even in recent geological time, it was only at the end of the last Ice Age that the mammoths and so many other “megafauna” were wiped out. (Today’s environmental movement tends to want to blame humans for everything bad, including this, but the evidence can be twisted just about any way you like.) In these cases, we can see the possibility that early human bands did meet these true giants, and they would have told stories about them. In time, those stories, as such stories tend to do, could have become legendary. For dragons, this one doesn’t matter too much, but it’s a point in favor of the idea that ancient peoples saw giant creatures—or their remains—and mythologized them into dragons and giants and everything else.

The nature of the beast

Moving far forward in time, we can see that the modern era’s literature has taken the time-honored myth of the dragon and given it new direction. At some point in the last few decades, authors seem to have decided that dragons must make sense. Sure, that’s completely silly from a mythological point of view, but that’s how it is.

Even in older stories, though, dragons had a purpose. That purpose was different for different stories, as it is today. For many of them, the dragon is a nemesis, an enemy. Sometimes, it’s essentially a force of nature, if not a god in its own right. In a few, dragons are good guys, protectors. Christian cultures in medieval times liked to use the slaying of a dragon as a symbol for the defeat of paganism. But it’s only relatively recently that the idea of dragons as “people” has become popular. Nowadays, we can find fiction where dragons are represented as magicians, sages, and oracles. A few settings even turn them into another sapient race, with their own civilization, culture, religion, and so on.

The form of dragons also depends a lot on which mythos we’re talking about. The modern perception of a dragon as a winged, bipedal serpent who breathes fire and hoards gold (in other words, more like the wyvern) is just one possibility. Plenty of cultures have wingless dragons, and most of the “true” dragons have no legs; they’re more like giant snakes. Still, there’s an awful lot of variation, and there’s no single, definitive version of a dragon.

Your own dragon

Dragons in a work of fiction, whether novel or film or game, need to be there for a reason, if you want a coherent story. You don’t have to work out a whole ecological treatise on them, showing their diets, sleep patterns, and reproductive habits—Tolkien’s dragons, for example, were supernatural creations, so they didn’t have to make scientific sense—but you should know why a dragon appears.

If there’s only one of them, there’s probably a reason why. Maybe it’s a demon, or a creation of the gods, or an avatar of chaos. Maybe it’s the sole survivor of its kind, frozen in time for millennia (that’s a big spoiler, but I’m not going to tell you for what). Whatever you come up with, you should be able to justify it with something more than “because it’s there”. The more dragons you have, the more this problem can grow. In the extreme, if they’re everywhere, why aren’t they running things?

Beyond their reason for existing in the first place, you need to think about their story role. Are they enemies? Are they good or evil? Can they talk? What are they like? Smaug was greedy and haughty, for instance, and it’s a conceit of D&D that dragons are complex beings that are completely misunderstood by us lesser mortals simply because we can’t understand their true motives.

Are there different kinds of dragons? Again we can look at D&D, which has a bewildering assortment even before we include wyverns, lesser drakes, and the like. Of course, a game will need a different notion of role than a novel, and gamers like variation in their enemies, but only the most jaded player would think of a dragon as anything less than a major boss character.

Another thing that’s popular is the idea that dragons can change their form to look human. This might be derived from RPGs, or RPGs might have taken it from an earlier source. However it worked out, a lot of people like the idea of a shapeshifting dragon. (Half the characters in the aforementioned Malazan series seem to be like this, and that’s not the only example in fantasy.) Shapechanging, of course, is an important part of a lot of fantasy, and I might do a post on it later on. It is another interesting possibility, though, if you can get it right.

In a very big way, dragons-as-people pose the same problem as other fantasy races, as well as sci-fi aliens. The challenge here is to make something that feels different, something that isn’t quite human, while still making it believable for the story at hand. If dragons live for 500 years, for example, they will have a different outlook on life and history than we would. If they lay eggs—and who doesn’t like dragon eggs?—they won’t understand the pain and danger of live childbirth, among other things. The ways in which a dragon isn’t like a human are breeding grounds for conflict, both internal and external. All you have to do is follow the notion towards its logical conclusion. You know, just like everything else.

In conclusion, I’d like to say that I do like dragons, when they’re done right. They can be these imposing, alien presences beyond reason or understanding, and that is something I find interesting. But in the wrong hands, they turn into little more than pets or mounts, giant versions of dogs and horses that happen to have scales. Dragons don’t need to be noble or evil, but they should have an impact when you meet one. I mean, you’d feel amazed if you met one in real life, wouldn’t you?

Character alignment

If you’ve ever played or even read about Dungeons & Dragons or similar role-playing games (including derivative RPGs like Pathfinder or even computer games like Nethack), you might have heard of the concept of alignment. It’s a component of a character that, in some cases, can play an important role in defining that character. Depending on the Game Master (GM), alignment can be one more thing to note on a character sheet before forgetting it altogether, or it can be a role-playing straitjacket, a constant presence that urges you towards a particular outcome. Good games, of course, place it somewhere between these two extremes.

The concept also has its uses outside of the particulars of RPGs. Specifically, in the realm of fiction, the notion of alignment can be made to work as an extra “label” for a character. Rather than totally defining the character, pigeonholing him into one of a few boxes, I find that it works better as a starting point. In a couple of words, we can capture a bit of a character’s essence. It doesn’t always work, and it’s far too coarse for much more than a rough draft, but it can neatly convey the core of a character, giving us a foundation.

First, though, we need to know what alignment actually is. In the “traditional” system, it’s a measure of a character’s nature on two different scales. These each have three possible values; elementary multiplication should tell you that we have nine possibilities. Clearly, this isn’t an exact science, but we don’t need it to be. It’s the first step.

One of the two axes in our alignment graph is the time-honored spectrum of good and evil. A character can be Good, Evil, or Neutral. In a game, these would be quite important, as some magic spells detect Evil or only affect Good characters. Also, some GMs refuse to allow players to play Evil characters. For writing, this distinction by itself matters only in certain kinds of fiction, where “good versus evil” morality is a major theme. Mythic fantasy, for example, is one of these.

The second axis is a little harder to define, even among gamers. The possibilities, again, are threefold: Lawful, Chaotic, or Neutral. Broadly, this is a reflection of a character’s willingness to follow laws, customs, and traditions. In RPGs, it tends to have more severe implications than morality (e.g., D&D barbarians can’t be Lawful), but less severe consequences (few spells, for example, only affect Chaotic characters). In non-gaming fiction, I find the Lawful–Chaotic continuum to be more interesting than the Good–Evil one, but that’s just me.

As I said before, there are nine different alignments. Really, all you do is pick one value from each axis: Lawful Good, Neutral Evil, etc. Each of these affects gameplay and character development, at least if the GM wants it to. And, as it happens, each one covers a nice segment of possible characters in fiction. So, let’s take a look at them.
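To make the combinatorics concrete: the nine alignments are just the Cartesian product of the two three-value axes. A quick sketch in Python (the function and variable names are my own; only the alignment terms themselves come from the games):

```python
from itertools import product

ETHICS = ["Lawful", "Neutral", "Chaotic"]  # the law-chaos axis
MORALS = ["Good", "Neutral", "Evil"]       # the good-evil axis

def alignment_name(ethic, moral):
    """Combine one value from each axis, with the conventional
    name 'True Neutral' for the center of the grid."""
    if ethic == "Neutral" and moral == "Neutral":
        return "True Neutral"
    return f"{ethic} {moral}"

# All nine combinations, from Lawful Good through Chaotic Evil.
ALIGNMENTS = [alignment_name(e, m) for e, m in product(ETHICS, MORALS)]
```

Three values on one axis times three on the other gives exactly the nine labels discussed below.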

Lawful Good

We’ll start with Lawful Good (LG). In D&D, paladins must be of this alignment, and “paladin” is a pretty good descriptor of it. Lawful Good is the paragon, the chivalrous knight, the holy saint. It’s Superman. LG characters will be Good with a capital G. They’ll fight evil, then turn the Bad Guys over to the authorities, safe in the knowledge that truth and justice will prevail.

The nicey-niceness of Lawful Good can make for some interesting character dynamics, but they’re almost all centered on situations that force the LG character to make a choice between what is legal and what is morally right. A cop or a knight isn’t supposed to kill innocents, but what happens when his inaction causes them to die? Is war just, even war waged against evil? Is a mass murderer worth saving? LG, at first, seems one-dimensional; in a way, it is. But there’s definitely a story in there. Something like Isaac Asimov’s “Three Laws of Robotics” works here, as does anything with a strict code of morality and honor.

Some LG characters include Superman, obviously, and Eddard Stark of A Song of Ice and Fire (and look where that got him). Real-world examples are harder to come by; a lot of people think they’re Lawful Good (or they aspire to it), but few can actually uphold the ideal.

Neutral Good

You can be good without being Good, and that’s what this alignment is. Neutral Good (NG) is for those that try their best to do the right thing legally, but who aren’t afraid to take matters into their own hands if necessary (but only then). You’re still a Good Guy, but you don’t keep to the same high standards as Lawful Good, nor do you hold others to those standards.

Neutral Good fits any general “good guys” situation, but it can also be more specific. It’s not the perfect paragon that Lawful Good is. NG characters have flaws. They have suspicions. That makes them feel more “real” than LG white knights. The stories for an NG protagonist are easier to write than those for LG, because there are more possibilities. Any good-and-evil story works, for starters. The old “cop gets fired/taken off the case” also fits Neutral Good.

Truly NG characters are hard to find, but good guys that aren’t obviously Lawful or Chaotic fit right in. Obi-Wan Kenobi is a nice example, as Star Wars places a heavy emphasis on morality. The “everyday heroes” we see on the news are usually NG, too, and that’s a whole class that can work in short stories or a serial drama.

Chaotic Good

I’ll admit, I’m biased. I like Chaotic Good (CG) characters, so I can say the most about them, but I’ll try to restrain myself. CG characters are still good guys. They still fight evil. But they do it alone, following their own moral compass that often—but not always—points towards freedom. If laws get in the way of doing good, then a CG hero ignores them, and he worries about the consequences later.

Chaotic Good is the (supposed) alignment of the vigilante, the friendly rogue, the honorable thief, the freedom fighter working against a tyrannical, oppressive government. It’s the guys that want to do what they believe is right, not what they’re told is right. In fiction, especially modern fantasy and sci-fi, when there are characters that can be described as good, they’re usually Chaotic Good. They’re popular for quite a few reasons: everybody likes the underdog, everyone has an inner rebel, and so on. You have a good guy fighting evil, but also fighting the corruption of The System. The stories practically write themselves.

CG characters are everywhere, especially in movies and TV: Batman is one of the most prominent examples from popular culture of the last decade. But Robin Hood is CG, too. In the real world, CG fairly accurately fits most of the heroes of history, those who chose to do the right thing even knowing what it would cost. (If you’re of a religious bent, you could even make the claim that Jesus was CG. I wouldn’t argue.)

Lawful Neutral

Moving away from the good guys, we come to Lawful Neutral (LN). The best way to describe this alignment, I think, is “order above all”. Following the law (or your code of honor, promises, contracts, etc.) is the most important thing. If others come to harm because of it, that’s not your concern. It’s kind of a cold, calculating style, if you ask me, but there’s good to be had in it, and “the needs of the many outweigh the needs of the few” is completely Lawful Neutral in its sentiment.

LN, in my opinion, is hard to write as a protagonist. Maybe that’s my own Chaotic inclination talking. Still, there are plenty of possibilities. A judge is a perfect example of Lawful Neutral, as are beat cops. (More…experienced cops, as well as most lawyers, probably fall under Lawful Evil.) Political and religious leaders both fall under Lawful Neutral, and offer lots of potential. But I think LN works best as the secondary characters. Not the direct protagonist, but not the antagonists, either.

Lawful Neutral, as I said above, best describes anybody whose purpose is upholding the law without judging it. Those people aren’t likely to be called heroes, but they won’t be villains, either, except in the eyes of anarchists.

True Neutral

The intersection of the two alignment axes is the “Neutral Neutral” point, which is most commonly called True Neutral or simply Neutral (N). Most people, by default, go here. Every child is born Neutral. Every animal incapable of comprehending morality or legality is also True Neutral. But some people are there by choice. Whether they’re amoral, or they strive for total balance, or they’re simply too wishy-washy to take a stand, they stay Neutral.

Neutrality, in and of itself, isn’t that exciting. A double dose can be downright boring. But it works great as a starting point. For an origin story, we can have the protagonist begin as True Neutral, only coming to his final alignment as the story progresses. Characters that choose to be Neutral, on the other hand, are harder to justify. They need a reason, although that itself can be cause for a tale. They can make good “third parties”, too, the alternative to the extremes of Good and Evil. In a particularly dark story, even the best characters might never be more “good” than N.

True Neutral people are everywhere, as the people that have no clear leanings in either direction on either axis. Chosen Neutrals, on the other hand, are a little rarer. Chosen neutrality tends to be more common as a quality of a group rather than an individual: Zen Buddhism, Switzerland.

Chaotic Neutral

Seasoned gamers are often wary of Chaotic Neutral (CN), if only because it’s often used as the ultimate “get out of jail free” card of alignment. Some people take CN as saying, “I can do whatever I want.” But that’s not it at all. It’s individualism, freedom above all. Egalitarianism, even anarchy. For Chaotic Neutral, the self rules all. That doesn’t mean you have a license to ignore consequences; on the contrary, CN characters will often run right into them. But they’ll chalk that up as another case of The Man holding them back.

If you don’t consider Chaotic Neutral to be synonymous with Chaotic Stupid, then you have a world of character possibilities. Rebels of all kinds fall under CN. Survivalists fit here, too. Stories with a CN protagonist might be full of reflection, or of fights for freedom. Chaotic Neutral antagonists, by contrast, might stray more into the “do what I want” category. In fiction, the alignment tends to show up more in stories where there isn’t a strong sense of morality, where there are no definite good or bad guys. A dystopic sci-fi novel could easily star a CN protagonist, but a socialist utopia would see them as the villains.

Most of the less…savory sorts of rogues are CN, at least those that aren’t outright evil. Stoners and hippies, anarchists and doomsday preppers, all of these also fit into Chaotic Neutral. As for fictional characters, just about any “anti-hero” works here. The Punisher might be one example.

Lawful Evil

Evil, it might be said, is relative. Lawful Evil (LE) might even be described as contentious. I would personally describe it as tyranny, oppression. The police state in fiction is Lawful Evil, as are the police who uphold it and the politicians who created it. For the LE character, the law is the perfect way to exploit people.

Evil of any stripe works best for the bad guys, and it takes an amazing writer to pull off an Evil protagonist. LE villains, however, are perfect, especially when the hero is Chaotic Good. Greedy corporations, rogue states, and the Machiavellian schemer are all Lawful Evil, and they all make great bad guys. Like CG, Lawful Evil baddies are downright easy to write, although they’re certainly susceptible to overuse.

LE characters abound, nearly always as antagonists. Almost any “evil empire” of fiction is Lawful Evil. The corrupted churches popular in medieval fantasy fall under this alignment, as well. In reality, too, we can find plenty of LE examples: Hitler, the Inquisition, Dick Cheney, the list goes on.

Neutral Evil

Like Neutral Good, Neutral Evil (NE) fits best into stories where morality is key. But it’s also the best alignment to describe the kind of self-serving evil that marks the sociopath. A character who is NE is probably selfish, certainly not above manipulating others for personal gain, but definitely not insane or destructive. Vindictive, maybe.

Neutral Evil characters tend to fall into a couple of major roles. One is the counterpart to NG: the Bad Guy. This is the type you’ll see in stories of pure good and evil. The second is the true villain, the kind of person who sees everyone around him as a tool to be used and—when no longer required—discarded. It’s an amoral sort of evil, more nuanced than either Lawful or Chaotic, and thus more real. It’s easy to truly hate a Neutral Evil character.

Some of the best antagonists in fiction are NE, but so are some of the most clichéd. The superhero’s nemesis tends to be Neutral Evil, unless he’s a madman or a tyrant; the same is true of the bad guys of action movies. Real-life examples also include many corporate executives (studies claim that as many as 90% of the highest-paid CEOs are sociopaths), quite a few hacking groups (those that are doing it for the money, especially), and likely many of the current Republican presidential candidates (the Democrats tend to be Lawful Evil).

Chaotic Evil

The last of our nine alignments, Chaotic Evil (CE) embraces chaos and madness. It’s the alignment of D&D demons, true, but also psychopaths and terrorists. Pathfinder’s “Strategy Guide” describes CE as “Just wants to watch the world burn”, and that’s a pretty good way of putting it.

For a writer, though, Chaotic Evil is almost a trap. It’s almost too easy. CE characters don’t need motivations, or organization, or even coherent plans. They can act out of impulse, which is certainly interesting, but maybe not the best for characterization. It’s absolutely possible to write a Chaotic Evil villain (though probably impossible to write a believably CE anti-hero), but you have to be careful not to give in to him. You can’t let him take over, because he could do anything. Chaos is inherently unpredictable.

Chaotic Evil is easy to find in fiction. Just look at the Joker, or Jason Voorhees, or every summoned demon and Mad King in fantasy literature. And, unfortunately, it’s far too easy to find CE people in our world’s history: Osama bin Laden, Charles Manson, the Unabomber, and a thousand others along the same lines.

In closing

As I stated above, alignment isn’t the whole of a character. It’s not even a part, really. It’s a guideline, a template to quickly find where a character stands. Saying that a protagonist is Chaotic Good, for instance, is a shorthand way of specifying a number of his qualities. It tells a little about him, his goals, his motivations. It even gives us a hint as to his enemies: Lawful and/or Evil characters and groups, those most distant on either alignment axis.

In some RPGs, acting “out of alignment” is a cardinal sin. It certainly is for player characters like D&D paladins, who have to adhere to a strict moral code. (How strict that code is depends on the GM.) For a fictional character in a story, it’s not so bad, but it can be jarring if it happens suddenly. Given time to develop, on the other hand, it’s a way to show the growth of a character’s morality. Good guys turn bad, lawmen go rogue, but not on a whim.

Again, alignment is not a straitjacket to constrain you, but it can be a writing aid. Sure, it doesn’t fit all sizes. As a lot of gamers will tell you, it’s not even necessary for an RPG. But it’s one more tool at our disposal. This simple three-by-three system lets us visualize, at a glance, a complex web of relationships, and that can be invaluable.

Writing inertia

It’s a well-known maxim that an object at rest tends to stay at rest, while an object in motion tends to stay in motion. This is such an important concept that it has its own name: inertia. But we usually think of it as a scientific idea. Objects have inertia, and they require outside forces to act on them if they are to start or stop moving.

Inertia, though, in a metaphorical sense, isn’t restricted to physical science. People have a kind of inertia, too. It takes an effort to get out of bed in the morning; for some people, this is a lot more effort than for others. Athletic types have a hard time relaxing, especially after they’ve passed the apex of their athleticism, while those of us that are more…sedentary have a hard time improving ourselves, simply because it’s so much work.

Writers also have inertia. I know this from personal experience. It takes a big impetus to get me to start writing, whether a post like this, a short story, a novel, or some bit of software. But once I get going, I don’t want to stop. In a sense, it’s like writer’s block, but there’s a bit more to it.

Especially when writing a new piece of fiction (as opposed to a continuation of something I’ve already written), I’ve found it really hard to begin. Once I have the first few paragraphs, the first lines of dialogue, and the barest of setting and plot written down (or typed up), it feels like a dam bursting. The floodgates open, and I can just keep going until I get tired. It’s the same for posts like this. (“Let’s make a language” and the programming-related posts are a lot harder.)

At the start of a new story, I don’t think too much. The hardest part is the opening line, because that requires the most motivation. After that, it’s names. But the text itself, once I get over the first hurdles, seems to flow naturally. Sometimes it’s a trickle, others it’s a torrent, but it’s always there.

In a couple of months, I’ll once again take on the NaNoWriMo (National Novel Writing Month) challenge. Admittedly, I don’t keep to the letter of the rules, but I do keep the original spirit: write a novel of 50,000 words in the month of November. For me, that’s the important aspect. It doesn’t matter that it might be an idea I already had but never started because, as I said, writing inertia means it’s difficult for me to get over that hump and start the story. The timed challenge of NaNoWriMo is the impetus, the force that motivates me.

And I like that outside motivation. It’s why I’ve been “successful”, by my own definition, three out of the four times I’ve tried. In 2010, my first try, I gave up after 10 days and about 8,000 words. Real life interfered in 2011; my grandfather had a stroke on the 3rd of November, and nobody in my extended family got much done that month. Since then, though, I’m essentially 3-for-3: 50,000 words in 2012 (although that was only about a fifth of the whole novel); a complete story at 49,000 words in 2013 (I didn’t feel the need to pad it out); and 50,000 last year (that one’s actually getting released soon, if I have my way). Hopefully, I can make it four in a row.

So that’s really the idea of this post. Inertia is real, writing inertia doubly so. If you’re feeling it, and November seems too far away, find another way. There are a few sites out there with writing prompts, and you can always find a challenge to help focus you on your task. Whatever you do, it’s worth it to start writing. And once you start, you’ll keep going until you have to stop.

Irregularity in language

No natural language in the world is completely regular. We think of English as an extreme of irregularity, and it really is, but all languages have at least some part of their grammar where things don't always go as planned. And there's nothing wrong with that; it's a natural part of a language's evolution.

Conlangs, on the other hand, are often far too regular. For an auxlang, intended for clear communication, that's actually a good thing. There, you want regularity, predictability. You want the "clockwork morphology" of Esperanto or Lojban. The problem comes with artistic conlangs. These, especially those made by novices, can be too predictable. It's not exactly a big deal (a conlang where every plural ends in -i won't break the immersion of a story for most readers), but it's a little wart that you might want to do away with.

Count the ways

Irregularity comes in a few different varieties. Mostly, though, they’re all the same: a place where the normal rules of grammar don’t quite work. English is full of these, as everyone knows. Plurals are marked by -s, except when they’re not: geese, oxen, deer, people. Past tense is -ed, except that it sometimes isn’t: go and went. (“Strong” verbs like “get” that change vowels don’t really count, because they are regular, but in their own way.) And let’s not even get started on English orthography.

Some other languages aren't much better. French has a spelling system that matches its pronunciation in theory only, and Irish looks like a keyboard malfunction. Inflectional grammars are full of oddities; ask any Latin student. Arabic's broken plurals are just that: broken. Chinese tone patterns change in complex and unpredictable ways, despite tone supposedly being an integral part of a morpheme.

On the other hand, there are a few languages out there that seem to strive for regularity. Turkish is always cited as an example here, the joke being that there’s one irregular verb, and it’s only there so that students will know what to expect when they study other languages.

Conlangs are a sharp contrast. Esperanto’s plurals are always -j. There’s no small class of words marked by -m or anything like that. Again, for the purposes of clarity, that’s a good thing. But it’s not natural.

Phonological irregularity

Irregularity in a language’s phonology happens for a few different reasons. However, because phonology is so central to the character of a language, it can be hard to spot. Here are a few places where it can show up:

  • Borrowing: Especially as English (American English in particular) suffuses every corner of the planet, languages can pick up new words and bring new sounds with them. This did happen in English’s history, as it brought the /ʒ/ sound (“pleasure”, etc.) from French, but a more extreme example is the number of Bantu languages that borrowed click sounds from their Khoisan neighbors.

  • Onomatopoeia: The sounds of nature can be emulated by speech, but there’s not always a perfect correspondence between the two. The “meow” of a cat, for instance, contains a sequence of sounds rare in the rest of English.

  • Register: Slang and colloquialism can create phonological irregularities, although this isn’t all that common. English has “yeah” and “nah”, both with a final /æ/, which appears in no other word.

Grammatical irregularity

This is what most people think of when they consider irregularity in a language. Examples include:

  • Irregular marking: We’ve already seen examples of English plurals and past tense. Pretty much every other natural language has something else to throw in here.

  • Gender differences: I’m not just talking about the weirdness of having the word for “girl” in the neuter gender. The Romance languages also have a curious oddity where some masculine-looking words take a feminine article, as in Spanish la mano.

  • Number differences: This includes all those English words where the plural is the same as the singular, like deer and fish, as well as plural-only nouns like scissors.

  • Borrowing: Loanwords can bring their own grammar with them. What’s the plural of manga or even rendezvous?

Lexical irregularity

Sometimes words just don't fit. Look at the English verb to be. In the present tense, it's am, is, or are; in the past, was or were; and so on. Totally unpredictable. This can happen in any language, and one way it arises is through drift in a word's meaning.

  • Substitution: One word form can be swapped out for another. This is the case with to be and its varied forms.

  • Meaning changes: Most common in slang, like using “bad” to mean “good”.

  • Useless affixes: "Inflammable means flammable?" The same shift is happening now as "irregardless" becomes more widespread.

  • Archaisms: Old forms can be kept around in fixed phrases. In English, this is most commonly the case with the Bible and Shakespeare, but “to and fro” is still around, too.

Orthographic irregularity

There are spelling bees for English. How many other languages can say that? How many would want to? As a language evolves, its orthography doesn’t necessarily follow, especially in languages where the standard spelling was fixed long ago. Here are a few ways that spelling can drift from pronunciation:

  • Silent letters: English is full of these, French more so. And then there are all those extra silent letters added to make words look more like Latin. Case in point: debt didn't always have the b; it was added to remind people of Latin debitum. (Silent letters can even be dialectal in nature. I pronounce wh and w differently, but few other Americans do.)

  • Missing letters: Nowhere in English can you have dg followed by a consonant except in the American spelling of words like judgment, where the e that would soften the g is implied. (I lost a spelling bee on this very word, in fact, but that was a long time ago.)

  • Sound changes: These can come from evolution or what seems like sheer perversity. (English gh is a case of the latter, I think.)

  • Borrowing: As phonological understanding has grown, we’ve adopted a kind of “standard” orthography for loanwords, roughly equivalent to Latin, Spanish, or Italian. Problem is, this is nothing at all like the standard orthography already present in English. And don’t even get me started on the attempts at rendering Arabic words into English letters.

In closing

All this is not to say that you should run off and add hundreds of irregular forms to your conlang. Again, if it’s an auxlang, you don’t want that. Even conlangs made for a story should use irregular words only sparingly. But artistic conlangs can gain a lot of flavor and “realism” from having a weird word here and there. It makes things harder to learn, obviously, but it’s the natural thing to do.

Death and remembrance

Early in the morning of August 16 (the day I’m writing this), my stepdad’s mother passed away after a lengthy and increasingly tiresome battle with Alzheimer’s. This post isn’t a eulogy; for various reasons, I don’t feel like I’m the right person for such a job. Instead, I’m using it as a learning experience, as I have the past few years during her slow decline. So this post is about death, a morbid topic in any event. It’s not about the simple fact of death, however, but how a culture perceives that fact.

Weight of history

Burial ceremonies are some of the oldest evidence of true culture and civilization that we have. The idea of burying the dead with mementos even extends across species boundaries: Neanderthal remains have been found with tools. And the dead, our dead, are numerous, as the rising terrain levels in parts of Europe (caused by increasing numbers of burials throughout the ages) can attest. Death’s traditions are evident from the mummies of Egypt and Peru, the mausoleums of medieval Europe or the classical world, and the Terracotta Army of China. All societies have death, and they all must confront it, so let’s see how they do it.

The role of religion

Religion, in a very real sense, is ultimately an attempt to make sense of death’s finality. The most ancient religious practices we know deal with two main topics: the creation of the world, and the existence and form of an afterlife. Every faith has its own way of answering those two core mysteries. Once you wade through all the commandments and prohibitions and stories and revelations, that’s really all you’re left with.

One of the oldest and most enduring ideas is the return to the earth. This one is common in “pagan” beliefs, but it’s also a central concept in the Abrahamic religions of the modern West. “Ashes to ashes, dust to dust,” is one popular variation of the statement. And it fits the biological “circle of life”, too. The body of the deceased does return to the earth (whether in whole or as ashes), and that provides sustenance, allowing new life to bloom.

More organized religion, though, needs more, and that is where we get into the murky waters of the soul. What that is, nobody truly knows, and that’s not even a metaphor: the notion of “soul” is different for different peoples. Is it the essence of humanity that separates us from lower animals? Is it intelligence and self-awareness? A spark of the divine?

In truth, it doesn’t really matter. Once religion offers the idea of a soul that is separate from the body, it must then explain what happens to that soul once the body can no longer support it. Thousands of years worth of theologians have argued that point, up to—and including—starting wars in the name of their own interpretation. The reason they can do that is simple: all the ideas are variations on the same basic theme.

That basic theme is this: people die. That much can't be argued. What happens next is the realm of God or gods, but it usually follows a general pattern. Souls are judged based on some subset of their actions in life, such as good deeds versus bad, adherence to custom or precept, or general faithfulness. Their form of afterlife then depends on the outcome. "Good" souls (whatever that is decided to mean) are rewarded in some way, while "bad" souls are condemned. The harsher faiths make this condemnation last forever, but it's most often (and more justly, in my opinion) for a period of time proportional to the evils committed in life.

The reward, in general, is a second, usually eternal life spent in a utopia, however that would be defined by the religion in question. Christianity, for example, really only specifies that souls in heaven are in the presence of God, but popular thought has transformed that to the life of delights among the clouds that we see portrayed in media; early Church thought was an earthly heaven instead. Islam, popularly, has the "72 eternal virgins" presented to the faithful in heaven. In Norse mythology, valiant souls are allowed to dine with the gods and heroes in Valhalla, but they must then fight the final battle, Ragnarök (which they are destined to lose, strangely enough). In even these three disparate cases, you can see the similarities: the good receive an idyllic life, something they could only dream of in the confines of their body.

Ceremonies of death

Religion, then, tells us what happens to the soul, but there is still the matter of the body. It must be disposed of, and even early cultures understood this. But how do we dispose of something that was once human while retaining the dignity of the person who once inhabited it?

Ceremonial burial is the oldest trick in the book, so to speak. It’s one of the markers of intelligence and organization in the archaeological record, and it dates back to long before our idea of civilization. And it’s still practiced on a wide scale today; my stepdad’s mother, the ultimate cause of this post, will be buried in the coming days.

Burial takes different forms for different peoples, but it’s always a ceremony. The dead are often buried with some of their possessions, and this may be the result of some primal belief that they’ll need them in the hereafter. We don’t know for sure about the rites and rituals of ancient cultures, but we can easily imagine that they were not much different from our own. We in the modern world say a few words, remember the deeds of the deceased, lower the body into the ground, leave a marker, and promise to come back soon. Some people have more elaborate shrines, others have only a bare stone inscribed with their name. Some families plant flowers or leave baubles (my cousin, who passed away at the beginning of last year, has a large and frankly gaudy array of such things adorning his grave, including solar-powered lights, wind chimes, and pictures).

Anywhere the dead are buried, it’s pretty much the same. They’re placed in the ground in a special, reserved place (a cemetery). The graves are marked, both for ease of remembrance and as a helpful reminder of where not to bury another. The body is left in some enclosure to protect it from prying eyes, and keepsakes are typically beside it.

Burial isn’t the only option, though, not even in the modern world. Cremation, where the body is burned and rendered into ash, is still popular. (A local scandal some years ago involved a crematorium whose owner was, in fact, dumping the bodies in a pond behind the place and filling the urns with things like cement or ground bones.) Today, cremation is seen as an alternative to burial, but some cultures did (and do) see it or something similar as the primary method of disposing of a person’s earthly remains. The Viking pyre is fixed in our imagination, and television sitcoms almost always have a dead relative’s ashes sitting somewhere vulnerable.

I’ll admit that I don’t see the purpose of cremation. If you believe in the resurrection of souls into their reformed earthly bodies, as in some varieties of Christianity and Judaism, then you’d have to view the idea of burning the body to ash as something akin to blasphemy. On the other hand, I can see the allure. The key component of a cremation is fire, and fire is the ultimate in human tools. The story of human civilization, in a very real sense, is the story of how we have tamed fire. So it’s easy to see how powerful a statement cremation or a funeral pyre can make.

Burying and burning were the two main ways of disposing of remains for the vast majority of humanity’s history. Nowadays, we have a few other options: donating to science, dissection for organs, cryogenic freezing, etc. Notice, though, that these all have a “technological” connotation. Cryogenics is the realm of sci-fi; organ donation is modern medicine. There’s still a ceremony, but the final result is much different.

Closing thoughts

Death in a culture brings together a lot of things: religion, ritual, the idea of family. Even the legal system gets involved these days, because of life insurance, death certificates, and the like. It's more than just the end of life, and there's a reason why the most powerful, most immersive stories are often those that deal with death in a realistic way. People mourn, they weep, they celebrate the life and times of the deceased.

We have funerals and wakes and obituaries because no man is an island. Everyone is connected, everyone has family and friends. The living are affected by death, and far more than the deceased. We’re the ones who feel it, who have to carry on, and the elaborate ceremonies of death are our oldest, most human way of coping.

We honor the fallen because we knew them in life, and we hope to know them again in an afterlife, whatever form that may take. But, curiously, death has a dichotomy. Religion clashes with ancient tradition, and the two have become nearly inseparable. A couple of days from now, my stepdad might be sitting in the local funeral home’s chapel, listening to a service for his mother that invokes Christ and resurrection and other theology, but he’ll be looking at a casket that is filled with tiny treasures, a way of honoring the dead that has continued, unbroken, for tens of thousands of years. And that is the truth of culture.

Thoughts on ES6

ES6 is out, and the world will never be the same. Or something like that. JavaScript won’t be the same, at least once the browsers finish implementing the new standard. Of course, by that time, ES7 will be completed. It’s just like every other standardized language. Almost no compilers fully support C++14, even in 2015, and there will certainly be holes in their coverage two years from now, when C++17 (hopefully) arrives. C# and Java programmers are lucky, since their releases are dictated by the languages’ owners, who don’t have to worry about compatibility. The price of a standard, huh?

Anyway, ES6 does bring a lot to the table. It’s almost a total overhaul of JavaScript, in my opinion, and most of it looks to be for the better. New idioms and patterns will arise over the coming years. Eventually, we may even start talking about “Modern JavaScript” the way we do “Modern C++” or “Modern Perl”, indicating that the “old” way of thinking is antiquated, deprecated. The new JS coder in 2020 might wonder why we ever did half the things we did, the same way kids today wonder how anyone got by with only 4 MB of memory or a 250 MB hard drive (like my first PC).

Babel has an overview of the new features of ES6, so I won’t repeat them here. I will offer my opinions on them, though.

Classes and Object Literals

I already did a post on classes, with a focus on using them in games. But they’re useful everywhere, especially as so many JavaScript programmers come in from languages with a more “traditional” approach to objects. Even for veterans, they come in handy. We no longer have to reinvent the wheel or use an external library for simple inheritance. That’s a good thing.

The enhancements to object literals are nice, too. Mostly, I like the support for prototype objects, methods, and super. Those will give a big boost to the places where we don’t use classes. The shorthand assignments are pure sugar, but that’s really par for the course in ES6: lots of syntactic conveniences to help us do the things we were already doing.
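To make that concrete, here's a minimal sketch of ES6 class inheritance alongside the object-literal shorthand (names like Entity and Player are purely illustrative):

```javascript
// ES6 class syntax with inheritance and super
class Entity {
  constructor(name) {
    this.name = name;
  }
  describe() {
    return this.name;
  }
}

class Player extends Entity {
  constructor(name, hp) {
    super(name);                 // must call super() before using `this`
    this.hp = hp;
  }
  describe() {                   // overrides, but can still reach the parent
    return `${super.describe()} (${this.hp} HP)`;
  }
}

const p = new Player("Alice", 10);
console.log(p.describe());       // "Alice (10 HP)"

// Object-literal enhancements: shorthand properties and methods
const x = 1, y = 2;
const point = {
  x,                             // same as x: x
  y,
  toString() {                   // method shorthand
    return `(${this.x}, ${this.y})`;
  }
};
console.log(String(point));      // "(1, 2)"
```

Under the hood this is still the familiar prototype chain; the class syntax is sugar over it, which is exactly why it interoperates with older code.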

Modules

I will do a post on modules for game development, I promise. For now, I’d like to say that I like the idea of modules, though I’m not totally sold on their implementation and syntax. I get why the standards people did it the way they did, but it feels odd, especially as someone who has been using CommonJS and AMD modules for a couple of years.

No matter what you think of them, modules will be one of the defining points of ES6. Once modules become widespread, Browserify becomes almost obsolete, RequireJS entirely so. The old pattern of adding a property to the global window goes…out the window. (Sorry.) ES6 modules are a little more restrictive than those of Node, but I’d start looking into them as soon as the browsers start supporting them.
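For reference, the basic shape of that syntax looks like this (the file names are hypothetical, and the two "files" are shown in one snippet for brevity):

```javascript
// math.js: exporting a named value and a default
export const TAU = 6.283185307179586;
export default function double(x) {
  return x * 2;
}

// main.js: importing both from that module
import double, { TAU } from "./math.js";
console.log(double(TAU));
```

Note that the default export is imported without braces while named exports need them, and that imports are static (resolved at parse time), which is part of why the syntax differs from CommonJS's require().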

Promises

Async programming is the hot thing right now, and promises are ES6's answer. It's not full-on threading, and it's distinct from Web Workers, but this could be another area to watch. Having a language-supplied async system will mean that everybody can use it, just like C++11's threads and futures. Once people can use something, they will use it.

Promises, I think, will definitely come into their own for AJAX-type uses. If JavaScript ever gets truly big for desktop apps, then event-driven programming will become even more necessary, and that's another place I see promises becoming important. Games will probably make use of them, too, if they don't cause too much of a hit to speed.
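As a minimal sketch of the pattern, here's a callback-style async operation (setTimeout) wrapped in a Promise; the delay helper is a made-up example, not a library function:

```javascript
// Wrap setTimeout in a Promise so it can be chained with .then()
function delay(ms, value) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(value), ms);
  });
}

delay(10, "loaded")
  .then((result) => result.toUpperCase()) // each .then() transforms the value
  .then((result) => console.log(result))  // "LOADED"
  .catch((err) => console.error(err));    // one place to handle any failure
```

The big win over raw callbacks is that flat chain: no pyramid of nested functions, and a single catch() for errors anywhere in the sequence.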

Generators and Iterators

Both of these really help a lot of programming styles. (Comprehensions do, too, but they were pushed back to ES7.) Iterators finally give us an easy way of looping over an array’s values, something I’ve longed for. They also work for custom objects, so we can make our own collections and other nifty things.

You might recognize generators from Python. (That’s where I know them from.) When you use them, it will most likely be for making your own iterable objects. They’ll also be handy for async programming and coroutines, if they’re anything like their Python counterparts.
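A short sketch of both features together; range and bag are invented names for illustration:

```javascript
// A generator function produces an iterable sequence lazily
function* range(start, end) {
  for (let i = start; i < end; i++) {
    yield i;
  }
}

// for...of consumes any iterable, value by value
const squares = [];
for (const n of range(1, 4)) {
  squares.push(n * n);
}
console.log(squares); // [1, 4, 9]

// Any object with a Symbol.iterator method is iterable,
// so custom collections work with for...of and spread too
const bag = {
  items: ["a", "b"],
  *[Symbol.iterator]() {
    yield* this.items; // delegate to the array's own iterator
  }
};
console.log([...bag]); // ["a", "b"]
```
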

Syntactic Sugar

A lot of additions to ES6 are purely for aesthetic purposes, so I’ll lump them all together here, in the same order as Babel’s “Learn ES2015” page that I linked above.

  • Arrows: Every JavaScript clone (CoffeeScript, et al.) has a shortcut for function literals, so there’s no reason not to put one in the core language. ES6 uses the “fat” arrow =>, which stands out. I like that, and I’ll be using it as soon as possible, especially for lambda-like functions. The only gotcha here? Arrow functions don’t get their own this, so watch out for that.

  • Template Strings: String interpolation using ${}. Took long enough. This will save pinkies everywhere from over-stretching. Anyway, there’s nothing much to complain about here. It’s pretty much the same thing as PHP, and everybody likes that. Oh, wait…

  • Destructuring: One of those ideas where you go, “Why didn’t they think of it sooner?”

  • Function Parameters: All these seem to be meant to get rid of any use for arguments, which is probably a good thing. Default parameters were sorely needed, and “rest” parameters will mean one more way to prevent off-by-one errors. My advice? Start using these ASAP.

  • let & const: Everybody complains about JavaScript’s scoping rules. let is the answer to those complaints. It gives you block-scoped variables, just like you know from C, C++, Java, and C#. var is still there, though, as it should be. For newer JS coders coming from other languages, I’d use let everywhere to start. const gives you, well, constants. Those are nice, but module exports remove one reason for constants, so I don’t see const getting quite as much use.

  • Binary & Octal Literals: Uh, yeah, sure. I honestly don’t know how much use these are in any higher-level language nowadays. But they don’t hurt me just by being there, so I’m not complaining.
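Several of the conveniences above can be shown in a few lines (all names are illustrative):

```javascript
// Arrow function + default parameter + template string
const greet = (name = "world") => `Hello, ${name}!`;
console.log(greet());            // "Hello, world!"

// Array destructuring with a rest element
const [first, ...rest] = [1, 2, 3];
console.log(first, rest);        // 1 [2, 3]

// Object destructuring with a default for a missing property
const { x, y = 0 } = { x: 5 };
console.log(x, y);               // 5 0

// Binary and octal literals, plus block-scoped let
let mask = 0b1010 | 0o17;        // 10 | 15 = 15
for (let i = 0; i < 2; i++) {    // `i` exists only inside this loop
  mask += i;
}
console.log(mask);               // 16
```
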

Library Additions

This is kind of an “everything else” category. ES6 adds quite a bit to the standard library. Everything that I don’t feel is big enough to warrant its own section goes here, again in the order shown on “Learn ES2015”.

  • Unicode: It’s about time. Not just the Unicode literal strings, but the String and RegExp support for higher characters. For anyone working with Unicode, ES6 is a godsend. Especially if you’re doing anything with emoji, like, say, making a language that uses them.

  • Maps and Sets: If these turn out to be more efficient than plain objects, then they’ll be perfect; otherwise, I don’t think they are terribly important. In fact, they’re not that hard to make yourself, and it’s a good programming exercise. WeakMap and WeakSet are more specialized; if you need them, then you know you need them, and you probably won’t care about raw performance.

  • Proxies: These are going to be bigger on the server side, I think. Testing will get a big boost, too, but I don’t see proxies being a must-have feature in the browser. I’d love to be proven wrong, though.

  • Symbols: Library makers might like symbols. With the exception of the builtins, though, some of us might not even notice they’re there. Still, they could be a performance boost if they’re faster than strings as property keys.

  • Subclassing: Builtin objects like Array and Date can be subclassed in ES6. I’m not sure how I feel on that. On the plus side, it’s good for consistency and for the times when you really do need a custom array that acts like the real thing. However, I can see this being overused at first.

  • New APIs: The new builtin methods are all welcome additions. The Array stuff, particularly, is going to be helpful. Math.imul() and friends will speed up low-level tasks, too. And the new methods for String (like startsWith()) should have already been there years ago. (Of all the ES6 features, these are the most widely implemented, so you might be able to use them now.)

  • Reflection: Reflection is always a cool feature, but it almost cries out to be overused and misused. Time will tell.
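A quick sampler of the additions mentioned above, using only standard ES6 builtins:

```javascript
// Map: real key-value storage, with keys of any type
const scores = new Map([["alice", 3], ["bob", 5]]);
scores.set("carol", 4);
console.log(scores.get("bob"));                 // 5

// Set: a collection that collapses duplicates
const tags = new Set(["a", "b", "a"]);
console.log(tags.size);                         // 2

// New String and Array methods
console.log("filename.txt".startsWith("file")); // true
console.log([1, 2, 3, 4].find((n) => n > 2));   // 3
console.log(Array.from("abc"));                 // ["a", "b", "c"]
```
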

Conclusions

ES6 has a lot of new, exciting features, but it’ll take a while before we can use them everywhere. Still, I think there’s enough in there to get started learning right now. But there are going to be a lot of projects that will soon become needless. Well, that’s the future for you, and what a bright future it is. One thing’s for sure: JavaScript will never be the same.

First Languages for Game Development

If you’re going to make a game, you’ll need to do some programming. Even the best drag-and-drop or building-block environment won’t always be enough. At some point, you’ll have to make something new, something customized for your on game. But there are a lot of options out there. Some of them are easier, some more complex. Which one to choose?

In this post, I’ll offer my opinion on that tough decision. I’ll try to keep my own personal feelings about a language out of it, but I can’t promise anything. Also, I’m admittedly biased against software that costs a lot of money, but I know that not everyone feels the same way, so I’ll bite my tongue. I’ll try to give examples (and links!) of engines or other environments that use each language, too.

No Language At All

Examples: Scratch, Construct2, GameMaker

For a very few cases, especially the situation of kids wanting to make games, the best choice of programming language might be “None”. There are a few engines out there that don’t really require programming. Most of these use a “Lego” approach, where you build logic out of primitive “blocks” that you can drag and connect.

This option is certainly appealing, especially for those that think they can’t “do” programming. And successful games have been made with no-code engines. Retro City Rampage, for example, is a game created in GameMaker, and a number of HTML5 mobile games are being made in Construct2. Some other engines have now started creating their own “no programming required” add-ons, like the Blueprints system of Unreal Engine 4.

The problem comes when you inevitably exceed the limitations of the engine, when you need to do something its designers didn't include a block for. For children and younger teens, this may never happen, but anyone wanting to go out of the box might need more than they can get from, say, Scratch's colorful jigsaw pieces. When that happens, some of these engines have a fallback: Construct2 lets you write plugins in JavaScript, while GameMaker has its own language, GML, and the newest version of RPG Maker uses Ruby.

Programming, especially game programming, is hard, there’s no doubt about it. I can understand wanting to avoid it as much as possible. Some people can, and they can make amazing things. If you can work within the limitations of your chosen system, that’s great! If you need more, though, then read on.

JavaScript

Examples: Unity3D, Phaser

JavaScript is everywhere. It’s in your browser, on your phone, and in quite a few desktop games. The main reason for its popularity is almost tautological: JavaScript is everywhere because it’s everywhere. For game programming, it started coming into its own a few years ago, as mobile gaming exploded and browsers became optimized enough to run it at a decent speed. With HTML5, it’s only going to get bigger, and not just for games.

As a language, JavaScript is on the easy side, except for a few gotchas that trip up even experienced programmers. (There’s a reason why it has a book subtitled “The Good Parts”.) For the beginner, it certainly offers the easiest entry: just fire up your browser, open the console, and start typing. Unity uses JS as its secondary language, and about a million HTML5 game engines use it exclusively. If you want to learn, there are worse places to start.
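One classic example of such a gotcha: var is function-scoped, not block-scoped, so every closure created in a loop shares a single variable. This sketch contrasts it with ES6's let:

```javascript
// With var, all three callbacks close over the SAME variable,
// which is 3 by the time any of them runs
var fns = [];
for (var i = 0; i < 3; i++) {
  fns.push(function () { return i; });
}
console.log(fns.map((f) => f()));  // [3, 3, 3], not [0, 1, 2]

// With let, each loop iteration gets its own binding
let fns2 = [];
for (let j = 0; j < 3; j++) {
  fns2.push(() => j);
}
console.log(fns2.map((f) => f())); // [0, 1, 2]
```
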

Of course, the sheer number of engines might be the language’s downfall. Phaser might be one of the biggest pure JS engines right now, but next year it could be all but forgotten. (Outside of games, this is the case with web app frameworks, which come and go with surprising alacrity.) On top of that, HTML5 engines often require installation of NodeJS, a web server, and possibly more. All that can be pretty daunting when all you want to do is make a simple game.

Personally, I think JavaScript is a good starting language if you’re careful. Would-be game developers might be better off starting with Unity or Construct2 (see above) rather than something like Phaser, though.

C++ (with a few words on C)

Examples: Unreal Engine 4, SFML, Urho3D

C++ is the beast of the programming world. It’s big, complex, hard to learn, but it is fast. Most of today’s big AAA games use C++, especially for the most critical sections of code. Even many of the high-level engines are themselves written in C++. For pure performance, there’s not really any other option.

Unfortunately, that performance comes at a price. Speaking as someone who learned C++ as his second programming language, I have to say that it’s a horrible choice for your first. There’s just too much going on. The language itself is huge, and it can get pretty cryptic at times.

C is basically C++’s older brother. It’s nowhere near as massive as C++, and it can sometimes be faster. Most of your operating system is likely written in C, but that doesn’t make it any better of a choice for a budding game programmer. In a way, C is too old. Sure, SDL is a C library, but it’s going to be the lowest level of your game engine. When you’re first starting out, you won’t even notice it.

As much as I love C++ (it’s probably my personal favorite language right now), I simply can’t recommend starting with it. Just know that it’s there, but treat it as a goal, an ideal, not a starting point.

Lua

Examples: LÖVE, many others as a scripting or modding language

Lua is pretty popular as a scripting language. Lots of games use it for modding purposes, with World of Warcraft by far the biggest. For that reason alone, it might be a good start. After all, making mods for games can be a rewarding way into game development. Plus, it’s a fairly simple language that doesn’t have many traps for the unwary. Although I’ll admit I don’t know Lua as well as most of the other languages on this list, I can say that it can’t be too bad if so many people are using it. I do get the sense that people don’t take it seriously enough for creating games, though, so take from that what you will.

C#

Examples: Unity3D, MonoGame

C# has to be considered a good candidate for a first language simply because it’s the primary language of Unity. Sure, you can write Unity games in JavaScript, but a few features require C#, and most of the documentation assumes that’s what you’ll be using.

As for the language itself, C# is good. Personally, I don’t think it’s all that pretty, but others might have different aesthetic sensibilities. It used to be that C# was essentially Microsoft-only, but Mono has made some pretty good strides in recent years, and some developments in 2015 (including the open-sourcing of .NET Core) show positive signs. Not only that, but my brother finds it interesting (again, thanks to Unity), so I almost have to recommend at least giving it a shot.

The downside of C# for game programming? Yeah, learning it means you get to use Unity. But that’s about all you get to use. Besides MonoGame and the defunct XNA, C# doesn’t see a lot of use in the game world. For the wider world of programming, though, it’s one of the standard languages, the Microsoft-approved alternative to…

Java

Examples: LibGDX, JMonkeyEngine, anything on Android

Java is the old standard for cross-platform coding. The Java Virtual Machine runs just about anywhere you can think of, even places it shouldn’t (like a web browser). It’s the language of Minecraft and your average Android app. And it was meant to be so simple, anybody could learn it. Sounds perfect, don’t it?

Indeed, Java is simple to learn. And it has some of the best tools in the world. But it also has some of the slowest, buggiest, most bloated and annoying tools you have ever had the misfortune of using. (These sets do overlap, by the way.) The language itself is, in my opinion, the very definition of boring. I don’t know why I feel that way, but I do. Maybe it’s because it’s so simple that a child could use it.

Obviously, if you’re working on Android, you’re going to use Java at some point. If you have an engine that runs on other platforms, you might not have to worry about it, since “native” code on Android only needs a thin Java wrapper that Unity and others provide for you. If you’re not targeting Android, Java might not be on your radar. I can’t blame you. Sure, it’s a good first language, but it’s not a good language. The me from five years ago would never believe I’m saying this, but I’d pick C# over Java for a beginning game developer.

Python

Examples: Pygame, RenPy

I’ll gladly admit that I think Python is one of the best beginner languages out there. It’s clean and simple, and it does a lot of things right. I’ll also gladly admit that I don’t think it can cut it for game programming. I can say this from experience, as I have tried to write a 2D game engine in Python. (It’s called Pyrge, and you can find the remnants of it on my GitHub profile, which I won’t link here out of embarrassment.) It’s hard, mostly because the tools available aren’t good enough. Python is a programmer’s language, and Pygame is a wonderful library, but there’s not enough there for serious game development.

There’s always a “but”. For the very specific field of “visual novels”, Python does work. RenPy is a nice little tool for that genre, and it’s been used for quite a few successful games. They’re mostly of the…adult variety, but who’s counting? If that’s what you want to make, then Python might be the language for you, just because of RenPy. Otherwise, as much as I love it, I can’t really recommend it. It’s a great language to learn the art of programming, but games have different requirements, and those are better met by other options.

Engine-Specific Scripting

Examples: GameMaker, Godot Engine, Torque, Inform 7

Some engine developers make their own languages. The reasons why are as varied as the engines themselves, but they aren’t all that important. What is important is that these engine-specific languages are often the only way to interact with those environments. That can be good and bad. The bad, obviously, is that what you learn in GML or GDScript or TorqueScript doesn’t carry over to other languages. Sometimes, that’s a fair trade, as the custom language can better interact with the guts of the engine, giving a performance boost or just a better match to the engine’s quirks. (The counter to this is that some engines use custom scripting languages to lock you into their product.)

I can’t evaluate each and every engine-specific programming language. Some of them are good, some are bad, and almost all of them are based on some other language. Godot’s GDScript, for example, is based on Python, while TorqueScript is very much a derivative of JavaScript. Also, I can’t recommend any of these languages. The engines, on the other hand, all have their advantages and disadvantages. I already discussed GameMaker above, and I think Godot has a lot of promise (I’m using it right now), but I wouldn’t say you should use it because of its scripting language. Instead, learn the scripting language if you like the engine.

The Field

There are plenty of other options that I didn’t list here. Whether because I’m not that familiar with the language, because it doesn’t see much use in game development, or because it doesn’t really work as a first language, it wasn’t up there. So here are some of the “best of the rest” options, along with some of the places they’re used:

  • Swift (SpriteKit) and Objective-C (iOS): I don’t have a Mac, which is a requirement for developing iOS apps, and Swift is really only useful for that purpose. Objective-C actually does work for cross-platform programming, but I’m not aware of any engines that use it, except those that are Apple-specific.

  • Haxe (HaxeFlixel): Flash is dying as a platform, and Haxe (through OpenFL) is its spiritual successor. HaxeFlixel is a 2D engine that I’ve really tried to like. It’s not easy to get into, though. The language itself isn’t that bad, but it may be more useful for porting old Flash stuff than making new games.

  • Ruby (RPG Maker VX Ace): Ruby is one of those things I have an irrational hatred for, like broccoli and reality shows. (My hatred of cats, on the other hand, is entirely rational.) Still, I can’t deny that it’s a useful language for a lot of people. And it’s the scripting language for RPG Maker, when you have to delve into that engine’s inner workings. Really, if you’re not using RPG Maker, I don’t see any reason to bother with Ruby, but you might see things differently.

  • JavaScript variants (Phaser): Some people (and corporations), fed up with JavaScript’s limitations, decided to improve it. But they all went in their own directions, resulting in a bewildering array of languages: CoffeeScript, TypeScript, LiveScript, Dart, and Coco, to name a few. For a game developer, the only one of any direct use is TypeScript, because Phaser supports it as a secondary language. They all compile to JS, though, so you can choose the flavor you like.

If there’s anything I missed, let me know. If you disagree with my opinions (and you probably do), tell me why. Any other suggestion, criticism, or whatever can go in the comments, too. The most important thing is to find something you like. I mean, why let somebody else make your decisions for you?