The problem with emoji

Emoji are everywhere these days. Those little icons like 📱 and 😁 show up on our phones, in our browsers, even on TV. In a way, they’re great. They give us a concise way to express some fairly deep concepts. Emotions are hard to sum up in words. “I’m crying tears of joy” is so much longer than 😂, especially if you’re limited to 140 characters of text.

From the programmer’s point of view, however, emoji can rightfully be considered a pox on our house. This is for a few reasons, so let’s look at each of them in turn. In general, these are in order from the most important and problematic to the least.

  1. Emoji are Unicode characters. Yes, you can treat them as text if you’re using them, but we programmers have to make a special effort to properly support Unicode. Sure, some languages say they do it automatically, but deeper investigation shows the hollowness of such statements. Plain ASCII doesn’t even have room for all the accented letters used by the Latin alphabet, so we need Unicode, but that doesn’t mean it’s easy to work with.

  2. Emoji are on a higher plane. The Unicode character set is divided into planes. The first 65,536 code points are the Basic Multilingual Plane (BMP), running from 0x0000 to 0xFFFF. Each further plane is considered supplemental, and many emoji fall in the second plane, with code points around 0x1F000. At first glance, the only problem seems to be an additional byte required to represent each emoji, but…

  2. UCS-2 sucks. UCS-2 is the fixed-width predecessor to UTF-16. It’s obsolete precisely because it can’t handle higher planes, but we still haven’t rid ourselves of it. JavaScript, among others, essentially uses UCS-2 strings, and this is a very bad thing for emoji. They have to be encoded as a surrogate pair, using two otherwise-invalid code points in the BMP. It breaks finding the length of a string. It breaks string indexing (see the sketch after this list). It even breaks simple parsing, because…

  3. Regular expressions can’t handle emoji. At least in present-day JavaScript, they can’t. And that’s the most used language on the web. It’s the front-end language of the here and now. But the JS regex works in UCS-2, which means it doesn’t understand higher-plane characters. (This is getting fixed; ES6 adds a u flag for Unicode-aware regexes, as the sketch after this list shows, and there are libraries out there to help mitigate the problem, but we’re still not to the point where we can count on full support.)

  5. Emoji are hard to type. This applies mostly to desktops. Yeah, people still use those, myself included. For us, typing emoji is a complicated process. Worse, it doesn’t work everywhere. I’m on Linux, and my graphical applications are split between those using GTK+ and those using Qt. The GTK+ ones allow me to type any Unicode character by pressing Ctrl+Shift+U and then the hexadecimal code point. For example, 😂 has code point 0x1F602, so I typed Ctrl+Shift+U, then 1f602, then a space to actually insert the character. Qt-based apps, on the other hand, don’t let me do this; in an impressive display of finger-pointing, Qt, KDE, and X all put the responsibility for Unicode handling on each other.
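
To make the breakage concrete, here’s a minimal sketch of items 2 through 4 in practice. It’s plain JavaScript (and valid TypeScript), and the behavior is standard ES6, so it should run as-is:

    // "😂" is U+1F602, which lives above the BMP (item 2), so a JS string stores
    // it as a surrogate pair: two 16-bit code units, 0xD83D and 0xDE02 (item 3).
    const laugh = "😂";

    console.log(laugh.length);         // 2, even though it's "one character"
    console.log(laugh.charCodeAt(0));  // 55357 (0xD83D, the high surrogate)
    console.log(laugh.codePointAt(0)); // 128514 (0x1F602; ES6 finally gets it right)
    console.log(laugh[0] === laugh);   // false; indexing splits the pair

    // Item 4: without the ES6 u flag, "." matches a single code unit,
    // so the regex only ever sees half of the emoji.
    console.log(/^.$/.test(laugh));    // false
    console.log(/^.$/u.test(laugh));   // true

    // The ES6 string iterator walks code points, not code units.
    console.log([...laugh].length);    // 1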

So, yeah, emoji are a great invention for communication. But, speaking as a programmer, I can’t stand working with them. Maybe that’ll change one day. We’ll have to wait and see.

Thoughts on Vulkan

As I write this (February 17), we’re two days removed from the initial release of the Vulkan API. A lot has been written across the Internet about what this means for games, gamers, and game developers, so I thought I’d add my two cents.

I’ve been watching the progress of Vulkan with interest as both user and programmer, and on a “minority” platform (Linux). For both reasons, Vulkan should be making me ecstatic, but it really isn’t. I’m not trying to be the wet blanket here, but everything I see about Vulkan is written in such a gushing tone that I feel the need to provide a counterweight.

What is it?

First off, the rundown. Vulkan is basically the “next generation” of OpenGL. OpenGL, of course, is the 3D technology that powers everything that isn’t Windows, as well as quite a few games on Windows. Vulkan is intended to be a lower-level—and thus faster—API that achieves its speed by being closer to the metal. It’s supposed to be a better fit for the actual hardware of a GPU, rather than the higher-level state machine of OpenGL. Oh, and it’s cross-platform, unlike DirectX.

As of 2/17, there’s only one game out there that can use Vulkan: The Talos Principle. Drivers are similarly scarce. AMD’s are alpha-quality on Windows and nonexistent on Linux; nVidia has only an old beta for Linux but much better Windows support; and Intel is, well, Intel. Hurray for competition.

Why it’s good

The general rule in programming is that the higher in the “stack” you go, the slower you get. High-level languages like JavaScript, Python, and Ruby are all dreadfully slow when compared to the lower-level C and C++. And assembly is the fastest you can get, because it’s the closest thing to the machine’s native language. For GPUs, the same thing is true. OpenGL is fairly high up in the stack, and it shows.

Vulkan was made to sit at that lower level. It has better support for multithreaded, multicore programming. Shaders are faster. Everything about it was made to speed things up while remaining stable and supported. In essence, the purpose is to put everyone on a level playing field everywhere except the GPU. To make the OS irrelevant to graphics.

That’s a good thing. I say that not only because I use Linux, not only because I’d like more games for it. I say that as someone who loves the idea of computers, both in general and as gaming machines. Anything that makes things better while keeping the PC open is a win for everybody. DirectX might be the best API ever invented (I’ve heard people say it is), but if you’re using something other than Windows or an Xbox, it might as well not exist. OpenGL works just about everywhere there’s graphics. If Vulkan can do the same, then there’s no question that it’s good.

Why it’s not

But it won’t. That’s the problem. Vulkan ultimately derives from AMD’s Mantle API, which was mostly made for the Xbox One and PS4, to give them a much-needed power boost. The PC wasn’t exactly an afterthought, but it doesn’t seem like it was ever going to be the main focus of Mantle. Now, that console-oriented nature probably got washed away in the transition to Vulkan, but it causes a ripple effect, meaning that…

Vulkan doesn’t work everywhere.

Yeah, I said it. Currently, it requires some serious hardware support, and it’s mostly limited to the latest couple of generations of GPU. Intel only makes integrated graphics, and some of those can use it, but you know how that goes. For the GTX line, you need at least a 6-series, and then only the best of them. AMD has the widest support, as you’d expect, but it’s full of holes. On Linux, the R9 290 won’t be able to use Vulkan, because it uses the wrong driver (radeonsi instead of amdgpu).

And that brings me to my problem. For AMD’s APU integrated graphics, you have to have at least the Kaveri generation, because that’s when they started putting in the GCN stuff that Vulkan requires. Kaveri came out in early 2014, a mere two years ago. It was supposed to release in late 2013, but delays crept in. Since I built my current PC for Christmas 2013, I’m out of luck, unless I want to buy a new video card.

But there’s no good choice for that right now, not on Linux. Do I get something from nVidia, where I’m stuck with proprietary drivers, and I can’t even upgrade the kernel without worrying that they’ll crash? Or do I buy AMD, the same company that got me into this mess in the first place? Sure, they have better open-source drivers, but who’s to say that they’ll actually work? You can ask the 290 owners what they think about that one.

The churn

So, for now, I’m on the outside looking in when it comes to Vulkan. But I can see the benefit in that. I get to watch while all the early adopters work out the kinks.

Vulkan isn’t going to take over the world in a night, or a month, or even a year. There are just too many people out there with computers that can’t use it. It’ll take some time before that critical mass is reached, when there are enough Vulkan-capable PCs out there to make it worthwhile to dump OpenGL. (DirectX isn’t really a factor here. It’s tied to Windows, and to a specific Windows version. I don’t care if DX12 is the Second Coming, it’s not going to make me get Windows 10.)

Game engines can start supporting Vulkan right now. Quite a few of them are, like Valve’s Source Engine. As an alternate code path, as an optimization used if possible, it’s fine. As a replacement for the OpenGL rendering system of an engine? Not a chance. Not yet.

Give it some time. Give the Khronos Group a couple of versions to fix the inevitable bugs. Give the world a few years to cycle through their current—underpowered or unsupported—computers or GPUs. When we get to that point, you might be able to see Vulkan reach its full potential. 2020 is a nice year, I think. It’s four years into the future, so that’s a couple of generations of graphics cards, about one upgrade cycle for most people, and time for a new set of consoles. If Vulkan hasn’t taken off by then, it probably never will. But it will, eventually.

Thoughts on Haxe

Haxe is one of those languages that I’ve followed for a long time. Not only that, but it’s the rare programming language that I actually like. There aren’t too many on that list: C++, Scala, Haxe, Python 2 (but not 3!), and…that’s just about it.

(As much as I write about JavaScript, I only tolerate it because of its popularity and general usefulness. I don’t like Java for a number of reasons—I’ll do a “languages I hate” post one of these days—but it’s the only language I’ve written professionally. I like the idea of C# and TypeScript, but they both have the problem of being Microsoft-controlled. And so on.)

About the language

Anyway, back to Haxe, because I genuinely feel that it’s a good programming language. First of all, it’s strongly-typed, and you know my opinion on that. But it’s also not so strict with typing that you can’t get things done. Haxe also has type inference, and that really, really helps you work with a strongly-typed language. Save time while keeping type safety? Why not?

In essence, the Haxe language itself looks like a very fancy JavaScript. It’s got all the bells and whistles you expect from a modern language: classes, generics, object literals, array comprehensions, iterators, and so on. You know, the usual. Just like everybody else.

But there are also a few interesting features that aren’t quite as common. Pattern matching, for instance, which is one of my favorite things from “functional” languages. Haxe also has the idea of “static extensions”, something like C#’s extension methods, which allow you to add extra functionality to classes. Really, most of the bullet points in the Haxe manual’s “Language Features” section are pretty nifty, and most of them are in some way connected to the type system. Of all the languages I’ve ever used, only Scala comes close to Haxe in helping me understand the power and necessity of types.

The platform

But wait, there’s more. Haxe is cross-platform, in its own special way. Strictly speaking, there’s no native output. Instead, you have a choice of compilation targets, and some of these can then be turned into native binaries. Most of these let you “transpile” Haxe code to another language: JavaScript, PHP, C++, C#, Java, and Python. There’s also the Neko VM, made by Haxe’s creator but not really used much, and you can even have the Haxe compiler spit out ActionScript code or a Flash SWF. (Why you would want to is a question I can’t answer.)

The standard library provides most of what you need for app development, and haxelib is the Haxe-specific answer to NPM, CPAN, et al. A few of the available libraries are very good, like OpenFL (basically a reimplementation of the Flash API). Of course, depending on your target platform, you might also be able to use libraries from NPM, the JVM, or .NET directly. It’s not as easy as it could be—you need an extern interface class, a bit like TypeScript—but it’s there, and plenty of major libraries already have externs written for you.
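
To show what that TypeScript comparison means, here’s roughly what the TypeScript side of the analogy looks like: an ambient declaration that describes a library implemented elsewhere. This isn’t Haxe syntax, just the analogous TypeScript idea, and the module and function names are invented for illustration:

    // A hypothetical extern-style declaration, TypeScript flavor. Only the type
    // signatures live here; the implementation is the external library itself.
    declare module "some-physics-lib" {
      export interface Vector2 {
        x: number;
        y: number;
      }

      // Declared, not defined: the compiler now knows how to type-check calls.
      export function applyGravity(position: Vector2, dt: number): Vector2;
    }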

The verdict

Honestly, I do like Haxe. It has its warts, but it’s a solid language that takes an idea (types as the central focus) and runs with it. And it draws in features from languages like ML and Haskell that are inscrutable to us mere mortals, allowing people some of the power of those languages without the pain that comes in trying to write something usable in a functional style. Even if you only use it as a “better” JavaScript, though, it’s worth a look, especially if you’re a game developer. The Haxe world is chock full of code-based 2D game engines and libraries: HaxePunk, HaxeFlixel, and Kha are just a few.

I won’t say that Haxe is the language to use. There’s no such thing. But it’s far better than a lot of the alternatives for cross-platform development. I like it, and that’s saying a lot.

Thoughts on types

Last week, I talked about an up-and-coming HTML5 game engine. One of the defining features of that engine was that it uses TypeScript, not regular JavaScript, for its coding. TypeScript has its problems (it’s made by Microsoft, for one), but it cuts to the heart of an argument that has raged for decades in programming circles: strong versus weak typing.

First off, here’s a quick refresher. In most programming languages, values have types. These can be simple (an integer, a string of text) or complex (a class with a deep inheritance hierarchy and 50 or so methods), but they’re part of the value’s identity. Variables can have types, too, but different languages handle that in different ways. Some require you to set a variable’s type when it is first defined, and they strictly enforce that type. Others are more lenient: if x holds the value 123, it’s an integer; if you set it to "foo", then it becomes a string. And some languages allow you to mix types in an expression, while others will throw errors the minute you even dare add a string to a number.
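
In TypeScript terms (which can play both roles here), that refresher looks something like this sketch:

    // A value's type is part of its identity...
    const answer = 42;             // a number
    const greeting = "hello";      // a string

    // ...and a strictly enforced variable type stays put:
    let count: number = 123;
    // count = "foo";              // compile-time error: string is not assignable to number

    // Mixing types in an expression: some languages convert silently, others refuse.
    console.log(greeting + answer);    // "hello42" -- JavaScript-style silent conversion
    // console.log(greeting * answer); // rejected by TypeScript; NaN in plain JavaScript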

A position of strength

I’m of two minds on types. On the one hand, I do think that a “strong” type system, where everything knows what it is and conversions must be explicit, is good for the specific kind of programming where data corruption is an unforgivable sin. The Ada language, one of the most notorious for strict typing, was made that way for a reason: it was intended for use in situations where errors are literally life-threatening.

I also like the idea of a strongly-typed language because it can “write itself” in a sense. That’s one of the things Haskell supporters are always saying, and it’s very reminiscent of the way I solved a lot of test questions in physics class. For example, if you know your answer needs to be a force in newtons (kg m/s²), and you’re given a mass (kg), a velocity (m/s), and a time (s), then it’s pretty obvious what you need to do. The same principle can apply when you’ve got code that returns a type constructed from a number of seemingly unrelated ones: figure out the chain that takes you from A to B. You can’t really do that in, say, JavaScript, because everything can return anything.
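
Here’s a sketch of that “follow the types” feeling, using the physics example above. The unit-carrying types and function names are hypothetical, purely for illustration:

    // Hypothetical wrapper types standing in for physical units.
    interface Mass { kg: number }          // kg
    interface Velocity { mPerS: number }   // m/s
    interface Time { s: number }           // s
    interface Force { newtons: number }    // kg m/s²

    // Given only these signatures, the chain from (Mass, Velocity, Time) to Force
    // almost writes itself: momentum first, then divide by time.
    function momentum(m: Mass, v: Velocity): { kgMPerS: number } {
      return { kgMPerS: m.kg * v.mPerS };
    }

    function force(p: { kgMPerS: number }, t: Time): Force {
      return { newtons: p.kgMPerS / t.s };
    }

    const answer = force(momentum({ kg: 2 }, { mPerS: 3 }), { s: 4 });
    console.log(answer.newtons);   // 1.5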

And strong types are an extra form of documentation, something sorely lacking in just about every bit of code out there. The types give you an idea of what you’re dealing with. If they’re used right, they can even guide you into using an API properly. Of course, that puts more work on the library developer, which means it’s less likely to actually get done, but it’s a nice thought.

The weak shall inherit

In a “weak” type system, objects can still have types, but variables don’t. That’s the case in JavaScript, where var x (or let x, if you’re lucky enough to get to use ES6) is all you have to go on. Is it a number? A string? A function? The answer: none of the above. It’s a variable. Isn’t that enough?

I can certainly see where it would be. For pure, unadulterated hacking, give me weak typing. Coding goes so much faster when you don’t have to constantly ask yourself what something should be. Scripting languages tend to be weakly-typed, and that’s probably why. When you know what you’re working with, and you don’t have to worry as much about error recovery, maintenance, or things like that, types only get in the way.

Of course, once I do need to think about changing things, a weakly-typed language starts to become more of a hindrance. Look at any large project in JavaScript or Ruby. They’re all a tangled mess of code held together by layers of validation and test suites sometimes bigger than the project itself. It’s…ugly. Worse, it creates a kind of Stockholm Syndrome where the people developing that mess think it’s just fine.

I’m not saying that testing (or even TDD) is a bad thing, mind you. It’s not. But so much of that testing is unnecessary. Guys, we’ve got a tool that can automate a lot of those tests for you. It’s called a compiler.

So, yeah, I like the idea of TypeScript…in theory. As programmers look to use JavaScript in “bigger” settings, they can’t miss the fact that it’s woefully inadequate for them. It was never meant to be anything more than a simple scripting language, and it shows. Modernizing efforts like ES5 and ES6 help, but they don’t—can’t—get rid of JavaScript’s nature as a weakly-typed free-for-all. (How bad is it? Implicit conversions have become accepted idioms. Want to turn n into a number? The “right” way is +n! Making a string is as easy as n+"", and booleans are just !!n.)
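
Those idioms really do work, which is exactly the problem. Here’s a quick sketch of how they behave (the any annotations are just to keep TypeScript from objecting to the deliberately sloppy code):

    // The "accepted idioms" in action.
    const n: any = "42";     // pretend this came from user input

    console.log(+n);         // 42     -- unary plus coerces to number
    console.log(n + "");     // "42"   -- adding an empty string coerces to string
    console.log(!!n);        // true   -- double negation coerces to boolean

    // The same tricks on less friendly input show why this is a foot-gun.
    const empty: any = "";
    const word: any = "abc";
    console.log(+empty);     // 0      -- the empty string is "zero"
    console.log(+word);      // NaN
    console.log(!!"false");  // true   -- any non-empty string is truthy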

That’s not to say strong typing is the way to go, either. Take that too far, and you risk the opposite problem: losing yourself in conversions. A good language, in my opinion, needs a way to enforce types, but it also needs a way to not enforce them. Sometimes, you really do want an “anything”. Java’s Object doesn’t quite work for that, nor does the C answer of void *. C++ is getting any soon, or so they say; that’ll be a step up. (Note: auto in C++ is type inference. That’s a different question, but I personally think it’s an absolute must for a strongly-typed language.) But those should be used only when there’s no other option.
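
TypeScript happens to illustrate both halves of that: type inference keeps the annotations out of the way, and any is the explicit “anything” you reach for only when nothing else fits. A minimal sketch:

    // Type inference: no annotation needed, but the variable still has a firm type.
    let total = 2 + 3;       // inferred as number
    // total = "five";       // compile-time error: string is not assignable to number

    // The escape hatch for the rare genuine "anything":
    let anything: any = 42;
    anything = "now a string";        // fine: any opts out of checking entirely
    anything = { even: "an object" }; // also fine
    console.log(typeof anything);     // "object"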

There’s no right answer. This is one of those debates that will last forever, and all I can do is throw in my two cents. But I like to think I have an informed opinion, and that was it. When I’m hacking up something for myself, something that probably won’t be used again once I’m done, I don’t want to be bothered with types; they take too much time. Once I start coding “for real”, I need to start thinking about how that code is going to be used. Then, strong typing saves time, because it means the compiler takes care of what would otherwise be a mound of boilerplate test cases, leaving me more time to work on the core problem.

Maybe it doesn’t work for you, but I hope I’ve at least given you a reason to think about it.

On writing and dialects

I’ve been seriously attempting to write fiction for over five years now, and I’m still learning new things about the craft all the time. One of those things concerns my own style of writing, and it’s the main reason I object to one of the fundamental maxims of creative writing.

“Writing itself isn’t the hard part,” the saying goes. To some extent, that’s true. Coming up with a believable, interesting story with believable, interesting characters is hard. Planning, plotting, characterizing, worldbuilding, all of that is supremely difficult, to the point where the mechanics of writing get lost in the noise. Especially nowadays, when everything is done on a computer, and most “writing” is actually typing on a keyboard, the physical act of writing is a small fraction of the effort that goes into creating a story.

Move one level up, to the words you’re putting on-screen, and things don’t really change all that much. You’re still in the rote mechanics of writing, but now at the level of grammar and syntax. As long as you can touch-type (and you’ll eventually learn how, if you keep at it long enough), writing—typing, if you prefer—the words is almost reflexive. As long as you speak English, putting the right words together comes naturally. Except that it doesn’t, and therein lies my problem.

Southern Man

The reason is simple: when I write a story in “standard” English (for me, that would be General American), I’m not speaking my native language. I’m American, and I’m effectively monolingual, despite a couple of years of Spanish classes in high school and fifteen more of amateur linguistic study. It’s not that I can’t speak or write English, it’s that I’m not used to speaking the standard.

As we say around here, I’m Southern-born and Southern-bred. I’m a child of the South. That’s where I was born, it’s where I live, and it’s probably where I’ll die. And even if you don’t know the first thing about American regional politics, you likely know about the Southern dialect.

It’s not different enough from the rest of the country to really be considered its own language. I can still understand just about any other American speaker, as well as most other English dialects (although those from northern England and parts of Australia sometimes baffle me), and they can likewise understand the vast majority of what I’m saying. But it is different, and it can be startling if you don’t know what to expect. Just like I sometimes struggle to figure out some of the words Jeremy Clarkson is saying, I know that plenty of people would need subtitles for Hatfields & McCoys. (Technically, that’s Appalachian, not Southern, but I’ll get to that in a minute.)

In writing, it doesn’t seem quite so bad, since the pronunciation differences, like the characteristic Southern drawl, don’t show up. But phonology isn’t the only part of a dialect. Words matter, despite what the writing self-help guys say. Y’all, for example, is the quintessential Southern word, yet I don’t think I’ve used it once in any of the stories I’ve written since the start of the decade. Why? Because that would immediately mark the whole work as “dialectal” or, worse, “substandard”. And I don’t think I want that.

Talking the talk

But sticking to the standard—whatever that is for English—means that I have to write at a level I’m not exactly comfortable with. It gets even worse because “Southern” refers to not one single dialect, but a group of them. Where I grew up, which isn’t all that far from where I’m living now, the local speech is closer to Appalachian, the talk of hillbillies living in the mountains, than the “General Southern” of the Deep South area that stretches from Charleston to Jackson. Appalachian has its own speech patterns, its own curious vocabulary, and a few peculiar grammatical constructions that make it a dialect of its own. (And that has slight regional differences, but those need not concern us here.)

So I’m not “going up a level” when I’m writing in standard American English. I’m going up two. I have to raise my standards just to get to what is widely considered the least standard of all the American dialects. Then I need to go from there up to the true literary language. It’s a kind of diglossia, if you think about it. I speak the homespun mix of Southern and Appalachian at home, among friends and family; it’s how I was raised to talk. For talking to others in the region, I use a more generic Southern, dropping the Appalachianisms while keeping the drawl and the y’all. Again, I learned that by osmosis: listening to people, watching the local news, etc.

Neither my home “idiolect” nor the Southern dialect is written, except in the written emulation of speech. They don’t need to be. That’s not what they’re for. But standard English is different. I don’t hear it spoken around me casually, only formally or in the media. I learned it in school, and I had to learn how it differs from the English I’m used to.

The crux of the problem, then, is this: where is the line between dialect and language? I’ve found that, when you’re writing, it’s a lot closer than you might think. I’m constantly slowed by the internal translation from Southern to General American, and it is not a perfect match. It’s the little things that trip me up, like the past perfect (in my spoken dialect, had went is an acceptable substitution for had gone), -ward versus -wards (Southerners that I’ve heard prefer towards, but most Americans use toward), and serial verbs in the future tense (try to or try and? go get or go and get?). At times, it really is like I’m writing in a different language.

(That’s not even including the Americanisms I find illogical. Like British writers, I consistently keep punctuation out of quotation marks, unless it’s part of the quote. I’m told that this is actually common practice among programmers. That makes sense, because programming languages won’t let you do it the “wrong” way. HTML, unfortunately, explicitly supports “Americanized” closing tags.)

Plain speech

Of course, the creative part of creative writing is always going to be the most important. There’s no denying that. I tend to write in a seat-of-the-pants style, where I don’t plan much in advance, instead letting things happen naturally. (I’ll talk about that in a future post.) But that very style means that I’m often stuck, as I have to stop typing to think of a name or a part of a character’s back-story. The dialectal difference is just one more thing to worry about.

If I were a better writer, I might be able to turn this liability into an advantage. Maybe there’s a market out there for books written in a Southern style, full of colloquialisms and colorful figures of speech. I don’t know, but I doubt I could be the one to pull it off. For now, I’ll stick with the standard, as hard as it is. It’s not art if you don’t suffer, right?

Looking forward to 2016

So, it’s a new year. The slate has been cleaned. We can put 2015 behind us, and look ahead to 2016. From a programming point of view, what does this new year hold? Let’s take a look.

Programming languages

This year should be an exciting one if you like programming languages for their own sake.

  • JavaScript: Most everybody is using a browser capable of most of ECMAScript 5 (ES5). By the end of the year, expect both parts of that to increase. More people are going to have modern capabilities, and the browsers themselves will cover more of the standard. Speaking of standards, I’d look to ES6 support becoming more widespread in browsers, even Microsoft’s.

  • C++: The next revision of the C++ standard, C++17, is still a ways away. (You’re crazy if you’re betting on it actually coming out in 2017. Remember, C++11 was codenamed C++0x for years.) However, we should start seeing parts of it becoming fixed. Ranges are a big thing right now, concepts are coming (finally!), and it looks like C++ might get some sort of compile-time reflection. Things are looking up, but we’re not there yet.

  • Perl: I’m serious. Perl 6 is out. (I think that leaves Star Citizen as the final harbinger of Armageddon.) In the works for a decade and a half, with a set of operators best described by a periodic table, and seemingly designed to be impossible to implement, who can’t love Perl? At the very least, it’ll be fun to write in, and the new, incompatible version will spur a new generation of Perl golfers and other code artists. But I think it might turn out a bit like Python 3. Perl 5 has history, and that’s not going away.

  • The rest: I can easily see Rust gaining a bigger cult following over the next year, especially at places like GitHub. PHP has its version 7, but the less said about PHP, the better. C# and Java are going to be simmering for another twelve months, at least, and I don’t see much new news coming out of either of them. Ruby will continue its slow slide into irrelevance, probably dragging Python with it. (I wouldn’t mind them taking Haskell along, but I digress.) Newcomers will arise, and I’d say we’re in for another round of “visual coding”. And hey, maybe this will finally be the year for Scala.

Hardware and the like

The big thing on everyone’s lips right now is Vulkan, the official successor to OpenGL. It was supposed to be out in time for Christmas, but it got pushed back. (Funnily enough, the same thing happened two years ago with Kaveri, AMD’s first processor line that could support Vulkan. But I’m not bitter.) Personally, I don’t see much out of Vulkan this year. It’ll be released, and we’ll see a few early, buggy drivers and experimental alphas of games, most of which will be glorified tech demos. I’d give it till 2018 before I start worrying about replacing OpenGL.

Tiny computers are going to get bigger this year, I think. I mean that in a figurative way, of course. The Raspberry Pi 2 is the big name in this field, but you’ve also got the BeagleBone and things like that, not to mention the good old Arduino. However you look at it, it’s a mature area. We’ve moved beyond revolution; now it’s time for evolution. These computers will get more powerful, easier to use, and more ubiquitous. Next Christmas, I can easily see a stick computer being like this year’s quadcopters.

On the other hand, as much as I hate to say it, I’m not holding out a lot of hope for 3-D printing. We’ve been hearing about it for half a decade, and there has definitely been incremental progress. But 2016, in my opinion, is not going to be the year we see inexpensive 3-D printers flying off the shelves. They’ll stay in the background. (The whole “Internet of Things”, however, will only grow, but it’s not intended to be programmable, so it doesn’t help us.)

Libraries, engines, etc.

Look for Unity and Unreal to continue their competition, with a bunch of smaller guys chomping at the bit. Godot, assuming they don’t screw themselves over by switching to Vulkan prematurely, might get a boost as the indie engine of choice. And JavaScript engines have near-infinite upside, especially for mobile coding. Game development in 2016 will be like it was in 2015, but better in every way.

I do think the Node.js fad is dying down, and not a moment too soon. That doesn’t mean Node is done, only that I see people evaluating it for what it is, rather than what it’s advertised as. It’s the same thing as Ruby a few years ago, back in the early days of Rails. Or JavaScript and Angular a couple of years ago, for that matter. Still, Node is a solid platform for a lot of things. It’s not going away, but this is the year that it fades from the spotlight.

The same can be said for the current crop of JS web frameworks. There’s no chance of the whole Internet getting behind a single framework, nor two or even ten. But this is an area where the churn is so great that what’s popular next December hasn’t even been written yet. I can tell you that it’ll be slower, more bloated, and less comprehensible than what’s out there, though.

In the end

For programming, 2016 has a lot to look forward to, and I’ve barely scratched the surface here. (I haven’t even mentioned learning to code, which will get even bigger this coming year.) Whether native or browser, desktop or mobile, it’s a good time to code.

Programming paradigm primer

“Paradigm”, as a word, has a bad reputation. It’s one of those buzzwords that corporate people like to throw out to make themselves sound smart. (They usually fail.) But it has a real meaning, too. Sometimes, “paradigm” is exactly the word you want. Like when you’re talking about programming languages. The alliteration, of course, is just an added bonus.

Since somewhere around the 1960s, there’s been more than one way to write programs, more than one way to view the concepts of a complex piece of software. Some of these have revolutionized the art of programming, while others mostly languish in obscurity. Today, we have about half a dozen of these paradigms with significant followings. They each have their ups and downs, and each has a specialty where it truly shines. So let’s take a look at them.

Now, it’s entirely possible for a programming language to use or encourage only a single paradigm, but it’s far more common for languages to support multiple ways of writing programs. Thanks to one Mr. Turing, we know that essentially all languages are, from a mathematical standpoint, equivalent, so you can create, say, C libraries that use functional programming. But I’m talking about direct support. C doesn’t have native objects (struct doesn’t count), for example, so it’s hard to call it an object-oriented language.

Where it all began

Imperative programming is, at its heart, nothing more than writing out the steps a program should take. Really, that’s all there is to it. They’re executed one after the other, with occasional branching or looping thrown in for added control. Assembly language, obviously, is the original imperative language. It’s a direct translation of the computer’s instruction set and the order in which those instructions are executed. (Out-of-order execution changes the game a bit, but not too much.)

The idea of functions or subroutines doesn’t change the imperative nature of such a program, but it does create the subset of structured or procedural programming languages, which are explicitly designed for the division of code into self-contained blocks that can be reused.
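
A tiny sketch of the difference (TypeScript here, but the shape is the same in any imperative language):

    // Imperative: spell out the steps, one after another.
    const prices = [3, 5, 8, 13];
    let total = 0;
    for (let i = 0; i < prices.length; i++) {
      total += prices[i];            // do this, then this, then this...
    }
    console.log(total);              // 29

    // Procedural: the same steps, factored into a reusable, self-contained subroutine.
    function sum(values: number[]): number {
      let result = 0;
      for (const v of values) {
        result += v;
      }
      return result;
    }
    console.log(sum(prices));        // 29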

The list of imperative languages includes all the old standbys: C, Fortran, Pascal, etc. Notice how all these are really old? Well, there’s a reason for that. Structured programming dates back decades, and all the important ideas were hashed out long before most of us were born. That’s not to say that we’ve perfected imperative programming. There’s always room for improvement, but we’re far into the realm of diminishing returns.

Today, imperative programming is looked down upon by many. It’s seen as too simple, too dumb. And that’s true, but it’s far from useless. Shell scripts are mostly imperative, and they’re the glue that holds any operating system together. Plenty of server-side code gets by just fine, too. And then there’s all that “legacy” code out there, some of it still in COBOL…

The imperative style has one significant advantage: its simplicity. It’s easy to trace the execution of an imperative program, and such programs are usually fast, because they line up well with how the computer actually executes instructions. (That was C’s original selling point: portable assembly language.) On the other hand, that simplicity is also its biggest weakness. You need to do a lot more work in an imperative language, because they don’t exactly have a lot of features.

Objection!

In the mid-90s, object-oriented programming (OOP) got big. And I do mean big. It was all the rage. Books were written, new languages created, and every coding task was reimagined in terms of objects. Okay, but what does that even mean?

OOP actually dates back much further than you might think, but it only really began to get popular with C++. Then, with Java, it exploded, mainly from marketing and the dot-com bubble. The idea that got so hot was that of objects. Makes sense, huh? It’s right there in the name.

Objects, reduced to their most basic, are data structures that are deeply entwined with code. Each object is its own type, no different from integers or strings, but they can have customized behavior. And two things you can do with them define classic OOP. Inheritance is one: creating a new type of object (class) that mimics an existing one, but with added functionality. Polymorphism is the other: functions that work differently depending on what type of object they’re acting on. Together, inheritance and polymorphism work to relieve a huge burden on coders, by making it easier to work with different types in the same way.
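
A stock sketch of both ideas (the shapes example is the classic illustration, not anything specific to one language):

    // Inheritance: Circle and Square are Shapes, with added behavior of their own.
    class Shape {
      area(): number {
        return 0;
      }
      describe(): string {
        return `a shape with area ${this.area()}`;
      }
    }

    class Circle extends Shape {
      constructor(private radius: number) { super(); }
      area(): number {
        return Math.PI * this.radius ** 2;
      }
    }

    class Square extends Shape {
      constructor(private side: number) { super(); }
      area(): number {
        return this.side ** 2;
      }
    }

    // Polymorphism: the same call does the right thing for each concrete type.
    const shapes: Shape[] = [new Circle(1), new Square(2)];
    for (const s of shapes) {
      console.log(s.describe());
    }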

That’s the gist of it, anyway. OOP, because of its position as the dominant style when so much new blood was entering the field, has a ton of information out there. Design patterns, best practices, you name it. And it worked its way into every programming language that existed 10-15 years ago. C++, Java, C#, and Objective-C are the most used of the “classic” OOP languages today, although every one of them offers other options (including imperative, if you need it). Most scripting-type languages have it bolted on somewhere, such as Python, Perl, and PHP. JavaScript is a bit special, in that it uses a different kind of object-oriented programming, based on prototypes rather than classes, but it’s no less OOP.

OOP, however, has a couple of big disadvantages. One, it can be confusing, especially if you use inheritance and polymorphism to their fullest. It’s not uncommon, even in the standard libraries of Java and C#, to have a class that inherits from another class, which inherits from another, and so on, 10 or more levels deep. And each subclass can add its own functions, which are passed on down the line. There’s a reason why Java and C# are widely regarded as having some of the most complete documentation of any programming language.

The other disadvantage is the cause of why OOP seems to be on the decline. It’s great for code reuse and modeling certain kinds of problems, but it’s a horrible fit for some tasks. Not everything can be boiled down to objects and methods.

What’s your function?

That leads us to the current hotness: functional programming, or FP. The functional fad started as a reaction to overuse of OOP, but (again) its roots go way back.

While OOP tries to reduce everything to objects, functional programming, shockingly enough, models the world as a bunch of functions. Now, “function” in this context doesn’t necessarily mean the same thing as in other types of programming. Usually, for FP, these are mathematical functions: they have one output for every input, no matter what else is happening. The ideal, called pure functional programming, is a program free of side effects, such that it is entirely deterministic. (The problem with that? “Side effects” includes such things as user input, random number generation, and other essentials.)
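
A small sketch of the distinction, and of the function-on-function style that follows from it:

    // Pure: same input, same output, nothing else touched.
    const double = (x: number): number => x * 2;

    // Impure: the result depends on (and changes) outside state.
    let counter = 0;
    function nextId(): number {
      counter += 1;                  // side effect
      return counter;
    }

    // FP leans on functions that take and return other functions,
    // and on transforming lists rather than looping over them.
    const compose = <A, B, C>(f: (b: B) => C, g: (a: A) => B) => (a: A): C => f(g(a));
    const addOne = (x: number): number => x + 1;
    const doubleThenAddOne = compose(addOne, double);

    console.log([1, 2, 3].map(doubleThenAddOne));   // [ 3, 5, 7 ]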

FP has had its biggest success with languages like Haskell, Scala, and—amazingly enough—JavaScript. But functional, er, functions have spread to C++ and C#, among others. (Python, interestingly, has rejected, or at least deprecated, some functional aspects.)

It’s easy to see why. FP’s biggest strength comes from its mathematical roots. Logically, it’s dead simple. You have functions, functions that act on other functions, functions that work with lists, and so on. All of the basic concepts come straight from math, and mistakes are easily found, because they stick out like a sore thumb.

So why hasn’t it caught on? Why isn’t everybody using functional programming? Well, most people are, just in languages that weren’t entirely designed for it. The core of FP is fairly language-agnostic. You can write functions without side effects in C, for example, it’s just that a lot of people don’t.

But FP isn’t everywhere, and that’s because it’s not really as simple as its proponents like to believe. Like OOP, not everything can be reduced to a network of functions. Anything that requires side effects means we have to break out of the functional world, and that tends to be messy. (Haskell’s method of doing this, the monad, has become legendary in the confusion it causes.) Also, FP code really, really needs a smart interpreter, because its mode of execution is so different from how a computer runs, and because it tends to work at a higher level of abstraction. But interpreters are universally slower than native, relegating most FP code to those higher levels, like the browser.

Your language here

Another programming paradigm that deserves special mention is generic programming. This one’s harder to explain, but it goes something like this: you write functions that accept a set of possible types, then let the compiler figure out what “real” type to use. Unlike OOP, the types don’t have to be related; anything that fits the bill will work.

Generic programming is the idea behind C++ templates and Java or C# generics. It’s also really only used in languages like that, though many languages have “duck-typing”, which works in a similar fashion. It’s certainly powerful; most of the C++ standard library uses templates in some fashion, and that percentage is only going up. But it’s complicated, and you can tie your brain in knots trying to figure out what’s going on. Plus, templates are well-known time sinks for compilers, and they can increase code size by some pretty big factors. Duck-typing, the “lite” form of generic programming, doesn’t have either problem, but it can be awfully slow, and it usually shows up in languages that are already slow, only compounding the problem.
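
A short sketch, with TypeScript generics standing in for templates and its structural typing standing in for duck typing:

    // Generic programming: write the function once, against a whole set of types,
    // and let the compiler figure out the "real" type at each call site.
    function firstOf<T>(items: T[]): T | undefined {
      return items[0];
    }

    const n = firstOf([1, 2, 3]);     // T is inferred as number
    const s = firstOf(["a", "b"]);    // T is inferred as string

    // The types don't have to be related by inheritance; anything that fits the
    // bill will do. Structural typing is essentially compile-time duck typing.
    interface Quacks {
      quack(): string;
    }

    function makeItQuack<T extends Quacks>(thing: T): string {
      return thing.quack();
    }

    // Works: right shape, no shared base class required.
    console.log(makeItQuack({ quack: () => "quack!" }));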

What do I learn?

There’s no one right way to code. If we’ve learned anything in the 50+ years the human race has been doing it, it’s that. From a computer science point of view, functional is the way to go right now. From a business standpoint, it’s OOP all the way, unless you’re looking at older code. Then you’ll be going procedural.

And then there are all those I didn’t mention: reactive, event-driven, actor model, and dozens more. Each has its own merits, its own supporters, and languages built around it.

My best advice is to learn whatever your preferred language offers first. Then, once you’re comfortable, move on, and never stop learning. Even if you’ll never use something like Eiffel in a serious context, it has explored an idea that could be useful in the language you do use. (In this case, contract programming.) The same could be said for Erlang, or F#, or Clojure, or whatever tickles your fancy. Just resist the temptation to become a zealot. Nobody likes them.

Now, some paradigms are harder than others, in my opinion. For someone who started with imperative programming, the functional mindset is hard to adjust to. Similarly, OOP isn’t easy if you’re used to Commodore BASIC, and even experienced JavaScript programmers are tripped up by prototypes. (I know this one first-hand.)

That’s why I think it’s good that so many languages are adopting a “multi-paradigm” approach. C++ really led the way in this, but now it’s popping up everywhere among the “lower” languages. If all paradigms (for some suitable value of “all”) are equal, then you can use whatever you want, whenever you want. Use FP for the internals, wrapped by an event-driven layer for I/O, calling OOP or imperative libraries when you need them. Some call it a kitchen-sink approach, but I see programmers as like chefs, and every chef needs a kitchen sink.

On learning to code

Coding is becoming a big thing right now, particularly as an educational tool. Some schools are promoting programming and computer science classes, even a full curriculum that lasts through the entirety of education. And then there are the commercial and political movements such as Code.org and the Hour of Code. It seems that everyone wants children to learn something about computers, beyond just how to use them.

On the other side of the debate are the detractors of the “learn to code” push, who argue that it’s a boondoggle at best. Not everybody can learn how to code, they argue, nor should they. We’re past the point where anyone who wants to use a computer must learn to program it, too.

Both camps have a point, and I can see some merit in either side of the debate. I was one of a lucky few that did have the chance to learn about programming early in school, so I can speak from experience in a way that most others cannot. So here are my thoughts on the matter.

The beauty of the machine

Programming, in my opinion, is an exercise that brings together a number of disparate elements. You need math, obviously, because computer science—the basis for programming—is all math. You also need logic and reason, talents that are in increasingly short supply among our youth. But computer programming is more than these. It’s math, it’s reasoning, it’s problem solving. But it’s also art. Some problems have more than one solution, and some of those are more elegant than others.

At first glance, it seems unreasonable to try to teach coding to children before its prerequisites. True, there are kid-friendly programming environments, like MIT’s Scratch. But these can only take you so far. I started learning BASIC in 3rd grade, at the age of 8, but that was little more than copying snippets of code out of a book and running them, maybe changing a few variables here and there for different effects. And I won’t pretend that that was anywhere near the norm, or that I was. (Incidentally, I was the only one that complained when the teacher—this was a gifted class, so we had the same teacher each year—took programming out of the curriculum.)

My point is, kids need a firm grasp of at least some math before they can hope to understand the intricacies of code. Arithmetic and some concept of algebra are the bare minimum. General computer skills (typing, “computer literacy”, that sort of thing) are also a must. And I’d want some sort of introduction to critical thinking, too, but that should be a mandatory part of schooling, anyway.

I don’t think that very young students (kindergarten through 2nd grade) should be fooling around with anything more than a simple interface to code like Scratch. (Unless they show promise or actively seek the challenge, that is. I’m firmly in favor of more educational freedom.) Actually writing code requires, well, writing. And any sort of abstraction—assembly on a fictitious processor or something like that—probably should wait until middle school.

Nor do I think that coding should be a fixed part of the curriculum. Again, I must agree somewhat with the learn-to-code detractors. Not everyone is going to take to programming, and we shouldn’t force them to. It certainly doesn’t need to be a required course for advancement. The prerequisites of math, critical thinking, writing, etc., however, do need to be taught to—and understood by—every student. Learning to code isn’t the ultimate goal, in my mind. It’s a nice destination, but we need to focus on the journey. We should be striving to make kids smarter, more well-rounded, more rational.

Broad strokes

So, if I had my way, what would I do? That’s hard to say. These posts don’t exactly have a lot of thought put in them. But I’ll give it a shot. This will just be a few ideas, nothing like an integrated, coherent plan. Also, for those outside the US, this is geared towards the American educational system. I’ll leave it to you to convert it to something more familiar.

  • Early years (K-2): The first years of school don’t need coding, per se. Here, we should be teaching the fundamentals of math, writing, science, computer use, typing, and so on. Add in a bit of an introduction to electronics (nothing too detailed, but enough to plant the seed of interest). Near the end, we can introduce the idea of programming, the notion that computers and other digital devices are not black boxes, but machines that we can control.

  • Late elementary (3-5): Starting in 3rd grade (about age 8-9), we can begin actual coding, probably starting with Scratch or something similar. But don’t neglect the other subjects. Use simple games as the main programming projects—kids like games—but also teach how programs can solve problems. And don’t punish students that figure out how to get the computer to do their math homework.

  • Middle school (6-8): Here, as students begin to learn algebra and geometry (in my imaginary educational system, this starts earlier, too), programming can move from the graphical, point-and-click environments to something involving actual code. Python, JavaScript, and C# are some of the better bets, in my opinion. Games should still be an important hook, but more real-world applications can creep in. You can even throw in an introduction to robotics. This is the point where we can introduce programming as a discipline. Computer science then naturally follows, but at a slower pace. Also, design needs to be incorporated sometime around here.

  • High school (9-12): High school should be the culmination of the coding curriculum. The graphical environments are gone, but the games remain. With the higher math taught in these grades, 3D can become an important part of the subject. Computer science also needs to be a major focus, with programming paradigms (object-oriented, functional, and so on) and patterns (Visitor, Factory, etc.) coming into their own. Also, we can begin to teach students more about hardware, robotics, program design, and other aspects beyond just code.

We can’t do it alone

Besides educators, the private sector needs to do its part if ubiquitous programming knowledge is going to be the future. There’s simply no point to teaching everyone how to code if they’ll never be able to use such a skill. Open source code, open hardware, free or low-cost tools, all these are vital to this effort. But the computing world is moving away from all of them. Apple’s iOS costs hundreds of dollars just to start developing. Android is cheaper, but the wide variety of devices means either expensive testing or compromises. Even desktop platforms are moving towards the walled garden.

This platform lockdown is incompatible with the idea of coding as a school subject. After all, what’s the point? Why would I want to learn to code, if the only way I could use that knowledge is by getting a job for a corporation that can afford it? Every other part of education has some reflection in the real world. If we want programming to join that small, elite group, then we must make sure it has a place.

Dragons in fantasy

If there is one thing, one creature, one being that we can point to as the symbol of the fantasy genre, it has to be the dragon. They’re everywhere in fantasy literature. The Hobbit, of course, is an old fantasy story that has come back into vogue in the last few years. More recent books involve dragons as major characters (Steven Erikson’s Malazan series) or as plot points (Daniel Abraham’s appropriately-titled The Dragon’s Path). Movies go through cycles, and dragons are sometimes the “in” subject (the movies based on The Hobbit, but also less recent films like Reign of Fire). Television likes dragons, too, when it has the budget to do them (Game of Thrones, of course). And we can also find these magnificent creatures represented in video games (Drakengard, Skyrim), tabletop RPGs (Dungeons & Dragons—it’s even in the name!), and music (DragonForce).

So what makes dragons so…interesting? It’s not a recent phenomenon; dragon legends go back centuries. They feature in Arthurian legend, Chinese mythology, and Greek epics. They’re everywhere, all throughout history. Something about them fires the imagination, so what is it?

The birth of the dragon

Every ancient culture, it seems, has a mythology involving giant beasts of a kind unknown to modern science. We think of the Greek myths of the Hydra, of course, but it’s only one of many. Even in the Bible, monsters are found: the leviathan and behemoth of the book of Job, for example. But something like a dragon seems to be found in almost every mythos.

How did this happen? For things like this, there are usually a few possible explanations. One, it could be a borrowing, something that arose in one culture, then spread to its neighbors. That seems plausible, except that New World peoples also have dragon-like supernatural beings, and they had them before Columbus. Another possibility is that the first idea of the dragon was invented in the deep past, before humanity spread to every corner of the globe. But that’s a bit far-fetched. You’d then have to explain how something like that stuck around for 30,000 or so years with so little change, using only art and oral transmission for most of that time.

The third option is, in my opinion, the most reasonable: the idea of dragons arose in a few different places independently, in something like convergent evolution. Each “region” would have its own dragon mythology, where the concept of “dragon” is about the same, while different regions might have wildly different ideas of what they should be.

I would also say that the same should be true for other fantastical creatures—giants, for instance—that pop up around the world. And, in my mind, there’s a perfectly good reason why these same tropes appear everywhere: fossils. We know that there used to be huge animals roaming the earth. Dinosaurs could be enormous, and you could imagine a Bronze Age hunter stumbling upon the fossilized bones of one of them and jumping to conclusions.

Even in recent geological time, it was only the Ice Age that wiped out the mammoths and so many other “megafauna”. (Today’s environmental movement tends to want to blame humans for everything bad, including this, but the evidence can be twisted just about any way you like.) In these cases, we can see the possibility that early human bands did meet these true giants, and they would have told stories about them. In time, those stories, as such stories tend to do, could have become legendary. For dragons, this one doesn’t matter too much, but it’s a point in favor of the idea that ancient peoples saw giant creatures—or their remains—and mythologized them into dragons and giants and everything else.

The nature of the beast

Moving far forward in time, we can see that the modern era’s literature has taken the time-honored myth of the dragon and given it new direction. At some point in the last few decades, authors seem to have decided that dragons must make sense. Sure, that’s completely silly from a mythological point of view, but that’s how it is.

Even in older stories, though, dragons had a purpose. That purpose was different for different stories, as it is today. For many of them, the dragon is a nemesis, an enemy. Sometimes, it’s essentially a force of nature, if not a god in its own right. In a few, dragons are good guys, protectors. Christian cultures in medieval times liked to use the slaying dragon as a symbol for the defeat of paganism. But it’s only relatively recently that the idea of dragons as “people” has become popular. Nowadays, we can find fiction where dragons are represented as magicians, sages, and oracles. A few settings even turn them into another sapient race, with their own civilization, culture, religion, and so on.

The form of dragons also depends a lot on which mythos we’re talking about. The modern perception of a dragon as a winged, bipedal serpent who breathes fire and hoards gold (in other words, more like the wyvern) is just one possibility. Plenty of cultures have wingless dragons, and most of the “true” dragons have no legs; they’re more like giant snakes. Still, there’s an awful lot of variation, and there’s no single, definitive version of a dragon.

Your own dragon

Dragons in a work of fiction, whether novel or film or game, need to be there for a reason, if you want a coherent story. You don’t have to work out a whole ecological treatise on them, showing their diets, sleep patterns, and reproductive habits—Tolkien’s dragons, for example, were supernatural creations, so they didn’t have to make scientific sense—but you should know why a dragon appears.

If there’s only one of them, there’s probably a reason why. Maybe it’s a demon, or a creation of the gods, or an avatar of chaos. Maybe it’s the sole survivor of its kind, frozen in time for millennia (that’s a big spoiler, but I’m not going to tell you for what). Whatever you come up with, you should be able to justify it with something more than “because it’s there”. The more dragons you have, the more this problem can grow. In the extreme, if they’re everywhere, why aren’t they running things?

More than their reason for existing in the first place, you need to think about their story role. Are they enemies? Are they good or evil? Can they talk? What are they like? Smaug was greedy and haughty, for instance, and it’s a conceit of D&D that dragons are complex beings that are completely misunderstood by us lesser mortals simply because we can’t understand their true motives.

Are there different kinds of dragons? Again we can look at D&D, which has a bewildering assortment even before we include wyverns, lesser drakes, and the like. Of course, a game will need a different notion of role than a novel, and gamers like variation in their enemies, but only the most jaded player would think of a dragon as anything less than a major boss character.

Another thing that’s popular is the idea that dragons can change their form to look human. This might be derived from RPGs, or the games might have taken it from an earlier source themselves. However it worked out, a lot of people like the idea of a shapeshifting dragon. (Half the characters in the aforementioned Malazan series seem to be like this, and that’s not the only example in fantasy.) Shapechanging, of course, is an important part of a lot of fantasy, and I might do a post on it later on. It is another interesting possibility, though, if you can get it right.

In a very big way, dragons-as-people poses the same problem as other fantasy races, as well as sci-fi aliens. The challenge here is to make something that feels different, something that isn’t quite human, while still making it believable for the story at hand. If dragons live for 500 years, for example, they will have a different outlook on life and history than we would. If they lay eggs—and who doesn’t like dragon eggs?—they won’t understand the pain and danger of live childbirth, among other things. The ways in which a dragon isn’t like a human are breeding grounds for conflict, both internal and external. All you have to do is follow the notion towards its logical conclusion. You know, just like everything else.

In conclusion, I’d like to say that I do like dragons, when they’re done right. They can be these imposing, alien presences beyond reason or understanding, and that is something I find interesting. But in the wrong hands, they turn into little more than pets or mounts, giant versions of dogs and horses that happen to have scales. Dragons don’t need to be noble or evil, but they should have an impact when you meet one. I mean, you’d feel amazed if you met one in real life, wouldn’t you?

Character alignment

If you’ve ever played or even read about Dungeons & Dragons or similar role-playing games (including derivative RPGs like Pathfinder or even computer games like Nethack), you might have heard of the concept of alignment. It’s a component of a character that, in some cases, can play an important role in defining that character. Depending on the Game Master (GM), alignment can be one more thing to note on a character sheet before forgetting it altogether, or it can be a role-playing straitjacket, a constant presence that urges you towards a particular outcome. Good games, of course, place it somewhere between these two extremes.

The concept also has its uses outside of the particulars of RPGs. Specifically, in the realm of fiction, the notion of alignment can be made to work as an extra “label” for a character. Rather than totally defining the character, pigeonholing him into one of a few boxes, I find that it works better as a starting point. In a couple of words, we can neatly capture a bit of a character’s essence. It doesn’t always work, and it’s far too coarse for much more than a rough draft, but it can convey the core of a character, giving us a foundation.

First, though, we need to know what alignment actually is. In the “traditional” system, it’s a measure of a character’s nature on two different scales. These each have three possible values; elementary multiplication should tell you that we have nine possibilities. Clearly, this isn’t an exact science, but we don’t need it to be. It’s the first step.

One of the two axes in our alignment graph is the time-honored spectrum of good and evil. A character can be Good, Evil, or Neutral. In a game, these would be quite important, as some magic spells detect Evil or only affect Good characters. Also, some GMs refuse to allow players to play Evil characters. For writing, this distinction by itself matters only in certain kinds of fiction, where “good versus evil” morality is a major theme. Mythic fantasy, for example, is one of these.

The second axis is a little harder to define, even among gamers. The possibilities, again, are threefold: Lawful, Chaotic, or Neutral. Broadly, this is a reflection of a character’s willingness to follow laws, customs, and traditions. In RPGs, it tends to have more severe implications than morality (e.g., D&D barbarians can’t be Lawful), but less severe consequences (few spells, for example, only affect Chaotic characters). In non-gaming fiction, I find the Lawful–Chaotic continuum to be more interesting than the Good–Evil one, but that’s just me.

As I said before, there are nine different alignments. Really, all you do is pick one value from each axis: Lawful Good, Neutral Evil, etc. Each of these affects gameplay and character development, at least if the GM wants it to. And, as it happens, each one covers a nice segment of possible characters in fiction. So, let’s take a look at them.
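
Before we do, a quick aside for the programmer types reading along: the whole system is nothing more than the cross product of two three-valued axes, so it’s trivial to spell out in code. Here’s a minimal sketch in TypeScript (my own toy illustration, not something out of any rulebook) that builds all nine labels:

    // The two alignment axes, each with three values. The names are the standard
    // D&D terms, but the code itself is only an illustration of the 3x3 grid.
    const orderAxis = ["Lawful", "Neutral", "Chaotic"];
    const moralAxis = ["Good", "Neutral", "Evil"];

    // Combine one value from each axis into the usual two-word label.
    // The center of the grid traditionally gets its own name.
    function alignmentName(order: string, moral: string): string {
      if (order === "Neutral" && moral === "Neutral") {
        return "True Neutral";
      }
      return `${order} ${moral}`;
    }

    // Enumerate the full cross product: nine alignments in total.
    const alignments = orderAxis.flatMap(order =>
      moralAxis.map(moral => alignmentName(order, moral))
    );

    console.log(alignments);
    // [ "Lawful Good", "Lawful Neutral", "Lawful Evil",
    //   "Neutral Good", "True Neutral", "Neutral Evil",
    //   "Chaotic Good", "Chaotic Neutral", "Chaotic Evil" ]

Nothing profound, but it makes the “elementary multiplication” above concrete. Now, on to the alignments themselves.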

Lawful Good

We’ll start with Lawful Good (LG). In D&D, paladins must be of this alignment, and “paladin” is a pretty good descriptor of it. Lawful Good is the paragon, the chivalrous knight, the holy saint. It’s Superman. LG characters will be Good with a capital G. They’ll fight evil, then turn the Bad Guys over to the authorities, safe in the knowledge that truth and justice will prevail.

The nicey-niceness of Lawful Good can make for some interesting character dynamics, but they’re almost all centered on situations that force the LG character to make a choice between what is legal and what is morally right. A cop or a knight isn’t supposed to kill innocents, but what happens when his inaction gets them killed? Is war just, even one waged against evil? Is a mass murderer worth saving? LG, at first, seems one-dimensional; in a way, it is. But there’s definitely a story in there. Something like Isaac Asimov’s “Three Laws of Robotics” works here, as does anything with a strict code of morality and honor.

Some LG characters include Superman, obviously, and Eddard Stark of A Song of Ice and Fire (and look where that got him). Real-world examples are harder to come by; a lot of people think they’re Lawful Good (or they aspire to it), but few can actually uphold the ideal.

Neutral Good

You can be good without being Good, and that’s what this alignment is. Neutral Good (NG) is for those who try their best to do the right thing within the law, but who aren’t afraid to take matters into their own hands when necessary (and only then). You’re still a Good Guy, but you don’t keep to the same high standards as Lawful Good, nor do you hold others to those standards.

Neutral Good fits any general “good guys” situation, but it can also be more specific. It’s not the perfect paragon that Lawful Good is. NG characters have flaws. They have suspicions. That makes them feel more “real” than LG white knights. The stories for an NG protagonist are easier to write than those for LG, because there are more possibilities. Any good-and-evil story works, for starters. The old “cop gets fired or taken off the case” plot also fits Neutral Good.

Truly NG characters are hard to find, but good guys that aren’t obviously Lawful or Chaotic fit right in. Obi-Wan Kenobi is a nice example, as Star Wars places a heavy emphasis on morality. The “everyday heroes” we see on the news are usually NG, too, and that’s a whole class that can work in short stories or a serial drama.

Chaotic Good

I’ll admit, I’m biased. I like Chaotic Good (CG) characters, so I can say the most about them, but I’ll try to restrain myself. CG characters are still good guys. They still fight evil. But they do it alone, following their own moral compass that often—but not always—points towards freedom. If laws get in the way of doing good, then a CG hero ignores them, and he worries about the consequences later.

Chaotic Good is the (supposed) alignment of the vigilante, the friendly rogue, the honorable thief, the freedom fighter working against a tyrannical, oppressive government. It’s the guys that want to do what they believe is right, not what they’re told is right. In fiction, especially modern fantasy and sci-fi, when there are characters that can be described as good, they’re usually Chaotic Good. They’re popular for quite a few reasons: everybody likes the underdog, everyone has an inner rebel, and so on. You have a good guy fighting evil, but also fighting the corruption of The System. The stories practically write themselves.

CG characters are everywhere, especially in movies and TV: Batman is one of the most prominent examples from popular culture of the last decade. But Robin Hood is CG, too. In the real world, CG fairly accurately fits most of the heroes of history, those who chose to do the right thing even knowing what it would cost. (If you’re of a religious bent, you could even make the claim that Jesus was CG. I wouldn’t argue.)

Lawful Neutral

Moving away from the good guys, we come to Lawful Neutral (LN). The best way to describe this alignment, I think, is “order above all”. Following the law (or your code of honor, promises, contracts, etc.) is the most important thing. If others come to harm because of it, that’s not your concern. It’s kind of a cold, calculating style, if you ask me, but there’s good to be had in it, and “the needs of the many outweigh the needs of the few” is completely Lawful Neutral in its sentiment.

LN, in my opinion, is hard to write as a protagonist. Maybe that’s my own Chaotic inclination talking. Still, there are plenty of possibilities. A judge is a perfect example of Lawful Neutral, as are beat cops. (More…experienced cops, as well as most lawyers, probably fall under Lawful Evil.) Political and religious leaders both fall under Lawful Neutral, and offer lots of potential. But I think LN works best in secondary characters: not the protagonist, but not the antagonist, either.

Lawful Neutral, as I said above, best describes anybody whose purpose is upholding the law without judging it. Those people aren’t likely to be called heroes, but they won’t be villains, either, except in the eyes of anarchists.

True Neutral

The intersection of the two alignment axes is the “Neutral Neutral” point, which is most commonly called True Neutral or simply Neutral (N). Most people, by default, go here. Every child is born Neutral. Every animal incapable of comprehending morality or legality is also True Neutral. But some people are there by choice. Whether they’re amoral, or they strive for total balance, or they’re simply too wishy-washy to take a stand, they stay Neutral.

Neutrality, in and of itself, isn’t that exciting. A double dose can be downright boring. But it works great as a starting point. For an origin story, we can have the protagonist begin as True Neutral, only coming to his final alignment as the story progresses. Characters that choose to be Neutral, on the other hand, are harder to justify. They need a reason, although that itself can be cause for a tale. They can make good “third parties”, too, the alternative to the extremes of Good and Evil. In a particularly dark story, even the best characters might never be more “good” than N.

True Neutral people are everywhere; they’re the ones with no clear leanings in either direction on either axis. Chosen Neutrals, on the other hand, are a little rarer. Deliberate neutrality tends to be more a quality of a group than of an individual: Zen Buddhism, Switzerland.

Chaotic Neutral

Seasoned gamers are often wary of Chaotic Neutral (CN), if only because it’s often used as the ultimate “get out of jail free” card of alignment. Some people take CN as saying, “I can do whatever I want.” But that’s not it at all. It’s individualism, freedom above all. Egalitarianism, even anarchy. For Chaotic Neutral, the self rules all. That doesn’t mean you have a license to ignore consequences; on the contrary, CN characters will often run right into them. But they’ll chalk that up as another case of The Man holding them back.

If you don’t consider Chaotic Neutral to be synonymous with Chaotic Stupid, then you have a world of character possibilities. Rebels of all kinds fall under CN. Survivalists fit here, too. Stories with a CN protagonist might be full of reflection, or of fights for freedom. Chaotic Neutral antagonists, by contrast, might stray more into the “do what I want” category. In fiction, the alignment tends to show up more in stories where there isn’t a strong sense of morality, where there are no definite good or bad guys. A dystopic sci-fi novel could easily star a CN protagonist, but a socialist utopia would see them as the villains.

Most of the less…savory sorts of rogues are CN, at least those that aren’t outright evil. Stoners and hippies, anarchists and doomsday preppers, all of these also fit into Chaotic Neutral. As for fictional characters, just about any “anti-hero” works here. The Punisher might be one example.

Lawful Evil

Evil, it might be said, is relative. Lawful Evil (LE) might even be described as contentious. I would personally describe it as tyranny, oppression. The police state in fiction is Lawful Evil, as are the police who uphold it and the politicians who created it. For the LE character, the law is the perfect way to exploit people.

Evil of any stripe works best for the bad guys, and it takes an amazing writer to pull off an Evil protagonist. LE villains, however, are perfect, especially when the hero is Chaotic Good. Greedy corporations, rogue states, and Machiavellian schemers are all Lawful Evil, and they all make great bad guys. Like CG heroes, Lawful Evil baddies are downright easy to write, although they’re certainly susceptible to overuse.

LE characters abound, nearly always as antagonists. Almost any “evil empire” of fiction is Lawful Evil. The corrupted churches popular in medieval fantasy fall under this alignment, as well. In reality, too, we can find plenty of LE examples: Hitler, the Inquisition, Dick Cheney, the list goes on.

Neutral Evil

Like Neutral Good, Neutral Evil (NE) fits best into stories where morality is key. But it’s also the best alignment to describe the kind of self-serving evil that marks the sociopath. A character who is NE is probably selfish, certainly not above manipulating others for personal gain, but definitely not insane or destructive. Vindictive, maybe.

Neutral Evil characters tend to fall into a couple of major roles. One is the counterpart to NG: the Bad Guy. This is the type you’ll see in stories of pure good and evil. The second is the true villain, the kind of person who sees everyone around him as a tool to be used and—when no longer required—discarded. It’s an amoral sort of evil, more nuanced than either Lawful or Chaotic, and thus more real. It’s easy to truly hate a Neutral Evil character.

Some of the best antagonists in fiction are NE, but so are some of the most clichéd. The superhero’s nemesis tends to be Neutral Evil, unless he’s a madman or a tyrant; the same is true of the bad guys of action movies. Real-life examples also include many corporate executives (studies claim that as many as 90% of the highest-paid CEOs are sociopaths), quite a few hacking groups (those that are doing it for the money, especially), and likely many of the current Republican presidential candidates (the Democrats tend to be Lawful Evil).

Chaotic Evil

The last of our nine alignments, Chaotic Evil (CE) embraces chaos and madness. It’s the alignment of D&D demons, true, but also psychopaths and terrorists. Pathfinder’s “Strategy Guide” describes CE as “Just wants to watch the world burn”, and that’s a pretty good way of putting it.

For a writer, though, Chaotic Evil is something of a trap. It’s almost too easy. CE characters don’t need motivations, or organization, or even coherent plans. They can act on impulse, which is certainly interesting, but maybe not the best for characterization. It’s absolutely possible to write a Chaotic Evil villain (though probably impossible to write a believably CE anti-hero), but you have to be careful not to give in to him. You can’t let him take over, because he could do anything. Chaos is inherently unpredictable.

Chaotic Evil is easy to find in fiction. Just look at the Joker, or Jason Voorhees, or every summoned demon and Mad King in fantasy literature. And, unfortunately, it’s far too easy to find CE people in our world’s history: Osama bin Laden, Charles Manson, the Unabomber, and a thousand others along the same lines.

In closing

As I stated above, alignment isn’t the whole of a character. It’s not even a part, really. It’s a guideline, a template to quickly find where a character stands. Saying that a protagonist is Chaotic Good, for instance, is a shorthand way of specifying a number of his qualities. It tells a little about him, his goals, his motivations. It even gives us a hint as to his enemies: Lawful and/or Evil characters and groups, those most distant on either alignment axis.

In some RPGs, acting “out of alignment” is a cardinal sin. It certainly is for player characters like D&D paladins, who have to adhere to a strict moral code. (How strict that code is depends on the GM.) For a fictional character in a story, it’s not so bad, but it can be jarring if it happens suddenly. Given time to develop, on the other hand, it’s a way to show the growth of a character’s morality. Good guys turn bad, lawmen go rogue, but not on a whim.

Again, alignment is not a straitjacket to constrain you, but it can be a writing aid. Sure, one size doesn’t fit all. As a lot of gamers will tell you, it’s not even necessary for an RPG. But it’s one more tool at our disposal. This simple three-by-three system lets us visualize, at a glance, a complex web of relationships, and that can be invaluable.