Worldbuilding and the level of detail

Building a world, whether a simple stage, a vast universe, or anywhere in between, is hard work. Worse, it’s work that never really ends. One author, one team of writers, one group of players—none of these could hope to create a full-featured world in anything approaching a reasonable time. We have to cut corners at every turn, just to make the whole thing possible, let alone manageable.

Where to cut those corners is the hard part. Obviously, everything pertinent to the story must be left in. But how much else do we need? Only the creator of the work can truly answer that one. Some stories may only require the most rudimentary worldbuilding. Action movies and first-person shooters, for example, don’t need much more than sets and (maybe) some character motivations. A sprawling, open-world RPG has to have a bit more effort put into it. The bigger the “scale”, the more work you’d think you need.

Level of detail

But that’s not necessarily true. In computer graphics programming, there’s a concept called mipmapping. A large texture (like the outer surface of a building) takes up quite a chunk of memory. If it’s far enough away, though, it’ll only be a few pixels on the screen. That’s wasteful—and it slows the game down—so a technique was invented where smaller versions of the texture could be loaded when an object was too far away to warrant the “full-sized” graphics. As you get closer, the object’s texture is progressively changed to better and better versions, until the game engine determines that it’s worth showing the original.

The full set of these textures, from the original possibly down to a single pixel, is called a mipmap. In some cases, it’s even possible to control how much of the mipmap is used. On lower-end machines, some games can be set to use lower-resolution textures, effectively taking off the top layer or two of the mipmap. Lower resolution means less memory usage, and less processing needed for lighting and other effects. The setting for this is usually called the level of detail.
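To make the mechanism concrete, here's a minimal sketch of how an engine might pick a mip level. The function name and numbers are invented for illustration; real engines factor in distance, screen resolution, and filtering, but the core is just comparing the texture's true size to its size on screen:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Each mip level halves the texture's resolution, so the level to use is
// roughly log2(texture pixels / on-screen pixels), clamped to the levels
// that actually exist. (A toy sketch; names and numbers are made up.)
int mipLevel(int texturePixels, int screenPixels, int maxLevel) {
    if (screenPixels <= 0) return maxLevel;  // too far away to matter
    double ratio = static_cast<double>(texturePixels) / screenPixels;
    int level = static_cast<int>(std::floor(std::log2(std::max(ratio, 1.0))));
    return std::clamp(level, 0, maxLevel);
}
```

A 1024-pixel texture drawn at 1024 pixels gets level 0 (the original); drawn at 64 pixels, it gets level 4, a sixteenth the resolution.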

Okay, so that’s what happens with computer graphics, but how, you might be wondering, does that apply to worldbuilding? Well, the idea of mipmapping is a perfect analogy to what we need to do to make a semi-believable world without spending ages on it. The things that are close to the story need the most detail, while people, places, and events far away can be painted in broad, low-res strokes. Then, if future parts of the story require it, those can be “swapped out” for something more detailed.

The level of detail is another “setting” we can tweak in a fictional work. Epic fantasy cries out for great worldbuilding. Superhero movies…not so much. Books have the room to delve deeper into what makes the world tick than the cramped confines of film. Even then, there’s a limit to how much you should say. You don’t need to calculate how many people would be infected by a zombie plague in a 24-hour period in your average metropolis. But a book might want to give some vague figures, whereas a movie would just show as many extras as the producers could find that day.

Branches of the tree

The level of detail, then, is more like a hard cutoff. This is how far down any particular path you’re willing to go for the story. But you certainly don’t need to go that far for everything. You have to pick and choose, and that’s where the mipmap-like idea of “the closer you are, the more detail you get” comes in.

Again, the needs of the story, the tone you’re trying to set, and the genre and medium are all going to affect your priorities. In this way, the mipmap metaphor is exactly backwards. We want to start with the least detail. Then, in those areas you know will be important, fill in a bit more. (This can happen as you’re writing, or in the planning stages, depending on how you work.)

As an example, let’s say you’re making something like a typical “space opera” type of work. You know it’s going to be set in the galaxy at large, or a sizable fraction of it. Now, our galaxy has 100 billion stars, but you’d be crazy to worry about one percent of one percent of that. Instead, think about where the story needs to be: the galactic capital, the home of the lost ancients, and so on. You might not even have to put them on a map unless you expect their true locations to become important, and you can always add those in later.

In the same vein, what about government? Well, does it matter? If politics won’t be a central focus, then just make a note to throw in the occasional mention of the Federation/Empire/Council, and that’s that. Only once you know what you want do you have to fill in the blanks. Technology? Same thing. Technobabble about FTL or wormholes or hyperspace would make for usable filler.

Of course, the danger is that you end up tying yourself in knots. Sometimes, you can’t reconcile your broad picture with the finer details. If you’re the type of writer who plans before putting words on the page, that’s not too bad; cross out your original idea, and start over. Seat-of-the-pants writers will have a tougher time of it. My advice there is to hold off as long as feasible before committing to any firm details. Handwaving, vague statements, and the unreliable narrator can all help here, although those can make the story seem wishy-washy. It might be best to steer the story around such obstacles instead.

The basic idea does need you to think a little bit beforehand. You have to know where your story is going to know how much detail is enough. Focus on the wrong places, and you waste effort and valuable time. Set the “level of detail” dial too low, and you might end up with a shallow story in a shallower world. In graphics, mipmaps can often be created automatically. As writers, we don’t have that luxury. We have to make all those layers ourselves. Nobody ever said building a world was easy.

Languages I love

A while back, I talked about the four programming languages I hate. Now, you get the contrast: the languages I love. Just by comparing this list to the last one, you’ll probably get a good idea of how I think, but I don’t mind. It’s my opinion. No matter how much justification I give, it’ll never be anything more. Of course, that also means that I can’t possibly be wrong, so don’t bother telling me that I am.


C++

C++ is almost universally reviled, it seems. I’m in the minority on this one, but I’m not ashamed. C++, especially the “modern” C++ that we got in 2011, is one of my favorite languages. It’s fast (running, not compiling), versatile, and ubiquitous. It really is the everything language. To some, that’s its biggest flaw, but I find it to be its greatest strength.

I don’t like languages that force me to think a certain way. That’s part of the grudge I have against Java, and it’s most of the reason I put Haskell on the “hate” list. I’d rather have a language smart enough to say, “Hey, you know what you’re doing. Use this if you need it, but if you don’t want to, that’s cool.”

C++ is like that. Want to write an imperative program? No problem. Like your functional programming instead? Every release this decade has only added FP tools. Object-oriented? It’s still around. C++ is called a multi-paradigm language, and it truly deserves that title.
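As a toy illustration of that multi-paradigm claim, here's one made-up task (sum the squares of 1 through 4) written three ways. None of this is a style recommendation, just a sketch of the range:

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// Imperative: spell out each step, mutate a running total.
int imperative() {
    int total = 0;
    for (int i = 1; i <= 4; ++i) total += i * i;
    return total;
}

// Functional: a fold with a lambda, no hand-written loop.
int functional() {
    std::vector<int> xs{1, 2, 3, 4};
    return std::accumulate(xs.begin(), xs.end(), 0,
                           [](int acc, int x) { return acc + x * x; });
}

// Object-oriented: state and behavior bundled into a class.
class Summer {
    int total_ = 0;
public:
    void add(int x) { total_ += x * x; }
    int total() const { return total_; }
};
```

All three produce 30; which one you reach for is up to you, and the language doesn't care.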

It’s not a perfect language by any means. Most of the problems stem from the necessary backwards-compatibility with C, but everyone knows (or should know) not to use those bits unless absolutely necessary. And they’re increasingly unnecessary. We don’t have to deal with raw pointers anymore. Nobody should ever be calling malloc or creating static arrays (of the int a[42] kind) in modern C++.
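For the record, here's a hedged sketch of what those modern replacements look like side by side (the function and values are invented for illustration):

```cpp
#include <array>
#include <cassert>
#include <memory>
#include <numeric>
#include <vector>

// The modern stand-ins for the C-style bits mentioned above. All three
// clean up after themselves; no free or delete in sight.
int sumDemo() {
    std::array<int, 42> fixed{};             // instead of int a[42]
    std::vector<int> dynamic(42, 1);         // instead of malloc/free
    auto owned = std::make_unique<int>(7);   // instead of new/delete
    fixed[0] = *owned;
    return std::accumulate(dynamic.begin(), dynamic.end(), fixed[0]);
}
```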

Every programming language is a compromise, but I think the good far outweighs the bad in C++. It was a close thing last decade, but now it’s no contest. Smart pointers, templates, RAII, constexpr: C++ just has too many good features. If we could drop the C stuff and get some better string handling, we might just have the perfect language.


Haxe

I’ve written about Haxe before, but I’ll reiterate some of the things I said.

Haxe basically started out as a better ActionScript, but it’s so much more, and it’s one of those languages I wish were better known. Pay attention, language makers, because this is how you do strong typing. It’s also got functional stuff if you want it, OOP if that’s your preference, and a little bit of generic programming. It’s not quite as multi-paradigm as C++, but it’s far better than most.

The main flaw with Haxe is probably its not-quite-what-you’d-expect idea of “cross-platform”. Haxe has a compiler, but no way to make native binaries by itself. You can get C++ output, and Haxe can start up GCC or Clang for you, but that’s the best you’re going to get. Beyond that, the library support is lacking, and the documentation could use some work. The language itself, though, is solid.

This is a language I want to use. It’s just hard to come up with a reason I should. Maybe, if Haxe gets more popular, people will stop seeing it as “ActionScript++” and see it for what it is: one of the most interesting programming languages around.


Scala

Put simply, Scala is what Java should be.

Internally, it’s a horrible mess, I’ll grant. I’ve seen some of the “internal” and “private” APIs, and they only serve to make my head hurt. But the outer layer, the one we coders actually use, is just fine. Scala gives you functional when you want it, OOP or imperative when you don’t. It lets you do the “your variable is really a constant” thing that FP guys love so much. (I personally don’t understand that, but whatever.) But it doesn’t force you into anything. It’s multi-paradigm and, more importantly, trusting. Exactly what I want from a programming language.

Scala is where I first learned about pattern matching and actors. That makes sense, as those are two of its biggest strengths, and they aren’t things that show up in too many other languages…at least not the ones people actually use. But those aren’t all it offers. Where it can, Scala gives you back the things Java took away, like operator overloading. Yet it’s still compatible with Java libraries, and it still runs on the JVM. (Supposedly, you can even use it on Android, but I’ve never managed to get that to work for anything more complex than “Hello, World!”)

If Java is old-school C, Scala is modern C++. That’s the way I see it. Given the choice, I’d rather use it than plain Java any day of the week. Except for speed, it’s better in almost every way. It may not be the best language out there—it does have its flaws—but it’s one of the best options if you’re in the Java “ecosystem”.

The field

I don’t love every language I come across, nor do I hate all the ones that aren’t listed above. Here are some of the ones that didn’t make either cut:

  • Python: I really like Python, but I can’t love it, for two reasons. One, it has many of the same problems as Ruby regarding speed and parallelism. Two, the version split. Python 3 has some good features, but it’s a total break with Python 2, and it has a couple of design decisions that I simply do not agree with. (Changing print from statement to function is only a symptom, not the whole problem.)

  • Perl: I find Perl fascinating…from a distance. It’s the ultimate in linguistic anarchy. But that comes with a hefty price in both execution speed and legibility. Modern Perl 5 doesn’t look as much like line noise as the older stuff, but it’s definitely not like much else. And the less said about Perl 6, the better.

  • JavaScript: You could say I have a love/hate relationship with JavaScript. It has a few severe flaws (this handling, for instance), but there’s a solid core in there. Remember how I said that C++’s good outweighs its bad? With JS, it’s almost a perfect balance.

  • Lisp: Specifically, that means Clojure here, as that’s the only one I’ve ever really tried. Lisp has some great ideas, but somebody forgot to add in the syntax. It’s too easy to get dizzy looking at all the parentheses, and editors can only do so much.

  • C: As much as I blame C for C++’s faults, I don’t hate it. Yeah, it’s horrible from a security and safety standpoint, but it’s fast and low-level. Sometimes, that’s all you need. And we wouldn’t have the great languages we have today if not for C. Can you imagine a world dominated by Pascal clones?

  • C#: I don’t hate C#. I hate the company that developed it, but the language itself is okay. It’s a better Java that isn’t anywhere near as compatible. Too much of it depends on Microsoft stuff for me to love it, but it fixes Java enough that I can’t hate it. So it falls in the middle, through no fault of its own.

Languages I hate

Anyone who has been writing code for any length of time—anyone who isn’t limited to a single programming language—will have opinions on languages. Some are to be liked, some to be loved, and a few to be hated. Naturally, which category a specific language falls into depends on who you’re talking to. In the years I’ve been coding, I’ve seriously considered probably a dozen different languages, and I’ve glanced at half again as many. Along the way, I have seen the good and the bad. In this post, I’ll give you the bad, and why I think they belong there. (Later on, I’ll do the same for my favorite languages, of course.)


Java

Let’s get this one out of the way first. I use Java. I know it. I’ve even made money writing something in it, which is more than I can say for any other programming language. But I will say this right now: I have never met anyone who likes Java.

The original intent of Java was a language that could run “everywhere” with minimal hassle. Also, it had to be enough like C++ to get the object-oriented goodness that was in vogue in the nineties, but without all that extraneous C crap that only led to buffer overflows. So everything is an object—except for “primitive” types like, say, integers. You don’t get to play with pointers—but you can get null-pointer exceptions. In early versions of the language, there was no way to make an algorithm that worked with any type; the solution was to cast everything to Object, the root class underlying the whole system. But then they bolted on generics, in a mockery of C++ templates. They do work, except for the niggling bit called type erasure.
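Type erasure is easy to demonstrate. This toy class (invented for illustration) shows that the generic parameter exists only at compile time; by runtime, both lists are plain ArrayList, and the element type is gone:

```java
import java.util.ArrayList;
import java.util.List;

// A small demonstration of type erasure: List<String> and List<Integer>
// are distinct types to the compiler, but identical at runtime.
public class Erasure {
    public static boolean sameRuntimeClass() {
        List<String> strings = new ArrayList<>();
        List<Integer> numbers = new ArrayList<>();
        return strings.getClass() == numbers.getClass(); // true
    }
}
```

This is also why you can't write instanceof List&lt;String&gt; or create a new T[] inside a generic class; the information simply isn't there when the check would run.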

And those are just some of the design decisions that make Java unbearable. There’s also the sheer verbosity of the language, a problem compounded by the tendency of new Java coders to overuse object-oriented design patterns. Factories and abstract classes have their place, but that place is not “everywhere I can put them”. Yes, that’s the fault of inexperienced programmers, but the language and its libraries (standard and 3rd-party) only reinforce the notion.

Unlike most of the other languages I hate, I have to grin and bear it with Java. It’s too widespread to ignore. Android uses it, and that’s the biggest mobile platform out there. Like it or not, Java won’t go away anytime soon. But if it’s possible, I’d rather use something like Scala.


Ruby

A few years ago, Ruby was the hipster language of choice, mostly thanks to the Rails framework. Rails was my first introduction to Ruby, and it left such a bad taste in my mouth that I went searching for something better. (Haven’t found it yet, but hope springs eternal…) Every bit of Ruby I see on the Internet only makes me that much more secure in my decision.

This one is far more subjective. Ruby just looks wrong to me, and it’s hard to explain why. Most of it is the cleverness it tries to espouse. Blocks and symbols are useful things, but the syntax rubs me the wrong way. The standard library methods let you write things like 3.times, which seems like it’s trying to be cute. I find it ugly, but that might be my C-style background. And then there’s the Unicode support. Ruby had to be dragged, kicking and screaming, into the modern world of string handling, and few of the reasons why had anything to do with the language itself.

Oh, and Ruby’s pitifully slow. That’s essentially by design. If any part of the code can add methods to core types like integers and strings, optimization becomes…we’ll just say non-trivial. Add in the Global Interpreter Lock (a problem Python also has), and you don’t even get to use multithreading to get some of that speed back. No wonder every single Ruby app out there needs such massive servers for so little gain.

And even though most of the hipsters have moved on, the community doesn’t seem likely to shed the cult-like image that they brought. Ruby fans, like those of every other mildly popular language, are zealous when it comes to defending their language. Like the “true” Pythonistas and those poor, deluded fools who hold up PHP as a model of simplicity, Ruby fanboys spin their language’s weaknesses into strengths.

Java is everywhere, and that helps spread out the hate. Ruby, on the other hand, is concentrated. Fortunately, that makes it easy to ignore.


Haskell

This one is like stepping into quicksand—I don’t know how far I’m going to sink, and there’s no one around to help me.

Haskell gets a lot of praise for its mathematical beauty, its almost-pure functional goodness, its concision (quicksort is only two lines!) and plenty of other things. I’ll gladly say that one Haskell application I use, Pandoc, is very good. But I would not want to develop it.

The Haskell fans will be quick to point out that I started with imperative programming, and thus I don’t understand the functional mindset. Some would even go as far as Dijkstra, saying that I could never truly appreciate the sheer beauty of the language. To them, I would say: then who can? The vast majority of programmers didn’t start with a functional programming language (unless you count JavaScript, but it still has C-like syntax, and that’s how most people are going to learn it). A language that no one can understand is a language no one can use. Isn’t that what we’re always hearing about C++?

But Haskell’s main problem, in my opinion, is its poor fit to real-world problems. Most things that programs need to do simply don’t fit the functional mold. Sure, some parts of them do, but the whole doesn’t. Input/output, random numbers, the list goes on. Real programs have state, and functional programming abhors state. Haskell’s answer to this is monads, but the only decent description of a monad I’ve ever seen had to convert it to JavaScript to make sense!

I don’t mind functional programming in itself. I think it can be useful in some cases, but it doesn’t work everywhere. Instead of a “pure” functional language, why can’t I have one that lets me use FP when I can, but switch back to something closer to how the system works when I need it? Oh, wait…


PHP

I’ll just leave this here.

Thoughts on Haxe

Haxe is one of those languages that I’ve followed for a long time. Not only that, but it’s the rare programming language that I actually like. There aren’t too many on that list: C++, Scala, Haxe, Python 2 (but not 3!), and…that’s just about it.

(As much as I write about JavaScript, I only tolerate it because of its popularity and general usefulness. I don’t like Java for a number of reasons—I’ll do a “languages I hate” post one of these days—but it’s the only language I’ve written professionally. I like the idea of C# and TypeScript, but they both have the problem of being Microsoft-controlled. And so on.)

About the language

Anyway, back to Haxe, because I genuinely feel that it’s a good programming language. First of all, it’s strongly-typed, and you know my opinion on that. But it’s also not so strict with typing that you can’t get things done. Haxe also has type inference, and that really, really helps you work with a strongly-typed language. Save time while keeping type safety? Why not?

In essence, the Haxe language itself looks like a very fancy JavaScript. It’s got all the bells and whistles you expect from a modern language: classes, generics, object literals, array comprehensions, iterators, and so on. You know, the usual. Just like everybody else.

But there are also a few interesting features that aren’t quite as common. Pattern matching, for instance, which is one of my favorite things from “functional” languages. Haxe also has the idea of “static extensions”, something like C#’s extension methods, which allow you to add extra functionality to classes. Really, most of the bullet points on the Haxe manual’s “Language Features” section are pretty nifty, and most of them are in some way connected to the type system. Of any language I’ve ever used, only Scala comes close to helping me understand the power and necessity of types as much as Haxe.

The platform

But wait, there’s more. Haxe is cross-platform, in its own special way. Strictly speaking, there’s no native output. Instead, you have a choice of compilation targets, and some of these can then be turned into native binaries. Most of these let you “transpile” Haxe code to another language: JavaScript, PHP, C++, C#, Java, and Python. There’s also the Neko VM, made by Haxe’s creator but not really used much, and you can even have the Haxe compiler spit out ActionScript code or a Flash SWF. (Why you would want to is a question I can’t answer.)

The standard library provides most of what you need for app development, and haxelib is the Haxe-specific answer to NPM, CPAN, et al. A few of the available libraries are very good, like OpenFL (basically a reimplementation of the Flash API). Of course, depending on your target platform, you might also be able to use libraries from NPM, the JVM, or .NET directly. It’s not as easy as it could be—you need an extern interface class, a bit like TypeScript—but it’s there, and plenty of major libraries are already wrapped for you.

The verdict

Honestly, I do like Haxe. It has its warts, but it’s a solid language that takes an idea (types as the central focus) and runs with it. And it draws in features from languages like ML and Haskell that are inscrutable to us mere mortals, allowing people some of the power of those languages without the pain that comes in trying to write something usable in a functional style. Even if you only use it as a “better” JavaScript, though, it’s worth a look, especially if you’re a game developer. The Haxe world is chock full of code-based 2D game engines and libraries: HaxePunk, HaxeFlixel, and Kha are just a few.

I won’t say that Haxe is the language to use. There’s no such thing. But it’s far better than a lot of the alternatives for cross-platform development. I like it, and that’s saying a lot.

Thoughts on types

Last week, I talked about an up-and-coming HTML5 game engine. One of the defining features of that engine was that it uses TypeScript, not regular JavaScript, for its coding. TypeScript has its problems (it’s made by Microsoft, for one), but it cuts to the heart of an argument that has raged for decades in programming circles: strong versus weak typing.

First off, here’s a quick refresher. In most programming languages, values have types. These can be simple (an integer, a string of text) or complex (a class with a deep inheritance hierarchy and 50 or so methods), but they’re part of the value’s identity. Variables can have type, too, but different languages handle that in different ways. Some require you to set a variable’s type when it is first defined, and they strictly enforce that type. Others are more lenient: if x holds the value 123, it’s an integer; if you set it to "foo", then it becomes a string. And some languages allow you to mix types in an expression, while others will throw errors the minute you even dare add a string to a number.

A position of strength

I’m of two minds on types. On the one hand, I do think that a “strong” type system, where everything knows what it is and conversions must be explicit, is good for the specific kind of programming where data corruption is an unforgivable sin. The Ada language, one of the most notorious for strict typing, was made that way for a reason: it was intended for use in situations where errors are literally life-threatening.

I also like the idea of a strongly-typed language because it can “write itself” in a sense. That’s one of the things Haskell supporters are always saying, and it’s very reminiscent of the way I solved a lot of test questions in physics class. For example, if you know your answer needs to be a force in newtons (kg m/s²), and you’re given a mass (kg), a velocity (m/s), and a time (s), then it’s pretty obvious what you need to do. The same principle can apply when you’ve got code that returns a type constructed from a number of seemingly unrelated ones: figure out the chain that takes you from A to B. You can’t really do that in, say, JavaScript, because everything can return anything.
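The same trick can be sketched in code. These tiny wrapper structs are invented for illustration, but they show the idea: once each quantity has its own type, the only way to produce a Newtons value from what you're given is the dimensionally correct chain, mass times velocity divided by time:

```cpp
#include <cassert>

// One distinct type per physical quantity; the units live in the types,
// not in comments or variable names.
struct Kilograms { double v; };
struct MetersPerSecond { double v; };
struct Seconds { double v; };
struct Newtons { double v; };

// kg * (m/s) / s = kg*m/s^2 = N; the signature spells out the derivation.
Newtons force(Kilograms m, MetersPerSecond vel, Seconds t) {
    return Newtons{m.v * vel.v / t.v};
}
```

Swap two arguments and the compiler rejects it, which is exactly the "answer needs to be in newtons" reasoning done for you.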

And strong types are an extra form of documentation, something sorely lacking in just about every bit of code out there. The types give you an idea of what you’re dealing with. If they’re used right, they can even guide you into using an API properly. Of course, that puts more work on the library developer, which means it’s less likely to actually get done, but it’s a nice thought.

The weak shall inherit

In a “weak” type system, objects can still have types, but variables don’t. That’s the case in JavaScript, where var x (or let x, if you’re lucky enough to get to use ES6) is all you have to go on. Is it a number? A string? A function? The answer: none of the above. It’s a variable. Isn’t that enough?

I can certainly see where it would be. For pure, unadulterated hacking, give me weak typing. Coding goes so much faster when you don’t have to constantly ask yourself what something should be. Scripting languages tend to be weakly-typed, and that’s probably why. When you know what you’re working with, and you don’t have to worry as much about error recovery, maintenance, or things like that, types only get in the way.

Of course, once I do need to think about changing things, a weakly-typed language starts to become more of a hindrance. Look at any large project in JavaScript or Ruby. They’re all a tangled mess of code held together by layers of validation and test suites sometimes bigger than the project itself. It’s…ugly. Worse, it creates a kind of Stockholm Syndrome where the people developing that mess think it’s just fine.

I’m not saying that testing (or even TDD) is a bad thing, mind you. It’s not. But so much of that testing is unnecessary. Guys, we’ve got a tool that can automate a lot of those tests for you. It’s called a compiler.

So, yeah, I like the idea of TypeScript…in theory. As programmers look to use JavaScript in “bigger” settings, they can’t miss the fact that it’s woefully inadequate for them. It was never meant to be anything more than a simple scripting language, and it shows. Modernizing efforts like ES5 and ES6 help, but they don’t—can’t—get rid of JavaScript’s nature as a weakly-typed free-for-all. (How bad is it? Implicit conversions have become accepted idioms. Want to turn n into a number? The “right” way is +n! Making a string is as easy as n+"", and booleans are just !!n.)
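Those idioms, spelled out. They do work, but nothing about them announces their intent:

```javascript
// JavaScript's accepted coercion idioms, one per line.
const n = "42";
const asNumber = +n;       // unary plus: string to number
const asString = 17 + "";  // concatenation with "": number to string
const asBool = !!n;        // double negation: anything to boolean
```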

That’s not to say strong typing is the way to go, either. Take that too far, and you risk the opposite problem: losing yourself in conversions. A good language, in my opinion, needs a way to enforce types, but it also needs a way to not enforce them. Sometimes, you really do want an “anything”. Java’s Object doesn’t quite work for that, nor does the C answer of void *. C++ is getting std::any soon, or so they say; that’ll be a step up. (Note: auto in C++ is type inference. That’s a different question, but I personally think it’s an absolute must for a strongly-typed language.) But those should be used only when there’s no other option.

There’s no right answer. This is one of those debates that will last forever, and all I can do is throw in my two cents. But I like to think I have an informed opinion, and that was it. When I’m hacking up something for myself, something that probably won’t be used again once I’m done, I don’t want to be bothered with types; they take too much time. Once I start coding “for real”, I need to start thinking about how that code is going to be used. Then, strong typing saves time, because it means the compiler takes care of what would otherwise be a mound of boilerplate test cases, leaving me more time to work on the core problem.

Maybe it doesn’t work for you, but I hope I’ve at least given you a reason to think about it.

Programming paradigm primer

“Paradigm”, as a word, has a bad reputation. It’s one of those buzzwords that corporate people like to throw out to make themselves sound smart. (They usually fail.) But it has a real meaning, too. Sometimes, “paradigm” is exactly the word you want. Like when you’re talking about programming languages. The alliteration, of course, is just an added bonus.

Since somewhere around the 1960s, there’s been more than one way to write programs, more than one way to view the concepts of a complex piece of software. Some of these have revolutionized the art of programming, while others mostly languish in obscurity. Today, we have about half a dozen of these paradigms with significant followings. They each have their ups and downs, and each has a specialty where it truly shines. So let’s take a look at them.

Now, it’s entirely possible for a programming language to use or encourage only a single paradigm, but it’s far more common for languages to support multiple ways of writing programs. Thanks to one Mr. Turing, we know that essentially all languages are, from a mathematical standpoint, equivalent, so you can create, say, C libraries that use functional programming. But I’m talking about direct support. C doesn’t have native objects (struct doesn’t count), for example, so it’s hard to call it an object-oriented language.

Where it all began

Imperative programming is, at its heart, nothing more than writing out the steps a program should take. Really, that’s all there is to it. They’re executed one after the other, with occasional branching or looping thrown in for added control. Assembly language, obviously, is the original imperative language. It’s a direct translation of the computer’s instruction set and the order in which those instructions are executed. (Out-of-order execution changes the game a bit, but not too much.)

The idea of functions or subroutines doesn’t change the imperative nature of such a program, but it does create the subset of structured or procedural programming languages, which are explicitly designed for the division of code into self-contained blocks that can be reused.

The list of imperative languages includes all the old standbys: C, Fortran, Pascal, etc. Notice how all these are really old? Well, there’s a reason for that. Structured programming dates back decades, and all the important ideas were hashed out long before most of us were born. That’s not to say that we’ve perfected imperative programming. There’s always room for improvement, but we’re far into the realm of diminishing returns.

Today, imperative programming is looked down upon by many. It’s seen as too simple, too dumb. And that’s true, but it’s far from useless. Shell scripts are mostly imperative, and they’re the glue that holds any operating system together. Plenty of server-side code gets by just fine, too. And then there’s all that “legacy” code out there, some of it still in COBOL…

The imperative style has one significant advantage: its simplicity. It’s easy to trace the execution of an imperative program, and they’re usually going to be fast, because they line up well with the computer’s internal methods. (That was C’s original selling point: portable assembly language.) On the other hand, that simplicity is also its biggest weakness. You need to do a lot more work in an imperative language, because they don’t exactly have a lot of features.

Object-oriented programming

In the mid-90s, object-oriented programming (OOP) got big. And I do mean big. It was all the rage. Books were written, new languages created, and every coding task was reimagined in terms of objects. Okay, but what does that even mean?

OOP actually dates back much further than you might think, but it only really began to get popular with C++. Then, with Java, it exploded, mainly from marketing and the dot-com bubble. The idea that got so hot was that of objects. Makes sense, huh? It’s right there in the name.

Objects, reduced to their most basic, are data structures that are deeply entwined with code. Each object is its own type, no different from integers or strings, but they can have customized behavior. And you can do things with them. Inheritance is one of them: creating a new type of object (class) that mimics an existing one, but with added functionality. Polymorphism is the other: functions that work differently depending on what type of object they’re acting on. Together, inheritance and polymorphism work to relieve a huge burden on coders, by making it easier to work with different types in the same way.
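A minimal sketch of both ideas; the Shape, Circle, and Square classes here are invented for illustration:

```javascript
// Inheritance: Circle and Square extend an existing class.
class Shape {
  area() { return 0; }
}

class Circle extends Shape {
  constructor(r) { super(); this.r = r; }
  area() { return Math.PI * this.r * this.r; } // overrides the base method
}

class Square extends Shape {
  constructor(s) { super(); this.s = s; }
  area() { return this.s * this.s; }
}

// Polymorphism: one call, different behavior per type.
const areas = [new Circle(1), new Square(2)].map(shape => shape.area());
```

The caller never needs to know which concrete type it's holding, which is the burden-relieving part.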

That’s the gist of it, anyway. OOP, because of its position as the dominant style when so much new blood was entering the field, has a ton of information out there. Design patterns, best practices, you name it. And it worked its way into every programming language that existed 10-15 years ago. C++, Java, C#, and Objective-C are the most used of the “classic” OOP languages today, although every one of them offers other options (including imperative, if you need it). Most scripting-type languages have it bolted on somewhere, such as Python, Perl, and PHP. JavaScript is a bit special, in that it uses a different kind of object-oriented programming, based on prototypes rather than classes, but it’s no less OOP.

OOP, however, has a couple of big disadvantages. One, it can be confusing, especially if you use inheritance and polymorphism to their fullest. It’s not uncommon, even in the standard libraries of Java and C#, to have a class that inherits from another class, which inherits from another, and so on, 10 or more levels deep. And each subclass can add its own functions, which are passed on down the line. There’s a reason why Java and C# are widely regarded as having some of the most complete documentation of any programming language.

The other disadvantage is the cause of why OOP seems to be on the decline. It’s great for code reuse and modeling certain kinds of problems, but it’s a horrible fit for some tasks. Not everything can be boiled down to objects and methods.

What’s your function?

That leads us to the current hotness: functional programming, or FP. The functional fad started as a reaction to overuse of OOP, but (again) its roots go way back.

While OOP tries to reduce everything to objects, functional programming, shockingly enough, models the world as a bunch of functions. Now, “function” in this context doesn’t necessarily mean the same thing as in other types of programming. Usually, for FP, these are mathematical functions: they have one output for every input, no matter what else is happening. The ideal, called pure functional programming, is a program free of side effects, such that it is entirely deterministic. (The problem with that? “Side effects” includes such things as user input, random number generation, and other essentials.)
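A small sketch of the distinction, with invented names throughout:

```javascript
// Pure: one output for every input, no matter what else is happening.
function add(a, b) {
  return a + b;
}

// Impure: the result depends on (and changes) outside state.
let counter = 0;
function nextId() {
  counter += 1;   // side effect: mutates outer state
  return counter;
}

// Functions that act on other functions, another FP staple:
const double = x => x * 2;
const doubled = [1, 2, 3].map(double); // [2, 4, 6]
```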

FP has had its biggest success with languages like Haskell, Scala, and—amazingly enough—JavaScript. But functional, er, functions have spread to C++ and C#, among others. (Python, interestingly, has rejected, or at least deprecated, some functional aspects.)

It’s easy to see why. FP’s biggest strength comes from its mathematical roots. Logically, it’s dead simple. You have functions, functions that act on other functions, functions that work with lists, and so on. All of the basic concepts come straight from math, and mistakes are easily found, because they stick out like a sore thumb.

So why hasn’t it caught on? Why isn’t everybody using functional programming? Well, most people are, just in languages that weren’t entirely designed for it. The core of FP is fairly language-agnostic. You can write functions without side effects in C, for example, it’s just that a lot of people don’t.

But FP isn’t everywhere, and that’s because it’s not really as simple as its proponents like to believe. Like OOP, not everything can be reduced to a network of functions. Anything that requires side effects means we have to break out of the functional world, and that tends to be messy. (Haskell’s method of doing this, the monad, has become legendary in the confusion it causes.) Also, FP code really, really needs a smart interpreter, because its mode of execution is so different from how a computer runs, and because it tends to work at a higher level of abstraction. But interpreters are universally slower than native, relegating most FP code to those higher levels, like the browser.

Your language here

Another programming paradigm that deserves special mention is generic programming. This one’s harder to explain, but it goes something like this: you write functions that accept a set of possible types, then let the compiler figure out what “real” type to use. Unlike OOP, the types don’t have to be related; anything that fits the bill will work.

Generic programming is the idea behind C++ templates and Java or C# generics, and it's really only used in statically typed languages like those, though many dynamic languages have "duck typing", which works in a similar fashion. It's certainly powerful; most of the C++ standard library uses templates in some fashion, and that share is only growing. But it's complicated, and you can tie your brain in knots trying to figure out what's going on. Plus, templates are well-known time sinks for compilers, and they can increase code size by some pretty big factors. Duck typing, the "lite" form of generic programming, has neither problem, but it can be awfully slow, and it usually shows up in languages that are already slow, only compounding the problem.
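A rough sketch of the duck-typed flavor, with made-up objects; note that dog and robot share no class or interface:

```javascript
// Duck typing: if it has a speak() method, it qualifies.
const dog = { speak: () => "woof" };
const robot = { speak: () => "beep" };

function makeSpeak(thing) {
  return thing.speak(); // no type relationship required
}
```

A C++ template works much the same way, except the check happens at compile time rather than at runtime.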

What do I learn?

There’s no one right way to code. If we’ve learned anything in the 50+ years the human race has been doing it, it’s that. From a computer science point of view, functional is the way to go right now. From a business standpoint, it’s OOP all the way, unless you’re looking at older code. Then you’ll be going procedural.

And then there are all those I didn’t mention: reactive, event-driven, actor model, and dozens more. Each has its own merits, its own supporters, and languages built around it.

My best advice is to learn whatever your preferred language offers first. Then, once you're comfortable, move on, and never stop learning. Even if you'll never use something like Eiffel in a serious context, it has explored an idea that could be useful in the language you do use. (In this case, contract programming.) The same could be said for Erlang, or F#, or Clojure, or whatever tickles your fancy. Just resist the temptation to become a zealot. Nobody likes them.

Now, some paradigms are harder than others, in my opinion. For someone who started with imperative programming, the functional mindset is hard to adjust to. Similarly, OOP isn’t easy if you’re used to Commodore BASIC, and even experienced JavaScript programmers are tripped up by prototypes. (I know this one first-hand.)

That’s why I think it’s good that so many languages are adopting a “multi-paradigm” approach. C++ really led the way in this, but now it’s popping up everywhere among the “lower” languages. If all paradigms (for some suitable value of “all”) are equal, then you can use whatever you want, whenever you want. Use FP for the internals, wrapped by an event-driven layer for I/O, calling OOP or imperative libraries when you need them. Some call it a kitchen-sink approach, but I see programmers as like chefs, and every chef needs a kitchen sink.

On learning to code

Coding is becoming a big thing right now, particularly as an educational tool. Some schools are promoting programming and computer science classes, even a full curriculum that lasts through the entirety of education. And then there are the commercial and political movements such as the Hour of Code. It seems that everyone wants children to learn something about computers, beyond just how to use them.

On the other side of the debate are the detractors of the "learn to code" push, who argue that it's a boondoggle at best. Not everybody can learn how to code, they say, nor should they. We're past the point where anyone who wants to use a computer must learn to program it, too.

Both camps have a point, and I can see some merit in either side of the debate. I was one of the lucky few who did have the chance to learn about programming early in school, so I can speak from experience in a way that most others cannot. So here are my thoughts on the matter.

The beauty of the machine

Programming, in my opinion, is an exercise that brings together a number of disparate elements. You need math, obviously, because computer science—the basis for programming—is all math. You also need logic and reason, talents that are in increasingly short supply among our youth. But computer programming is more than these. It’s math, it’s reasoning, it’s problem solving. But it’s also art. Some problems have more than one solution, and some of those are more elegant than others.

At first glance, it seems unreasonable to try to teach coding to children before its prerequisites. True, there are kid-friendly programming environments, like MIT’s Scratch. But these can only take you so far. I started learning BASIC in 3rd grade, at the age of 8, but that was little more than copying snippets of code out of a book and running them, maybe changing a few variables here and there for different effects. And I won’t pretend that that was anywhere near the norm, or that I was. (Incidentally, I was the only one that complained when the teacher—this was a gifted class, so we had the same teacher each year—took programming out of the curriculum.)

My point is, kids need a firm grasp of at least some math before they can hope to understand the intricacies of code. Arithmetic and some concept of algebra are the bare minimum. General computer skills (typing, “computer literacy”, that sort of thing) are also a must. And I’d want some sort of introduction to critical thinking, too, but that should be a mandatory part of schooling, anyway.

I don’t think that very young students (kindergarten through 2nd grade) should be fooling around with anything more than a simple interface to code like Scratch. (Unless they show promise or actively seek the challenge, that is. I’m firmly in favor of more educational freedom.) Actually writing code requires, well, writing. And any sort of abstraction—assembly on a fictitious processor or something like that—probably should wait until middle school.

Nor do I think that coding should be a fixed part of the curriculum. Again, I must agree somewhat with the learn-to-code detractors. Not everyone is going to take to programming, and we shouldn’t force them to. It certainly doesn’t need to be a required course for advancement. The prerequisites of math, critical thinking, writing, etc., however, do need to be taught to—and understood by—every student. Learning to code isn’t the ultimate goal, in my mind. It’s a nice destination, but we need to focus on the journey. We should be striving to make kids smarter, more well-rounded, more rational.

Broad strokes

So, if I had my way, what would I do? That’s hard to say. These posts don’t exactly have a lot of thought put in them. But I’ll give it a shot. This will just be a few ideas, nothing like an integrated, coherent plan. Also, for those outside the US, this is geared towards the American educational system. I’ll leave it to you to convert it to something more familiar.

  • Early years (K-2): The first years of school don’t need coding, per se. Here, we should be teaching the fundamentals of math, writing, science, computer use, typing, and so on. Add in a bit of an introduction to electronics (nothing too detailed, but enough to plant the seed of interest). Near the end, we can introduce the idea of programming, the notion that computers and other digital devices are not black boxes, but machines that we can control.

  • Late elementary (3-5): Starting in 3rd grade (about age 8-9), we can begin actual coding, probably starting with Scratch or something similar. But don’t neglect the other subjects. Use simple games as the main programming projects—kids like games—but also teach how programs can solve problems. And don’t punish students that figure out how to get the computer to do their math homework.

  • Middle school (6-8): Here, as students begin to learn algebra and geometry (in my imaginary educational system, this starts earlier, too), programming can move from the graphical, point-and-click environments to something involving actual code. Python, JavaScript, and C# are some of the better bets, in my opinion. Games should still be an important hook, but more real-world applications can creep in. You can even throw in an introduction to robotics. This is the point where we can introduce programming as a discipline. Computer science then naturally follows, but at a slower pace. Also, design needs to be incorporated sometime around here.

  • High school (9-12): High school should be the culmination of the coding curriculum. The graphical environments are gone, but the games remain. With the higher math taught in these grades, 3D can become an important part of the subject. Computer science also needs to be a major focus, with programming paradigms (object-oriented, functional, and so on) and patterns (Visitor, Factory, etc.) coming into their own. Also, we can begin to teach students more about hardware, robotics, program design, and other aspects beyond just code.

We can’t do it alone

Besides educators, the private sector needs to do its part if ubiquitous programming knowledge is going to be the future. There's simply no point to teaching everyone how to code if they'll never be able to use such a skill. Open source code, open hardware, free or low-cost tools, all these are vital to this effort. But the computing world is moving away from all of them. Developing for Apple's iOS costs hundreds of dollars just to get started. Android is cheaper, but the wide variety of devices means either expensive testing or compromises. Even desktop platforms are moving towards the walled garden.

This platform lockdown is incompatible with the idea of coding as a school subject. After all, what’s the point? Why would I want to learn to code, if the only way I could use that knowledge is by getting a job for a corporation that can afford it? Every other part of education has some reflection in the real world. If we want programming to join that small, elite group, then we must make sure it has a place.

Thoughts on ES6

ES6 is out, and the world will never be the same. Or something like that. JavaScript won’t be the same, at least once the browsers finish implementing the new standard. Of course, by that time, ES7 will be completed. It’s just like every other standardized language. Almost no compilers fully support C++14, even in 2015, and there will certainly be holes in their coverage two years from now, when C++17 (hopefully) arrives. C# and Java programmers are lucky, since their releases are dictated by the languages’ owners, who don’t have to worry about compatibility. The price of a standard, huh?

Anyway, ES6 does bring a lot to the table. It’s almost a total overhaul of JavaScript, in my opinion, and most of it looks to be for the better. New idioms and patterns will arise over the coming years. Eventually, we may even start talking about “Modern JavaScript” the way we do “Modern C++” or “Modern Perl”, indicating that the “old” way of thinking is antiquated, deprecated. The new JS coder in 2020 might wonder why we ever did half the things we did, the same way kids today wonder how anyone got by with only 4 MB of memory or a 250 MB hard drive (like my first PC).

Babel has an overview of the new features of ES6, so I won’t repeat them here. I will offer my opinions on them, though.

Classes and Object Literals

I already did a post on classes, with a focus on using them in games. But they’re useful everywhere, especially as so many JavaScript programmers come in from languages with a more “traditional” approach to objects. Even for veterans, they come in handy. We no longer have to reinvent the wheel or use an external library for simple inheritance. That’s a good thing.

The enhancements to object literals are nice, too. Mostly, I like the support for prototype objects, methods, and super. Those will give a big boost to the places where we don’t use classes. The shorthand assignments are pure sugar, but that’s really par for the course in ES6: lots of syntactic conveniences to help us do the things we were already doing.
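A small sketch of those enhancements, using an invented point object:

```javascript
const x = 1, y = 2;
const key = "label";

const point = {
  x,                    // shorthand for x: x
  y,
  [key]: "example",     // computed property name
  dist() {              // method shorthand
    return Math.hypot(this.x, this.y);
  }
};
```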

Modules

I will do a post on modules for game development, I promise. For now, I’d like to say that I like the idea of modules, though I’m not totally sold on their implementation and syntax. I get why the standards people did it the way they did, but it feels odd, especially as someone who has been using CommonJS and AMD modules for a couple of years.

No matter what you think of them, modules will be one of the defining points of ES6. Once modules become widespread, Browserify becomes almost obsolete, RequireJS entirely so. The old pattern of adding a property to the global window goes…out the window. (Sorry.) ES6 modules are a little more restrictive than those of Node, but I’d start looking into them as soon as the browsers start supporting them.

Promises

Async programming is the hot thing right now, and promises are ES6's answer. It's not full-on threading, and it's distinct from Web Workers, but this could be another area to watch. Having a language-supplied async system will mean that everybody can use it, just like C++11's threads and futures. Once people can use something, they will use it.

Promises, I think, will definitely come into their own for AJAX-type uses. If JavaScript ever gets truly big for desktop apps, then event-driven programming will become even more necessary, and that's another place I see promises becoming important. Games will probably make use of them, too, if they don't cause too much of a hit to speed.
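A sketch of what a promise chain looks like; the delay helper below is an invented stand-in for an AJAX call or any other async operation:

```javascript
// delay resolves with a value after a timeout, like a tiny fake AJAX call.
function delay(ms, value) {
  return new Promise(resolve => setTimeout(() => resolve(value), ms));
}

delay(10, "data")
  .then(result => result.toUpperCase()) // runs when the promise resolves
  .then(upper => console.log(upper))    // prints "DATA"
  .catch(err => console.error(err));    // one spot for errors in the chain
```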

Generators and Iterators

Both of these really help a lot of programming styles. (Comprehensions do, too, but they were pushed back to ES7.) Iterators finally give us an easy way of looping over an array’s values, something I’ve longed for. They also work for custom objects, so we can make our own collections and other nifty things.

You might recognize generators from Python. (That’s where I know them from.) When you use them, it will most likely be for making your own iterable objects. They’ll also be handy for async programming and coroutines, if they’re anything like their Python counterparts.
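A sketch of a generator-backed iterable; range here is an invented helper, loosely modeled on Python's:

```javascript
// A generator function: each yield hands out the next value on demand.
function* range(start, end) {
  for (let i = start; i < end; i++) {
    yield i;
  }
}

// for-of works with any iterable, including our generator.
const values = [];
for (const n of range(1, 4)) {
  values.push(n); // 1, then 2, then 3
}
```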

Syntactic Sugar

A lot of additions to ES6 are purely for aesthetic purposes, so I’ll lump them all together here, in the same order as Babel’s “Learn ES2015” page that I linked above.

  • Arrows: Every JavaScript clone (CoffeeScript, et al.) has a shortcut for function literals, so there’s no reason not to put one in the core language. ES6 uses the “fat” arrow =>, which stands out. I like that, and I’ll be using it as soon as possible, especially for lambda-like functions. The only gotcha here? Arrow functions don’t get their own this, so watch out for that.

  • Template Strings: String interpolation using ${}. Took long enough. This will save pinkies everywhere from over-stretching. Anyway, there’s nothing much to complain about here. It’s pretty much the same thing as PHP, and everybody likes that. Oh, wait…

  • Destructuring: One of those ideas where you go, “Why didn’t they think of it sooner?”

  • Function Parameters: All these seem to be meant to get rid of any use for arguments, which is probably a good thing. Default parameters were sorely needed, and “rest” parameters will mean one more way to prevent off-by-one errors. My advice? Start using these ASAP.

  • let & const: Everybody complains about JavaScript’s scoping rules. let is the answer to those complaints. It gives you block-scoped variables, just like you know from C, C++, Java, and C#. var is still there, though, as it should be. For newer JS coders coming from other languages, I’d use let everywhere to start. const gives you, well, constants. Those are nice, but module exports remove one reason for constants, so I don’t see const getting quite as much use.

  • Binary & Octal Literals: Uh, yeah, sure. I honestly don’t know how much use these are in any higher-level language nowadays. But they don’t hurt me just by being there, so I’m not complaining.
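Most of the sugar above fits in a few lines; every name here is invented for illustration:

```javascript
const squareOf = n => n * n;               // arrow function

const [first, ...others] = [1, 2, 3, 4];   // destructuring with a rest element

function greet(name = "world") {           // default parameter
  return `Hello, ${name}!`;                // template string
}

function sum(...nums) {                    // rest parameters instead of arguments
  return nums.reduce((a, b) => a + b, 0);
}

{
  let scoped = "visible only in this block"; // let is block-scoped, unlike var
}
```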

Library Additions

This is kind of an “everything else” category. ES6 adds quite a bit to the standard library. Everything that I don’t feel is big enough to warrant its own section goes here, again in the order shown on “Learn ES2015”.

  • Unicode: It’s about time. Not just the Unicode literal strings, but the String and RegExp support for higher characters. For anyone working with Unicode, ES6 is a godsend. Especially if you’re doing anything with emoji, like, say, making a language that uses them.

  • Maps and Sets: If these turn out to be more efficient than plain objects, then they’ll be perfect; otherwise, I don’t think they are terribly important. In fact, they’re not that hard to make yourself, and it’s a good programming exercise. WeakMap and WeakSet are more specialized; if you need them, then you know you need them, and you probably won’t care about raw performance.

  • Proxies: These are going to be bigger on the server side, I think. Testing will get a big boost, too, but I don’t see proxies being a must-have feature in the browser. I’d love to be proven wrong, though.

  • Symbols: Library makers might like symbols. With the exception of the builtins, though, some of us might not even notice they’re there. Still, they could be a performance boost if they’re faster than strings as property keys.

  • Subclassing: Builtin objects like Array and Date can be subclassed in ES6. I’m not sure how I feel on that. On the plus side, it’s good for consistency and for the times when you really do need a custom array that acts like the real thing. However, I can see this being overused at first.

  • New APIs: The new builtin methods are all welcome additions. The Array stuff, particularly, is going to be helpful. Math.imul() and friends will speed up low-level tasks, too. And the new methods for String (like startsWith()) should have already been there years ago. (Of all the ES6 features, these are the most widely implemented, so you might be able to use them now.)

  • Reflection: Reflection is always a cool feature, but it almost cries out to be overused and misused. Time will tell.
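A quick, invented sketch of a few of these additions:

```javascript
const seen = new Set([1, 2, 2, 3]);           // duplicates dropped; size is 3

const scores = new Map();
scores.set("player1", 10);                    // keys aren't limited to strings

const found = [3, 5, 8].find(n => n > 4);     // new Array method: first match

const starts = "Hello, world".startsWith("Hello"); // new String method
```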


ES6 has a lot of new, exciting features, but it'll take a while before we can use them everywhere. Still, I think there's enough in there to get started learning right now. But there are going to be a lot of projects that will soon become unnecessary. Well, that's the future for you, and what a bright future it is. One thing's for sure: JavaScript will never be the same.

First Languages for Game Development

If you’re going to make a game, you’ll need to do some programming. Even the best drag-and-drop or building-block environment won’t always be enough. At some point, you’ll have to make something new, something customized for your on game. But there are a lot of options out there. Some of them are easier, some more complex. Which one to choose?

In this post, I’ll offer my opinion on that tough decision. I’ll try to keep my own personal feelings about a language out of it, but I can’t promise anything. Also, I’m admittedly biased against software that costs a lot of money, but I know that not everyone feels the same way, so I’ll bite my tongue. I’ll try to give examples (and links!) of engines or other environments that use each language, too.

No Language At All

Examples: Scratch, Construct2, GameMaker

For a very few cases, especially the situation of kids wanting to make games, the best choice of programming language might be “None”. There are a few engines out there that don’t really require programming. Most of these use a “Lego” approach, where you build logic out of primitive “blocks” that you can drag and connect.

This option is certainly appealing, especially for those that think they can’t “do” programming. And successful games have been made with no-code engines. Retro City Rampage, for example, is a game created in GameMaker, and a number of HTML5 mobile games are being made in Construct2. Some other engines have now started creating their own “no programming required” add-ons, like the Blueprints system of Unreal Engine 4.

The problem comes when you inevitably exceed the limitations of the engine, when you need to do something its designers didn't include a block for. For children and younger teens, this may never happen, but anyone wanting to go out of the box might need more than they can get from, say, Scratch's colorful jigsaw pieces. When that happens, some of these engines have a fallback: Construct2 lets you write plugins in JavaScript, while GameMaker has its own language, GML, and the newest version of RPG Maker uses Ruby.

Programming, especially game programming, is hard, there’s no doubt about it. I can understand wanting to avoid it as much as possible. Some people can, and they can make amazing things. If you can work within the limitations of your chosen system, that’s great! If you need more, though, then read on.

JavaScript

Examples: Unity3D, Phaser

JavaScript is everywhere. It’s in your browser, on your phone, and in quite a few desktop games. The main reason for its popularity is almost tautological: JavaScript is everywhere because it’s everywhere. For game programming, it started coming into its own a few years ago, as mobile gaming exploded and browsers became optimized enough to run it at a decent speed. With HTML5, it’s only going to get bigger, and not just for games.

As a language, JavaScript is on the easy side, except for a few gotchas that trip up even experienced programmers. (There’s a reason why it has a book subtitled “The Good Parts”.) For the beginner, it certainly offers the easiest entry: just fire up your browser, open the console, and start typing. Unity uses JS as its secondary language, and about a million HTML5 game engines use it exclusively. If you want to learn, there are worse places to start.

Of course, the sheer number of engines might be the language’s downfall. Phaser might be one of the biggest pure JS engines right now, but next year it could be all but forgotten. (Outside of games, this is the case with web app frameworks, which come and go with surprising alacrity.) On top of that, HTML5 engines often require installation of NodeJS, a web server, and possibly more. All that can be pretty daunting when all you want to do is make a simple game.

Personally, I think JavaScript is a good starting language if you’re careful. Would-be game developers might be better off starting with Unity or Construct2 (see above) rather than something like Phaser, though.

C++ (with a few words on C)

Examples: Unreal Engine 4, SFML, Urho3D

C++ is the beast of the programming world. It’s big, complex, hard to learn, but it is fast. Most of today’s big AAA games use C++, especially for the most critical sections of code. Even many of the high-level engines are themselves written in C++. For pure performance, there’s not really any other option.

Unfortunately, that performance comes at a price. Speaking as someone who learned C++ as his second programming language, I have to say that it’s a horrible choice for your first. There’s just too much going on. The language itself is huge, and it can get pretty cryptic at times.

C is basically C++’s older brother. It’s nowhere near as massive as C++, and it can sometimes be faster. Most of your operating system is likely written in C, but that doesn’t make it any better of a choice for a budding game programmer. In a way, C is too old. Sure, SDL is a C library, but it’s going to be the lowest level of your game engine. When you’re first starting out, you won’t even notice it.

As much as I love C++ (it’s probably my personal favorite language right now), I simply can’t recommend starting with it. Just know that it’s there, but treat it as a goal, an ideal, not a starting point.

Lua

Examples: LÖVE, many others as a scripting or modding language

Lua is pretty popular as a scripting language. Lots of games use it for modding purposes, with World of Warcraft by far the biggest. For that reason alone, it might be a good start. After all, making mods for games can be a rewarding start to game development. Plus, it’s a fairly simple language that doesn’t have many traps for the unwary. Although I’ll admit I don’t know Lua as well as most of the other languages in this list, I can say that it can’t be too bad if so many people are using it. I do get a kind of sense that people don’t take it seriously enough for creating games, though, so take from that what you will.

C#

Examples: Unity3D, MonoGame

C# has to be considered a good candidate for a first language simply because it's the primary language of Unity. Sure, you can write Unity games in JavaScript, but there are a few features that require C#, and most of the documentation assumes that's what you'll be using.

As for the language itself, C# is good. Personally, I don’t think it’s all that pretty, but others might have different aesthetic sensibilities. It used to be that C# was essentially Microsoft-only, but Mono has made some pretty good strides in recent years, and some developments in 2015 (including the open-sourcing of .NET Core) show positive signs. Not only that, but my brother finds it interesting (again, thanks to Unity), so I almost have to recommend at least giving it a shot.

The downside of C# for game programming? Yeah, learning it means you get to use Unity. But that's about all you get to use. Besides MonoGame and the defunct XNA, C# doesn't see a lot of use in the game world. For the wider world of programming, though, it's one of the standard languages, the Microsoft-approved alternative to…

Java

Examples: LibGDX, JMonkeyEngine, anything on Android

Java is the old standard for cross-platform coding. The Java Virtual Machine runs just about anywhere you can think of, even places it shouldn’t (like a web browser). It’s the language of Minecraft and your average Android app. And it was meant to be so simple, anybody could learn it. Sounds perfect, don’t it?

Indeed, Java is simple to learn. And it has some of the best tools in the world. But it also has some of the slowest, buggiest, most bloated and annoying tools you have ever had the misfortune of using. (These sets do overlap, by the way.) The language itself is, in my opinion, the very definition of boring. I don’t know why I feel that way, but I do. Maybe because it’s so simple, a child could use it.

Obviously, if you’re working on Android, you’re going to use Java at some point. If you have an engine that runs on other platforms, you might not have to worry about it, since “native” code on Android only needs a thin Java wrapper that Unity and others provide for you. If you’re not targeting Android, Java might not be on your radar. I can’t blame you. Sure, it’s a good first language, but it’s not a good language. The me from five years ago would never believe I’m saying this, but I’d pick C# over Java for a beginning game developer.

Python

Examples: Pygame, RenPy

I’ll gladly admit that I think Python is one of the best beginner languages out there. It’s clean and simple, and it does a lot of things right. I’ll also gladly admit that I don’t think it can cut it for game programming. I can say this from experience, as I’ve tried to write a 2D game engine in Python. (It’s called Pyrge, and you can find the remnants of it on my Github profile, which I won’t link here out of embarrassment.) It’s hard, mostly because the tools available aren’t good enough. Python is a programmer’s language, and Pygame is a wonderful library, but there’s not enough there for serious game development.
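To give you an idea of what an engine has to provide, here’s the heart of any 2D game: the fixed-timestep loop. Pygame wraps this pattern for you (its `Clock` object handles the timing), and an engine like Pyrge has to build everything else on top of it. This is a plain-Python sketch with no Pygame dependency; `run_loop` and its parameters are names I’ve made up for illustration.

```python
import time

def run_loop(total_frames, fps=60):
    """Minimal fixed-timestep game loop: update, draw, then sleep off
    whatever is left of the frame to hold a steady frame rate."""
    frame_time = 1.0 / fps  # seconds per frame at the target rate
    frames_run = 0
    for _ in range(total_frames):
        start = time.perf_counter()
        # A real engine would call update() and draw() here.
        frames_run += 1
        elapsed = time.perf_counter() - start
        if elapsed < frame_time:
            time.sleep(frame_time - elapsed)  # don't outrun the clock
    return frames_run

print(run_loop(3))  # 3
```

Writing that part is easy; the hard part is everything around it — sprites, input, sound, scene management — and that’s where Python’s ecosystem runs out of steam for games.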

There’s always a “but”. For the very specific field of “visual novels”, Python does work. RenPy is a nice little tool for that genre, and it’s been used for quite a few successful games. They’re mostly of the…adult variety, but who’s counting? If that’s what you want to make, then Python might be the language for you, just because of RenPy. Otherwise, as much as I love it, I can’t really recommend it. It’s a great language to learn the art of programming, but games have different requirements, and those are better met by other options.

Engine-Specific Scripting

Examples: GameMaker, Godot Engine, Torque, Inform 7

Some engine developers make their own languages. The reasons why are as varied as the engines themselves, but they aren’t all that important. What is important is that these engine-specific languages are often the only way to interact with those environments. That can be good and bad. The bad, obviously, is that what you learn in GML or GDScript or TorqueScript doesn’t carry over to other languages. Sometimes, that’s a fair trade, as the custom language can better interact with the guts of the engine, giving a performance boost or just a better match to the engine’s quirks. (The counter to this is that some engines use custom scripting languages to lock you into their product.)

I can’t evaluate each and every engine-specific programming language. Some of them are good, some are bad, and almost all of them are based on some other language. Godot’s GDScript, for example, is based on Python, while TorqueScript is very much a derivative of JavaScript. Also, I can’t recommend any of these languages. The engines, on the other hand, all have their advantages and disadvantages. I already discussed GameMaker above, and I think Godot has a lot of promise (I’m using it right now), but I wouldn’t say you should use it because of its scripting language. Instead, learn the scripting language if you like the engine.

The Field

There are plenty of other options that I didn’t list here. Whether because I’m not that familiar with the language, because it doesn’t see much use in game development, or because it doesn’t really work as a first language, it didn’t make the list above. So here are some of the “best of the rest” options, along with some of the places they’re used:

  • Swift (SpriteKit) and Objective-C (iOS): I don’t have a Mac, which is a requirement for developing iOS apps, and Swift is really only useful for that purpose. Objective-C actually does work for cross-platform programming, but I’m not aware of any engines that use it, except those that are Apple-specific.

  • Haxe (HaxeFlixel): Flash is dying as a platform, and Haxe (through OpenFL) is its spiritual successor. HaxeFlixel is a 2D engine that I’ve really tried to like. It’s not easy to get into, though. The language itself isn’t that bad, though it may be more useful for porting old Flash stuff than making new games.

  • Ruby (RPG Maker VX Ace): Ruby is one of those things I have an irrational hatred for, like broccoli and reality shows. (My hatred of cats, on the other hand, is entirely rational.) Still, I can’t deny that it’s a useful language for a lot of people. And it’s the scripting language for RPG Maker, when you have to delve into that engine’s inner workings. Really, if you’re not using RPG Maker, I don’t see any reason to bother with Ruby, but you might see things differently.

  • JavaScript variants (Phaser): Some people (and corporations), fed up with JavaScript’s limitations, decided to improve it. But they each went their own direction, resulting in a bewildering array of languages: CoffeeScript, TypeScript, LiveScript, Dart, and Coco, to name a few. For a game developer, the only one directly of any use is TypeScript, because Phaser supports it as a secondary language. They all compile into JS, though, so you can choose the flavor you like.

If there’s anything I missed, let me know. If you disagree with my opinions (and you probably do), tell me why. Any other suggestion, criticism, or whatever can go in the comments, too. The most important thing is to find something you like. I mean, why let somebody else make your decisions for you?