Godot Engine 2.0 released

Finally!

I’ve been saying for a while now that I think Godot is one of the best game engines around for indie developers. It’s open source, it’s free, you never have to worry about royalties—all it really needed was a bit more polish. Well, version 2.0 is out, and that brings some of that much-needed polish. Downloads and changelogs are at the link above, but I’ll pick a few of the improvements that stand out to me.

Scenes

Godot is, for lack of a better term, a scene-based game engine. Scenes are the core construct, and the engine has always been built around making them easy yet powerful. With 2.0, that’s now even more true.

Thanks to the new additions to scene instancing, Godot scenes got even better. Now, every scene in a Godot game is, to put it in Unity terms, a prefab. If you’ve used Unity, you know how helpful prefabs can be; Godot gives them to you for free. Basically, every instance of a scene can be edited in any way. All of its child nodes, including sub-scenes, are there for the changing.

It gets better, because now scenes can be inherited, too. The obvious use for this is a “base” object that is slightly altered to quickly create others. Enemies with subtle AI or animation changes, for example, or palette-swapped pickups. But I’m sure you can find plenty of other ways inheritance can help you. I mean, it wouldn’t be used so much in programming if it weren’t useful.

The editor

Without the editor, Godot would be nothing more than Yet Another Engine. But it does have the editor, and that’s one of its biggest draws. The new version gives the editor a major overhaul, adding tons of new features. It’ll take some time to work out how—and how much—they help, but it’s hard to imagine that they won’t.

The most important, from my view, are multiple scene editing and the new Script view. One of the biggest pains of working with Godot was the constant need to switch between scenes. They’re the engine’s central construct, yet you could only have one of them open at a time? No more, and that change alone will probably double your productivity.

The dissociation of the script editor from the scene editor turns Godot into more of an IDE. That will make it seem more familiar to people coming from code-heavy engines, for one thing. But it also means that we can keep multiple scripts open across scene changes. Again, the time-consuming context switch when editing was one of my main gripes with Godot’s editor. Now it’s gone.

Live editing

This one deserves its own section. Live editing is, simply put, the ability to edit your game while it’s running. I’ll have to try it out to see how well it works, but if it does, this is pretty huge. Especially in the later stages of development, fine-tuning can take forever if you’re constantly going through the edit-compile-run cycle. If Godot can take even some of that pain away…wow.

Combine this with the improvements to the debugger, including a video RAM view and collision/navigation debugging, and it gets even better. Oh, and if you’re working on a newer Android game, you can even have live editing on the device.

The announcement at the Godot homepage has a video of live editing in action. I suggest watching it.

The rest

Godot version 2.0 is a massive update. Those features I mentioned are only the best parts, and there are a lot of minor additions and changes. Some of them are of…questionable benefit, in my opinion (I’m not sold on heatmaps in the list of open scripts, for instance, and why not use JSON for your scene’s text format, like everyone else?), but those are far outweighed by the undeniable improvements.

I’ve said it before, and I’ll say it again. If you’re an indie game dev, especially if you’re focusing on 2D games, you owe it to yourself to check out Godot. It really is one of the best around for that niche. And it’s not like it’ll cost you anything.

Thoughts on Haxe

Haxe is one of those languages that I’ve followed for a long time. Not only that, but it’s the rare programming language that I actually like. There aren’t too many on that list: C++, Scala, Haxe, Python 2 (but not 3!), and…that’s just about it.

(As much as I write about JavaScript, I only tolerate it because of its popularity and general usefulness. I don’t like Java for a number of reasons—I’ll do a “languages I hate” post one of these days—but it’s the only language I’ve written professionally. I like the idea of C# and TypeScript, but they both have the problem of being Microsoft-controlled. And so on.)

About the language

Anyway, back to Haxe, because I genuinely feel that it’s a good programming language. First of all, it’s strongly-typed, and you know my opinion on that. But it’s also not so strict with typing that you can’t get things done. Haxe also has type inference, and that really, really helps you work with a strongly-typed language. Save time while keeping type safety? Why not?

In essence, the Haxe language itself looks like a very fancy JavaScript. It’s got all the bells and whistles you expect from a modern language: classes, generics, object literals, array comprehensions, iterators, and so on. You know, the usual. Just like everybody else.

But there are also a few interesting features that aren’t quite as common. Pattern matching, for instance, which is one of my favorite things from “functional” languages. Haxe also has the idea of “static extensions”, something like C#’s extension methods, which allow you to add extra functionality to classes. Really, most of the bullet points in the Haxe manual’s “Language Features” section are pretty nifty, and most of them are in some way connected to the type system. Of all the languages I’ve ever used, only Scala comes close to Haxe in showing me the power and necessity of types.

The platform

But wait, there’s more. Haxe is cross-platform, in its own special way. Strictly speaking, there’s no native output. Instead, you have a choice of compilation targets, and some of these can then be turned into native binaries. Most of these let you “transpile” Haxe code to another language: JavaScript, PHP, C++, C#, Java, and Python. There’s also the Neko VM, made by Haxe’s creator but not really used much, and you can even have the Haxe compiler spit out ActionScript code or a Flash SWF. (Why you would want to is a question I can’t answer.)

The standard library provides most of what you need for app development, and haxelib is the Haxe-specific answer to NPM, CPAN, et al. A few of the available libraries are very good, like OpenFL (basically a reimplementation of the Flash API). Of course, depending on your target platform, you might also be able to use libraries from NPM, the JVM, or .NET directly. It’s not as easy as it could be—you need an extern interface class, a bit like TypeScript—but it’s there, and plenty of major libraries already have externs written for you.

The verdict

Honestly, I do like Haxe. It has its warts, but it’s a solid language that takes an idea (types as the central focus) and runs with it. And it draws in features from languages like ML and Haskell that are inscrutable to us mere mortals, allowing people some of the power of those languages without the pain that comes with trying to write something usable in a functional style. Even if you only use it as a “better” JavaScript, though, it’s worth a look, especially if you’re a game developer. The Haxe world is chock full of code-based 2D game engines and libraries: HaxePunk, HaxeFlixel, and Kha are just a few.

I won’t say that Haxe is the language to use. There’s no such thing. But it’s far better than a lot of the alternatives for cross-platform development. I like it, and that’s saying a lot.

Cooldowns

A lot of games these days have embraced a real-time style of fighting involving powers or other special abilities that, once activated, can’t be used again for a specific amount of time. They have to “cool down”, so to speak, leading the waiting period to be called a cooldown. FPS, RTS, MOBA…this particular style of play transcends genres. It’s not only for battles, either. Some mobile games have taken it to the extreme, putting even basic gameplay on a cooldown timer. Of course, if you don’t mind dropping a little cash, they’ll gladly let you cut out the waiting.

A bit of history

The whole idea of cooldowns in gaming probably goes back to role-playing games. In RPGs, combat typically works by rounds. Newer editions of D&D, for example, use rounds of 6 seconds. A few longer actions can be done, resulting in your “turn” being skipped in the following round, but the general ratio is one action to one round. This creates a turn-based style of play that usually isn’t time-sensitive. (It can be, though. Games such as Baldur’s Gate turn this system into one supporting real-time action.)

A more fast-paced, interactive style comes from the “Active Time” battles in some of the Final Fantasy games. This might be considered the beginning of cooldowns, at least in popular gaming. Here, a character’s turn comes around after a set period of time, which can change based on items, spells, or a speed stat. Slower characters take longer to fill up their “charge time”, and Haste spells make it fill faster.

Over the past couple of decades, developers have refined and evolved this system into the one we have today. Along the way, some of them have largely discarded the suspension of disbelief and story reasoning for cooldowns. Especially in competitive gaming, they’re nothing more than another mechanic like DPS or area of effect. But they are pretty much everywhere these days, in whatever guise, because they serve a useful purpose: forcing resource management based on time.

Using cooldowns

At the most basic level, that’s what cooldowns are all about. They’re there for game balance. Requiring you to wait between uses of your “ultimate” ability means you have to learn to use the smaller ones. Limiting healing powers to one use every X seconds gives players a reason to back off from a bigger foe; it also frees you from the need to place (and plan for) disposable items like potions. Conversely, if you use cooldowns extensively in your game, you have to make sure that the scenarios where they come into play are written for them.

On the programming side, cooldown timers are fairly easy to implement. Most game engines have some sort of timer functionality, and that’s a good base to build from. When an ability is used, set the timer to the cooldown period and start it. When it signals that it’s finished, that means that the ability is ready to go again.

But to better illustrate how they work—and because not every game engine likes having dozens or hundreds of timers running at once—here’s a different approach. We’ll start with a kind of “cooldown object”:

class CooldownAbility {
    // ...

    void activateAbility();           // use the ability and start cooling down

    void updateTimer(int timeDelta);  // advance the cooldown clock

    int defaultCooldown;   // the ability's normal cooldown period
    int cooldown;          // the current period, after any buffs or penalties
    int coolingTime;       // time elapsed since the last activation
    bool isCoolingDown;    // true while the ability is unusable
};

(This is C++-like pseudocode made to illustrate the point. I wouldn’t write a real game like this.)

activateAbility should be self-explanatory. It would probably have a structure like this:

void activateAbility() {
    // do flashy stuff
    // ...

    // start the cooldown period
    coolingTime = 0;
    isCoolingDown = true;
}

The updateTimer method here does just that. Each time it’s called, it adds the timeDelta value (this should be the time since the last update) to coolingTime, and checks whether it has reached the cooldown limit:

void updateTimer(int timeDelta) {
    coolingTime += timeDelta;

    // Once enough time has passed, the ability is ready again.
    isCoolingDown = (coolingTime < cooldown);
}

Most games have a nice timer built right in: the game loop. And there’s likely already code in there for keeping track of the time since the last run of the loop. It’s simple enough to hook that into a kind of “cooldown manager”, which runs through all of the “unusable” abilities and updates the time since last use. That might look something like this:

for (auto&& cd : allCooldowns) {
    cd.updateTimer(timeThisFrame);

    if (!cd.isCoolingDown) {
        // tell the game that the ability is ready
    }
}

(Also, the reason I gave this object both a cooldown and a defaultCooldown is so that, if we wanted, we could implement power-ups that reduce cooldown or penalties that increase it.)

Implementing this same thing in an entity-component engine can work almost the same way. Abilities could be entities with cooldown components, and you could add in a system that does the updating, cooldown reduction/increase, etc.
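
For instance, here’s a minimal sketch of what that might look like, with hypothetical component and system names:

#include <vector>

struct CooldownComponent {
    int cooldown;        // full cooldown period
    int coolingTime;     // time elapsed since activation
    bool isCoolingDown;  // true while the ability is unusable
};

// A "system" the game loop calls once per frame for every cooldown component.
void cooldownSystem(std::vector<CooldownComponent>& components, int timeDelta) {
    for (auto& c : components) {
        if (!c.isCoolingDown) {
            continue;  // nothing to update
        }

        c.coolingTime += timeDelta;
        c.isCoolingDown = (c.coolingTime < c.cooldown);
    }
}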

For a certain style of game, timed resource use makes sense. It makes gameplay better. It opens up new tactics, new strategies, especially in multiplayer gaming. And while it takes a lot of design effort to keep a cooldown-based game balanced and fun, the code isn’t that hard at all. That’s especially good news for indie devs, because they get more time to spend on the balancing part.

Software internals: Arrays

I’m a programmer. I think that should be obvious. Even though most of my coding these days is done at a pretty high level, I still enjoy the intricacies of low-level programs. It’s great that we have ways to hide the complexities inherent in software, but sometimes it’s fun to peel back the layers and look at just how it works. That’s the idea behind this little series of posts. I want to go into that deeper level, closer to the metal. First up, we’ll take a peek at that simplest and dumbest of data structures: the array.

Array, array

At the basic level, an array is nothing more than a sequence of values. Different languages have different rules, especially regarding types, but the idea is always the same. The values in an array are its elements, and they can be identified by an index. In C, for example, a[0] has the index 0, and it refers to the first element of the array named a. (C-style languages start counting from 0. We’ll see why shortly.)

For the purposes of this post, we’ll start with the simplest kind of array, a one-dimensional array whose values are all of the same type—integers, specifically. Later, we can expand on this, but it’s best to start small. Namely, we’ll have an array a with four values: {1, 2, 3, 4}. Also, we’ll mainly be using lower-level languages like C and C++, since they give the best look at how the code really runs.

In memory

One of the main reasons to use something like C++ is because of memory concerns, so let’s look at how such an array is set up in memory. On my PC, using 64-bit Debian Linux and GCC 5.3, it’s about as simple as can be. The compiler knows all the values beforehand, so all it does is put them in a “read-only data” section of the final executable. (In the assembly output, this shows up as .long statements in the .rodata section.) The elements of the array are in contiguous locations; that’s not just required by the C standard, but by the very definition of an array. It also makes them fast, especially when cache comes into play.

In C++, 4 integers in an array take up the space of, well, 4 integers. With GCC on 64-bit Linux, an int is 4 bytes (the same as on 32-bit), so that’s 16 bytes in all. There’s no overhead, because an array at this level is literally nothing more than a sequence of memory locations.

That contiguous layout makes working with the array trivial. Given an array a of n-byte elements, the first element—index 0—is at the same address as the array itself (&(a[0]) == &a in C parlance). To find any other one, all you have to do is multiply the index by the size of each element: &(a[i]) == (char *)&a + i * sizeof(int). Addition is just about the fastest thing a processor does, and element sizes that are powers of 2 make the multiplication nothing more than a bit shift, so array indexing is hard to beat.
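
To see all of that in action, here’s a minimal, runnable example:

#include <cstdio>

int main() {
    int a[] = {1, 2, 3, 4};

    // Each element sits exactly sizeof(int) bytes past the previous one.
    for (int i = 0; i < 4; ++i) {
        printf("a[%d] = %d at address %p\n", i, a[i], (void *)&a[i]);
    }

    // Indexing really is just pointer arithmetic: a[i] is *(a + i).
    printf("%d\n", a[2] == *(a + 2));  // prints 1

    return 0;
}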

Copying these arrays is easy, too: copy each element, and you’re done. Want to compare them? Nothing more than going down the line, looking for differences. Sure, that takes linear—O(n)—time, but it’s a great start. Of course, there are downsides, too. Arrays like this are fixed in size, and they all have to be the same type.

Complicating matters

There’s not much more to be said for the humble array, so let’s add some kinks. To start, what do you do if you don’t know all the values to begin with? Then, you need an uninitialized array, or a buffer. Compilers typically use a trick called a BSS segment to make these, while higher-level languages tend to initialize everything to a null value. Either way, all you really get is a block of memory that you’re expected to fill in later.

Changing the other assumptions of the array (fixed size and type) means changing the whole structure. Dynamically-sized arrays, like C++’s vector, need a different way of doing things. Usually, this means something like having an internal array—with a bit of room to grow—and some bookkeeping data. That gets into dynamic memory allocation, another topic for later, but from the programmer’s point of view, they work the same way. In fact, vector is required to store its elements contiguously, so it can act as a drop-in replacement for an array almost anywhere. (If you want arrays where elements can be of different types, like in JavaScript, then you have to abandon the simple mathematical formula and its blazing speed. At that point, you’re better off ditching the “array” concept completely.)
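
You can watch that “room to grow” bookkeeping happen with a few lines of C++ (the exact growth pattern is up to the implementation):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;

    // size() counts the elements; capacity() is the internal array's room.
    for (int i = 0; i < 16; ++i) {
        v.push_back(i);
        std::cout << "size " << v.size()
                  << ", capacity " << v.capacity() << '\n';
    }

    return 0;
}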

Moving up to higher levels doesn’t really change how an array functions. At its core, it’s still a sequence of values. One of JavaScript’s newer features is the typed array, which is exactly that. It’s intended to be used where speed is of the essence, and it’s little more than a friendlier layer on top of C-style arrays.

Implementation details

Just about every usable language already has something like an array, so there’s almost never a need to make one yourself. Indeed, in most languages it’s nearly impossible to do so. But maybe you’re working in assembly language. There, you don’t have the luxury.

Fixed-size arrays are nothing more than blocks of memory. If your array has n elements, and each one is size s, then you need to reserve n * s bytes of memory. That’s it. There’s your array. If you need it initialized, then fill it in as necessary.

Element access uses the formula from above. You need to know the address a of the array, the element size s, and the index i. Then, accessing an element is nothing more than loading the value at a + i * s. Note, though, that this means elements are numbered starting at 0. (And that formula is exactly why they are, for that matter.)
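
In C-like terms, that access formula comes out to something like this (a sketch of the assembly’s job, with a hypothetical raw block standing in for the array):

#include <cstdint>
#include <cstring>

const int n = 4;                        // number of elements
const std::size_t s = sizeof(int32_t);  // size of each element
uint8_t block[n * s];                   // the "array": just n * s bytes

int32_t load(int i) {
    int32_t value;
    // Load s bytes starting at address block + i * s (the formula from above).
    std::memcpy(&value, block + i * s, s);
    return value;
}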

Since arrays are dumb, you can pass them around as blocks of memory, but you always need to know their size. If you’re not careful, you can easily get buffer overflows and other out-of-bounds conditions. That’s the reason why so many “safe” C functions like snprintf take an extra “maximum size” argument. The array-as-memory-block notion, incidentally, is why C lets you treat pointers and arrays as the same thing.

The end

The array, in whatever form, is the most basic of data structures, so it made for a good starting point. I hope it set the tone for this series. Later on, I’ll get into more complicated structures and algorithms, like linked lists, sorting, and so on. It’s all stuff that programmers in something like JavaScript never need to worry about, but it’s fun to peek under the hood, isn’t it?

Thoughts on types

Last week, I talked about an up-and-coming HTML5 game engine. One of the defining features of that engine was that it uses TypeScript, not regular JavaScript, for its coding. TypeScript has its problems (it’s made by Microsoft, for one), but it cuts to the heart of an argument that has raged for decades in programming circles: strong versus weak typing.

First off, here’s a quick refresher. In most programming languages, values have types. These can be simple (an integer, a string of text) or complex (a class with a deep inheritance hierarchy and 50 or so methods), but they’re part of the value’s identity. Variables can have types, too, but different languages handle that in different ways. Some require you to set a variable’s type when it is first defined, and they strictly enforce that type. Others are more lenient: if x holds the value 123, it’s an integer; if you set it to "foo", then it becomes a string. And some languages allow you to mix types in an expression, while others will throw errors the minute you even dare add a string to a number.

A position of strength

I’m of two minds on types. On the one hand, I do think that a “strong” type system, where everything knows what it is and conversions must be explicit, is good for the specific kind of programming where data corruption is an unforgivable sin. The Ada language, one of the most notorious for strict typing, was made that way for a reason: it was intended for use in situations where errors are literally life-threatening.

I also like the idea of a strongly-typed language because it can “write itself” in a sense. That’s one of the things Haskell supporters are always saying, and it’s very reminiscent of the way I solved a lot of test questions in physics class. For example, if you know your answer needs to be a force in newtons (kg m/s²), and you’re given a mass (kg), a velocity (m/s), and a time (s), then it’s pretty obvious what you need to do. The same principle can apply when you’ve got code that returns a type constructed from a number of seemingly unrelated ones: figure out the chain that takes you from A to B. You can’t really do that in, say, JavaScript, because everything can return anything.

And strong types are an extra form of documentation, something sorely lacking in just about every bit of code out there. The types give you an idea of what you’re dealing with. If they’re used right, they can even guide you into using an API properly. Of course, that puts more work on the library developer, which means it’s less likely to actually get done, but it’s a nice thought.

The weak shall inherit

In a “weak” type system, objects can still have types, but variables don’t. That’s the case in JavaScript, where var x (or let x, if you’re lucky enough to get to use ES6) is all you have to go on. Is it a number? A string? A function? The answer: none of the above. It’s a variable. Isn’t that enough?

I can certainly see where it would be. For pure, unadulterated hacking, give me weak typing. Coding goes so much faster when you don’t have to constantly ask yourself what something should be. Scripting languages tend to be weakly-typed, and that’s probably why. When you know what you’re working with, and you don’t have to worry as much about error recovery, maintenance, or things like that, types only get in the way.

Of course, once I do need to think about changing things, a weakly-typed language starts to become more of a hindrance. Look at any large project in JavaScript or Ruby. They’re all a tangled mess of code held together by layers of validation and test suites sometimes bigger than the project itself. It’s…ugly. Worse, it creates a kind of Stockholm Syndrome where the people developing that mess think it’s just fine.

I’m not saying that testing (or even TDD) is a bad thing, mind you. It’s not. But so much of that testing is unnecessary. Guys, we’ve got a tool that can automate a lot of those tests for you. It’s called a compiler.

So, yeah, I like the idea of TypeScript…in theory. As programmers look to use JavaScript in “bigger” settings, they can’t miss the fact that it’s woefully inadequate for them. It was never meant to be anything more than a simple scripting language, and it shows. Modernizing efforts like ES5 and ES6 help, but they don’t—can’t—get rid of JavaScript’s nature as a weakly-typed free-for-all. (How bad is it? Implicit conversions have become accepted idioms. Want to turn n into a number? The “right” way is +n! Making a string is as easy as n+"", and booleans are just !!n.)

That’s not to say strong typing is the way to go, either. Take that too far, and you risk the opposite problem: losing yourself in conversions. A good language, in my opinion, needs a way to enforce types, but it also needs a way to not enforce them. Sometimes, you really do want an “anything”. Java’s Object doesn’t quite work for that, nor does the C answer of void *. C++ is getting std::any soon, or so they say; that’ll be a step up. (Note: auto in C++ is type inference. That’s a different question, but I personally think it’s an absolute must for a strongly-typed language.) But those should be used only when there’s no other option.

There’s no right answer. This is one of those debates that will last forever, and all I can do is throw in my two cents. But I like to think I have an informed opinion, and that was it. When I’m hacking up something for myself, something that probably won’t be used again once I’m done, I don’t want to be bothered with types; they take too much time. Once I start coding “for real”, I need to start thinking about how that code is going to be used. Then, strong typing saves time, because it means the compiler takes care of what would otherwise be a mound of boilerplate test cases, leaving me more time to work on the core problem.

Maybe it doesn’t work for you, but I hope I’ve at least given you a reason to think about it.

First glance: Superpowers

It seems like each new day brings us a new tool for game development, whether it’s an engine, a framework, a library, or any of a number of other things. Best of all, many of these up-and-coming dev tools are open source and free, so even the poorest game makers (like me!) can use them. And the quality is only going up, too. No longer must indies be content with alpha-level, code-only engines, uncompiled libraries, and NES-era assets. No, even the zero-cost level of game development is becoming packed with tools that even professionals of a few years ago wished they could have had.

The one I’m looking at today is called Superpowers, by Sparklin Labs. It’s yet another HTML5 game maker that has recently been released as open source software, under the ISC license. (ISC is functionally equivalent to MIT or “new” BSD; basically, it’s not much more than “do what you want, but give us credit”.) It’s not entirely a volunteer effort, and there are a couple of options for supporting it. Their download host, indie game publisher itch.io, gives you a donation option, but the primary way to send money is through Patreon. (There’s a link on the Superpowers main page.)

Let’s take a look

What does Superpowers bring to the table? Well, first of all, it’s an HTML5 engine. The maker itself runs as a nativized web app, and games can be compiled into standalone apps or exported in a browser-friendly format. There’s also a mobile option using the Intel XDK, but I haven’t really looked into that.

Second, and even more important, is the fact that this engine comes with a visual editor. That’s something sorely lacking in the free HTML5 arena. Granted, it’s not exactly up to the level of the editors for Unity or Unreal, but it’s much better than what we had, i.e., not much. It’s got all the usual bells and whistles: tree-based scene management, component editors (these seem a little buggy on my machine, but that’s probably just a local thing), drag-and-drop actors, and so on. For what’s technically still a beta, it’s pretty nice.

Coding works about the same way. You can attach scripts to the various parts of a scene, and they’ll know what to do. The whole thing is mostly behavior-driven, following the component style that is so popular these days. The scripts themselves are written in TypeScript, and I’m a little ambivalent about that. On the one hand, it’s an easier way of writing JavaScript (Superpowers is HTML5-based, so it’s going to end up as JavaScript eventually). On the other, TypeScript is a Microsoft project, so you never know what might happen.

One of the big features that looks interesting is the collaboration support. The Superpowers “app” has a client-server architecture, and it takes advantage of it. When you start it, it creates a server. Now, that’s pretty common in Node applications, but Superpowers actually uses it. After a little initial setup, you can have other people connect to your editor instance and work with you in real-time. I can’t tell you how well that works, since I’m just a lonely guy, but if it comes anywhere close to what’s advertised, then…wow.

There’s a lot more than this, and what I’m seeing looks very good. There’s support for prefabs (like those in Unity) in the editor, for instance, and the engine has all the usual suspects: 2D physics, multiple cameras, etc. Debugging works like in Chrome, since the whole thing runs on NW.js. (IMO, Chrome is a horrible browser, but an okay wrapper for web apps. The developer tools aren’t half bad, though.)

That’s not to say that Superpowers is perfect. Far from it. It’s early in development, and there’s bound to be a few unsquashed bugs here and there. There’s also the TypeScript dependency I mentioned above, but they’re working on that; the developers have an alpha (I think) version of the editor using Lua and the LÖVE engine. And, being on GitHub, I noticed a “Code of Conduct” file, which could be worrisome to free-speech advocates like myself. Also, there’s no online API documentation. You’re supposed to use the editor’s built-in docs. The developers’ reasoning (it boils down to “But there might be plugins!”) sounds weak to my ears. Every other HTML5 engine can do it, so why not this one?

In the end, I think the good outweighs the bad. Give it some time, and Superpowers might become one of the go-to tools for making indie games. Or it could bomb, and we’ll never hear from it again. I doubt that, though. Give me some proper online API docs, support for multiple languages (including pure JavaScript, preferably of the ES6 variety), and quite a bit more polish, and I’ll gladly put it up there with Phaser at the top of the list. For now, I’ll definitely be keeping an eye on this one.

Randomness and V8 (JS)

So I’ve seen this post linked in a few places recently, and I thought to myself, “This sounds interesting…and familiar.”

For those of you who don’t quite have the technical know-how to understand what it means, here’s a quick summary. Basically, the whole thing is a flaw in the V8 JavaScript engine’s random number generator. V8, if you don’t know, is what makes JavaScript possible for Chrome, Node, and plenty of others. (Firefox uses a different engine, and Microsoft’s browsers already have far bigger problems.) In JavaScript, the RNG is accessed through the function Math.random(). That function is far from perfect as it is. There’s no need to make it worse.

But V8, until one of the newest versions (4.9.40), actually did make it worse. An outline of their code is in the post above, but the main problems with it are easy to explain. First, Math.random() returns JavaScript numbers—i.e., 64-bit floating-point numbers—between 0 and 1. The way those numbers work leaves the algorithm 52 bits to play with, but V8’s code worked by converting a 32-bit integer into a floating-point number. That’s a pretty common operation, and there’s nothing really wrong on the face of it. Well, except for the part where you’re throwing away 20 out of your 52 random bits.

Because of the way V8’s RNG algorithm (MWC1616) works, this gets even better. MWC stands for “multiply with carry”, an apt description of what we’re dealing with. Internally, the code has two state variables, each a 32-bit unsigned integer, or uint32_t. These start off as seeded values (JavaScript programmers have no way of influencing this part, unfortunately), and each one undergoes a simple transformation: the low 16 bits are multiplied by one of two “magic” constants, then added to the high 16 bits. The function then builds its result in two parts: the upper half comes from one state variable’s lower half, and the lower 16 bits are taken from the other state variable’s lower half.
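
Put into C++, the scheme looks roughly like this. (This is a sketch based on published descriptions of MWC1616, not V8’s actual source; the magic constants are the commonly cited ones.)

#include <cstdint>

// The two 16-bit multiply-with-carry halves. Seeds must not be zero.
uint32_t state0 = 1;
uint32_t state1 = 2;

uint32_t mwc1616() {
    // Low 16 bits times a magic constant, plus the high 16 bits.
    state0 = 18030 * (state0 & 0xffff) + (state0 >> 16);
    state1 = 30903 * (state1 & 0xffff) + (state1 >> 16);

    // The result: state0's low half on top, state1's low half on the bottom.
    return (state0 << 16) | (state1 & 0xffff);
}

double nextRandom() {
    // Only 32 of the 52 available mantissa bits ever get used.
    return mwc1616() / 4294967296.0;  // divide by 2^32 to land in [0, 1)
}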

The whole thing, despite its shell-game shifting of bits, is not much more than a pair of linear congruential generators welded together. LCGs have a long history as random generators, because they’re easy to code, they’re fast, and they can give okay randomness for simple applications. But now that JavaScript is being used everywhere, the cracks are starting to appear.

Since V8’s Math.random() implementation uses 32-bit numbers and none of the “extra” state found in more involved RNGs, you’re never getting more than 2^32 random numbers before they start repeating. And I do mean repeating, as linear congruential generators are periodic functions. Given the same state, they’ll produce the same result; generate enough random numbers, and you’ll repeat a state, which restarts the cycle. But that 2^32 is a maximum, and you need some planning to get it. The magic numbers that make an LCG work have to be chosen carefully, or you can sabotage the whole thing. All the bit-shifting tricks are little more than a distraction.

So what can you do? Obviously, you, as a user of Chrome/Node/V8/whatever, can upgrade. The new algorithm they’re using is xorshift128+, which is highly regarded as a solid RNG for non-cryptographic work. (If you’re interested in how it works, but you don’t think you can read C++, check out the second link above, where I roll my own version in JavaScript.) Naturally, this doesn’t fix all the other problems with Math.random(), only the one that caused V8’s version of it to fail a bunch of the statistical tests used to quantify how “good” a specific RNG is. (The linked blog post has a great visualization to illustrate these.) Seeded, repeatable randomness, for example, you’ll still have to handle yourself. But what we’ve got is good enough for a lot of purposes, and it’s now a better foundation to build upon.
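
For comparison, here’s xorshift128+ in C++, following the published algorithm (seeding is left out; the state must not be all zeros):

#include <cstdint>

uint64_t s[2];  // 128 bits of state, seeded elsewhere

uint64_t xorshift128plus() {
    uint64_t x = s[0];
    const uint64_t y = s[1];

    s[0] = y;
    x ^= x << 23;
    s[1] = x ^ y ^ (x >> 17) ^ (y >> 26);

    // The "+" in the name: the output is a sum, not the raw state.
    return s[1] + y;
}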

Looking forward to 2016

So, it’s a new year. The slate has been cleaned. We can put 2015 behind us, and look ahead to 2016. From a programming point of view, what does this new year hold? Let’s take a look.

Programming languages

This year should be an exciting one if you like programming languages for their own sake.

  • JavaScript: Most everybody is using a browser capable of most of ECMAScript 5 (ES5). By the end of the year, expect both parts of that to increase. More people are going to have modern capabilities, and the browsers themselves will cover more of the standard. Speaking of standards, I’d look to ES6 support becoming more widespread in browsers, even Microsoft’s.

  • C++: The next revision of the C++ standard, C++17, is still a ways away. (You’re crazy if you’re betting on it actually coming out in 2017. Remember, C++11 was codenamed C++0x for years.) However, we should start seeing parts of it becoming fixed. Ranges are a big thing right now, concepts are coming (finally!), and it looks like C++ might get some sort of compile-time reflection. Things are looking up, but we’re not there yet.

  • Perl: I’m serious. Perl 6 is out. (I think that leaves Star Citizen as the final harbinger of Armageddon.) In the works for a decade and a half, with a set of operators best described by a periodic table, and seemingly designed to be impossible to implement, who can’t love Perl? At the very least, it’ll be fun to write in, and the new, incompatible version will spur a new generation of Perl golfers and other code artists. But I think it might turn out a bit like Python 3. Perl 5 has history, and that’s not going away.

  • The rest: I can easily see Rust gaining a bigger cult following over the next year, especially at places like Github. PHP has their version 7, but the less said about PHP, the better. C# and Java are going to be simmering for another twelve months, at least, and I don’t see much new news coming out of either of them. Ruby will continue its slow slide into irrelevance, probably dragging Python with it. (I wouldn’t mind them taking Haskell along, but I digress.) Newcomers will arise, and I’d say we’re in for another round of “visual coding”. And hey, maybe this will finally be the year for Scala.

Hardware and the like

The big thing on everyone’s lips right now is Vulkan, the official successor to OpenGL. It was supposed to be out in time for Christmas, but it got pushed back. (Funnily enough, the same thing happened two years ago with Kaveri, AMD’s first processor line that could support Vulkan. But I’m not bitter.) Personally, I don’t see much out of Vulkan this year. It’ll be released, and we’ll see a few early, buggy drivers and experimental alphas of games, most of which will be glorified tech demos. I’d give it till 2018 before I start worrying about replacing OpenGL.

Tiny computers are going to get bigger this year, I think. I mean that in a figurative way, of course. The Raspberry Pi 2 is the big name in this field, but you’ve also got the BeagleBone and things like that, not to mention the good old Arduino. However you look at it, it’s a mature area. We’ve moved beyond revolution; now it’s time for evolution. These computers will get more powerful, easier to use, and more ubiquitous. Next Christmas, I can easily see a stick computer being like this year’s quadcopters.

On the other hand, as much as I hate to say it, I’m not holding out a lot of hope for 3-D printing. We’ve been hearing about it for half a decade, and there has definitely been incremental progress. But 2016, in my opinion, is not going to be the year we see inexpensive 3-D printers flying off the shelves. They’ll stay in the background. (The whole “Internet of Things”, however, will only grow, but it’s not intended to be programmable, so it doesn’t help us.)

Libraries, engines, etc.

Look for Unity and Unreal to continue their competition, with a bunch of smaller guys chomping at the bit. Godot, assuming they don’t screw themselves over by switching to Vulkan prematurely, might get a boost as the indie engine of choice. And JavaScript engines have near-infinite upside, especially for mobile coding. Game development in 2016 will be like it was in 2015, but better in every way.

I do think the Node.js fad is dying down, and not a moment too soon. That doesn’t mean Node is done, only that I see people evaluating it for what it is, rather than what it’s advertised as. It’s the same thing as Ruby a few years ago, back in the early days of Rails. Or JavaScript and Angular a couple of years ago, for that matter. Still, Node is a solid platform for a lot of things. It’s not going away, but this is the year that it fades from the spotlight.

The same can be said for the current crop of JS web frameworks. There’s no chance of the whole Internet getting behind a single framework, nor two or even ten. But this is an area where the churn is so great, what’s popular next December hasn’t even been written yet. I can tell you that it’ll be slower, more bloated, and less comprehensible than what’s out there, though.

In the end

For programming, 2016 has a lot to look forward to, and I’ve barely scratched the surface here. (I haven’t even mentioned learning to code, which will get even bigger this coming year.) Whether native or browser, desktop or mobile, it’s a good time to code.

First glance: Unreal.js

With Christmas coming up, I don’t exactly have the time to write those 2,000-word monologues that I’ve been doing. But I want to post something, if only because laziness would intervene otherwise. Inertia would take over, and I’d stop posting anything at all. I know it would happen. I’ve done it once before. So, these last few Wednesdays of 2015 will be short and sweet.

This time around, I want to talk about something I recently found on a wonderful site called Game From Scratch. It’s called Unreal.js, and it’s open-source (Apache license). What does it do? Well, that’s the interesting thing, and that’s what I’m going to ramble on about.

You’ve probably heard of UnrealEngine. It’s the latest iteration of the game engine used to power a wide array of games, from Unreal Tournament to AAA titles like the newest Street Fighter and Tekken to hundreds of up-and-coming indie games. The most recent version, UnrealEngine 4, is getting a lot of press mainly because of its remarkably open development and friendly pricing scheme. (Compared to other professional game engines, anyway.) Lately, Unreal has become a serious competitor to Unity for the middle tier of game development, and competition is an indisputably good thing.

But Unreal has a problem compared to Unity. You see, Unity runs on Microsoft’s .NET framework. (Strictly speaking, it runs on Mono, which is a Microsoft-approved knockoff of .NET that used to be fully open, to the point where most Linux distributions preinstalled it a few years ago. Now…not so much.) Anyway, Unity uses .NET, and one of the nifty things about .NET is that, like the JVM, it’s not restricted to a single language. Sure, you’re most likely to use C#, but you don’t have to. Unity explicitly supports JavaScript, and it used to have full support for a Python clone called Boo. (Supposedly, there are ways to get other languages like F# to work with it, but I don’t know why anyone would want to.)

Unreal, on the other hand, uses C++. From a performance perspective, that’s a great thing. C++ is fast, it can use far less memory than even C#, and it’s closer to the hardware, making it easier to take advantage of platform-specific optimizations. However, C++ is (in my experienced opinion) one of the hardest programming languages to learn well. It’s also fairly ugly. The recent C++11 standard helps a lot with both of these problems, but full support just isn’t there yet, even 4 years later. C++17 looks like it will go a few steps further in the “ease of use” direction, but you’ll be lucky to use it before 2020.

The makers of UnrealEngine know all of this, so they included a “visual” programming language, Blueprints. Great idea, in theory, but there are a lot of languages out there that you don’t need to invent. Why not use one of them? Well, that’s where Unreal.js comes in. Its developers (some guys called NCSoft; you may have heard of them) have made a plugin that connects the V8 JavaScript engine from Chrome/Node.js/everywhere into Unreal. The whole thing is still in a very early stage, but it’s shaping up to be something interesting.

If Unreal.js takes off, then it can put Unreal well ahead of Unity, even among hobbyists and lower-end indies. JavaScript is a lot easier on the brain than C++ (take it from someone who knows both). And it has a huge following, not just for webapps and server stuff. The Unreal.js project page claims support for “(Full) access to existing javascript libraries via npm, bower, …”

That’s huge. Sure, not all npm packages are of the highest quality, but there are plenty that are, and this would let you use all of them to help make a game. Game engines, historically, have been some of the worst about code reuse, 3rd-party libraries, and other niceties that “normal” applications get to use. Well, that can change.

And then there’s one other factor: other languages. Since Unreal.js is pretty much just the V8 engine from Node, and it can load most Node packages, that opens the possibility of using some of the many “transpiled” languages that are transformed to Node-friendly JavaScript. Think CoffeeScript, TypeScript (which recently released its new 1.7 version), or even my April Fools’ Day joke language Elan.

Maybe I’m wrong. Maybe Unreal.js will fizzle. Perhaps it’s destined to join the legions of other failed attempts at integrating game development with the rest of the programming world. I hope not. The past few years have seen a real move in the direction of democratizing the art of game-making again. I’d like to see that trend continue in 2016 and beyond.

Programming paradigm primer

“Paradigm”, as a word, has a bad reputation. It’s one of those buzzwords that corporate people like to throw out to make themselves sound smart. (They usually fail.) But it has a real meaning, too. Sometimes, “paradigm” is exactly the word you want. Like when you’re talking about programming languages. The alliteration, of course, is just an added bonus.

Since somewhere around the 1960s, there’s been more than one way to write programs, more than one way to view the concepts of a complex piece of software. Some of these have revolutionized the art of programming, while others mostly languish in obscurity. Today, we have about half a dozen of these paradigms with significant followings. They each have their ups and downs, and each has a specialty where it truly shines. So let’s take a look at them.

Now, it’s entirely possible for a programming language to use or encourage only a single paradigm, but it’s far more common for languages to support multiple ways of writing programs. Thanks to one Mr. Turing, we know that essentially all languages are, from a mathematical standpoint, equivalent, so you can create, say, C libraries that use functional programming. But I’m talking about direct support. C doesn’t have native objects (struct doesn’t count), for example, so it’s hard to call it an object-oriented language.

Where it all began

Imperative programming is, at its heart, nothing more than writing out the steps a program should take. Really, that’s all there is to it. They’re executed one after the other, with occasional branching or looping thrown in for added control. Assembly language, obviously, is the original imperative language. It’s a direct translation of the computer’s instruction set and the order in which those instructions are executed. (Out-of-order execution changes the game a bit, but not too much.)

The idea of functions or subroutines doesn’t change the imperative nature of such a program, but it does create the subset of structured or procedural programming languages, which are explicitly designed for the division of code into self-contained blocks that can be reused.

The list of imperative languages includes all the old standbys: C, Fortran, Pascal, etc. Notice how all these are really old? Well, there’s a reason for that. Structured programming dates back decades, and all the important ideas were hashed out long before most of us were born. That’s not to say that we’ve perfected imperative programming. There’s always room for improvement, but we’re far into the realm of diminishing returns.

Today, imperative programming is looked down upon by many. It’s seen as too simple, too dumb. And that’s true, but it’s far from useless. Shell scripts are mostly imperative, and they’re the glue that holds any operating system together. Plenty of server-side code gets by just fine, too. And then there’s all that “legacy” code out there, some of it still in COBOL…

The imperative style has one significant advantage: its simplicity. It’s easy to trace the execution of an imperative program, and they’re usually going to be fast, because they line up well with the computer’s internal methods. (That was C’s original selling point: portable assembly language.) On the other hand, that simplicity is also its biggest weakness. You need to do a lot more work in an imperative language, because they don’t exactly have a lot of features.

Objection!

In the mid-90s, object-oriented programming (OOP) got big. And I do mean big. It was all the rage. Books were written, new languages created, and every coding task was reimagined in terms of objects. Okay, but what does that even mean?

OOP actually dates back much further than you might think, but it only really began to get popular with C++. Then, with Java, it exploded, mainly from marketing and the dot-com bubble. The idea that got so hot was that of objects. Makes sense, huh? It’s right there in the name.

Objects, reduced to their most basic, are data structures that are deeply entwined with code. Each object is its own type, no different from integers or strings, but they can have customized behavior. And you can do things with them. Inheritance is one of them: creating a new type of object (class) that mimics an existing one, but with added functionality. Polymorphism is the other: functions that work differently depending on what type of object they’re acting on. Together, inheritance and polymorphism work to relieve a huge burden on coders, by making it easier to work with different types in the same way.
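
A tiny C++ example shows both at work:

#include <iostream>

class Enemy {
public:
    virtual void attack() { std::cout << "The enemy attacks!\n"; }
    virtual ~Enemy() = default;
};

// Inheritance: a Dragon is an Enemy, with behavior of its own.
class Dragon : public Enemy {
public:
    void attack() override { std::cout << "The dragon breathes fire!\n"; }
};

int main() {
    Enemy* foes[] = { new Enemy, new Dragon };

    // Polymorphism: the same call does different things for each object.
    for (Enemy* e : foes) {
        e->attack();
        delete e;
    }

    return 0;
}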

That’s the gist of it, anyway. OOP, because of its position as the dominant style when so much new blood was entering the field, has a ton of information out there. Design patterns, best practices, you name it. And it worked its way into every programming language that existed 10-15 years ago. C++, Java, C#, and Objective-C are the most used of the “classic” OOP languages today, although every one of them offers other options (including imperative, if you need it). Most scripting-type languages have it bolted on somewhere, such as Python, Perl, and PHP. JavaScript is a bit special, in that it uses a different kind of object-oriented programming, based on prototypes rather than classes, but it’s no less OOP.

OOP, however, has a couple of big disadvantages. One, it can be confusing, especially if you use inheritance and polymorphism to their fullest. It’s not uncommon, even in the standard libraries of Java and C#, to have a class that inherits from another class, which inherits from another, and so on, 10 or more levels deep. And each subclass can add its own functions, which are passed on down the line. There’s a reason why Java and C# are widely regarded as having some of the most complete documentation of any programming language.

The other disadvantage is the cause of why OOP seems to be on the decline. It’s great for code reuse and modeling certain kinds of problems, but it’s a horrible fit for some tasks. Not everything can be boiled down to objects and methods.

What’s your function?

That leads us to the current hotness: functional programming, or FP. The functional fad started as a reaction to overuse of OOP, but (again) its roots go way back.

While OOP tries to reduce everything to objects, functional programming, shockingly enough, models the world as a bunch of functions. Now, “function” in this context doesn’t necessarily mean the same thing as in other types of programming. Usually, for FP, these are mathematical functions: they have one output for every input, no matter what else is happening. The ideal, called pure functional programming, is a program free of side effects, such that it is entirely deterministic. (The problem with that? “Side effects” includes such things as user input, random number generation, and other essentials.)

FP has had its biggest success with languages like Haskell, Scala, and—amazingly enough—JavaScript. But functional, er, functions have spread to C++ and C#, among others. (Python, interestingly, has rejected, or at least deprecated, some functional aspects.)

It’s easy to see why. FP’s biggest strength comes from its mathematical roots. Logically, it’s dead simple. You have functions, functions that act on other functions, functions that work with lists, and so on. All of the basic concepts come straight from math, and mistakes are easily found, because they stick out like a sore thumb.

So why hasn’t it caught on? Why isn’t everybody using functional programming? Well, most people are, just in languages that weren’t entirely designed for it. The core of FP is fairly language-agnostic. You can write functions without side effects in C, for example; it’s just that a lot of people don’t.
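
The difference is easy to show; here it is in C++:

#include <iostream>

int total = 0;

// Impure: the result depends on (and changes) outside state.
int addToTotal(int x) {
    total += x;
    return total;
}

// Pure: the same inputs always produce the same output.
int add(int a, int b) {
    return a + b;
}

int main() {
    std::cout << addToTotal(5) << '\n';  // 5
    std::cout << addToTotal(5) << '\n';  // 10: same call, different result
    std::cout << add(5, 5) << '\n';      // always 10
    return 0;
}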

But FP isn’t everywhere, and that’s because it’s not really as simple as its proponents like to believe. Like OOP, not everything can be reduced to a network of functions. Anything that requires side effects means we have to break out of the functional world, and that tends to be messy. (Haskell’s method of doing this, the monad, has become legendary in the confusion it causes.) Also, FP code really, really needs a smart interpreter, because its mode of execution is so different from how a computer runs, and because it tends to work at a higher level of abstraction. But interpreters are universally slower than native, relegating most FP code to those higher levels, like the browser.

Your language here

Another programming paradigm that deserves special mention is generic programming. This one’s harder to explain, but it goes something like this: you write functions that accept a set of possible types, then let the compiler figure out what “real” type to use. Unlike OOP, the types don’t have to be related; anything that fits the bill will work.

Generic programming is the idea behind C++ templates and Java or C# generics. It’s also really only used in languages like that, though many languages have “duck-typing”, which works in a similar fashion. It’s certainly powerful; most of the C++ standard library uses templates in some fashion, and that percentage is only going up. But it’s complicated, and you can tie your brain in knots trying to figure out what’s going on. Plus, templates are well-known time sinks for compilers, and they can increase code size by some pretty big factors. Duck-typing, the “lite” form of generic programming, doesn’t have either problem, but it can be awfully slow, and it usually shows up in languages that are already slow, only compounding the problem.
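
Here’s the classic C++ illustration: a function template that works for any type with a less-than operator, related or not:

#include <iostream>
#include <string>

// The compiler figures out the "real" type at each call site.
template <typename T>
T larger(T a, T b) {
    return (a < b) ? b : a;
}

int main() {
    std::cout << larger(3, 7) << '\n';      // T is int
    std::cout << larger(2.5, 0.5) << '\n';  // T is double
    std::cout << larger(std::string("ant"),
                        std::string("bee")) << '\n';  // T is std::string

    return 0;
}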

What do I learn?

There’s no one right way to code. If we’ve learned anything in the 50+ years the human race has been doing it, it’s that. From a computer science point of view, functional is the way to go right now. From a business standpoint, it’s OOP all the way, unless you’re looking at older code. Then you’ll be going procedural.

And then there are all those I didn’t mention: reactive, event-driven, actor model, and dozens more. Each has its own merits, its own supporters, and languages built around it.

My best advice is to learn whatever your preferred language offers first. Then, once you’re comfortable, move on, and never stop learning. Even if you’ll never use something like Eiffel in a serious context, it has explored an idea that could be useful in the language you do use. (In this case, contract programming.) The same could be said for Erlang, or F#, or Clojure, or whatever tickles your fancy. Just resist the temptation to become a zealot. Nobody likes them.

Now, some paradigms are harder than others, in my opinion. For someone who started with imperative programming, the functional mindset is hard to adjust to. Similarly, OOP isn’t easy if you’re used to Commodore BASIC, and even experienced JavaScript programmers are tripped up by prototypes. (I know this one first-hand.)

That’s why I think it’s good that so many languages are adopting a “multi-paradigm” approach. C++ really led the way in this, but now it’s popping up everywhere among the “lower” languages. If all paradigms (for some suitable value of “all”) are equal, then you can use whatever you want, whenever you want. Use FP for the internals, wrapped by an event-driven layer for I/O, calling OOP or imperative libraries when you need them. Some call it a kitchen-sink approach, but I see programmers as like chefs, and every chef needs a kitchen sink.