Modern C++ for systems, part 1

In today’s world, there are two main types of programming languages. Well, there are a lot of main types, depending on how you want to look at it, but you can definitely see a distinction between those languages intended for, say, web applications and those intended for operating systems. It’s that distinction that I want to look at here.

For web apps (and most desktop ones), we have plenty of options. JavaScript is becoming more and more common on the desktop, and it’s really the only choice for the web. Then we’ve got the “transpilers” bolted on top of JS, like TypeScript, but those are basically the same thing. And we can also write our user applications in Python, C#, or just about anything else.

On the other side—and this is where it gets interesting, in my opinion—the “systems” side of the equation doesn’t look so rosy. C remains the king, and it’s a king being assaulted from all sides. We constantly see articles and blog posts denigrating the old standby, and wouldn’t you know it? Here’s a new, hip language that can fix all those nasty buffer overflows and input-sanitization problems C has. A while back, it was Go. Now, it’s Rust. Tomorrow, it might be something entirely different, because that’s how fads work. Here today, gone tomorrow, as they say.

A new (or old) challenger

C has its problems, to be sure. Its standard library is lacking, the type system is pretty much a joke, and the whole language requires a discipline in coding that really doesn’t mesh with the fast and loose attitudes of today’s coders. It’s also mature, which hipsters like to translate as “obsolete”, but here that age does come at a cost. Because C has been around so long, because it’s used in so many places, backwards compatibility is a must. And that limits what can be done to improve the language. Variable-length arrays, for instance, are 18 years old at this point, and they still aren’t that widely used.

On the other hand, “user-level” languages such as Python or Ruby tend to be slow. Too slow for the inner workings of a computer, such as on the OS level or in embedded hardware. Even mobile apps might need more power. And while the higher-level languages are more expressive and generally easier to work with, they often lack necessary support for lower-level operations.

Go and Rust are attempts at bridging this gap. By making languages that allow the wide range of operations needed at the lowest levels of a system, while preventing the kinds of simple programming errors a C compiler wouldn’t even notice, they’re supposed to be the new wave in system-level code. But Go doesn’t have four decades of evolution behind it. Rust can’t compile a binary for an 8-bit AVR microcontroller. And neither has the staying power of the venerable C. As soon as something else comes along, Rust coders will chase that next fad. It happened with Ruby, so there’s no reason it can’t happen again.

But what if I told you there’s a language out there that has most of C’s maturity, more modern features, better platform support[1], and the expressive power of a high-level language? Well, if you didn’t read the title of this post, you might be surprised. Honestly, even if you did, you might still be surprised, because who seriously uses C++ nowadays?

Modern world

C++ is about as old as I am. (I’m 33 for the next week or so, and the original Cfront compiler was released in 1985, so not too much difference.) That’s not quite C levels of endurance, but it’s pretty close, and the 32-bit clocks will overflow before most of today’s crop of lower-level languages reach that mark. But that doesn’t mean everything about C++ is three decades out of date.

No, C++ has evolved, and never so much as in 2011. With the release of that version of the standard, so much changed that experienced programmers consider “Modern” C++ to be a whole new language. And I have to agree with them on that. Compared to what we have now, pre-2011 C++ looks and feels ancient, clunky, baroque. Whatever adjectives your favorite hater used in 2003, they were probably accurate. But no more. Today, the language is modern, it’s fresh, and it is much improved. So much improved, in my opinion, that it should be your first choice when you need a programming language that is fast, safe, and expressive. Nowhere else can you get all three.

Why not Java/C#?

C++ often gets compared to Java and C#, especially when talk turns to desktop applications. But on the lower levels, there’s no contest. For one, both Java and C# require a lot of infrastructure. They’re…not exactly lightweight. Look at Android development if you don’t believe me on the Java side. C# is a little bit better, but still nowhere near as efficient in terms of space.

Second, you’re limited in platform support. You can’t very well run Java apps on an iPhone, for instance, and while Microsoft has opened up on .NET in the past few years, it’s still very much a Windows-first ecosystem. And good luck getting either of them on an embedded architecture. (Or an archaic one.)

That’s not to say these aren’t good languages. They have their uses, and each has its own niche. Neither fits well for our purposes, though.

Why not Go/Rust?

Go, Rust, and whatever other hot new ideas are kicking around out there don’t suffer from the problem of being unfit for low levels. After all, they’re made for that purpose, and it’d be silly to make a programming language that didn’t fill its intended role.

However, these lack the maturity of C++, or even Java or C#. They don’t have the massive weight of history, the enormous library of documentation and third-party code and support. Worse, they aren’t really standardized, so you’re basically at the mercy of their developers. Tech companies have a reputation for short attention spans, which means you can’t be sure they won’t stop development tomorrow because they think they’ve come up with something even better. (Ask me about Angular.)

Why C++?

Even if you think some other language works better for your system, I’d still argue that you should give Modern C++ a shot. It can do essentially everything the others can, and that’s not just an argument about what’s Turing-complete. With the additions of C++11, 14, and 17, we’re not talking about the “old” style anymore. You won’t be seeing screen-long type declarations and dizzying levels of bracket nesting. Things are different now, and C++ even has a few features lacking from other popular languages.
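To give you a taste of what I mean, here’s a small, purely illustrative sketch using features from those newer standards (not from any particular project):

// Modern C++: type inference, range-for, structured bindings, and
// smart pointers in place of manual memory management.
#include <iostream>
#include <map>
#include <memory>
#include <string>
#include <vector>

int main() {
    std::map<std::string, std::vector<int>> scores{{"alice", {1, 2, 3}}};

    // No screen-long iterator declarations here (C++17 structured bindings):
    for (const auto& [name, rolls] : scores) {
        std::cout << name << " made " << rolls.size() << " rolls\n";
    }

    // No raw new/delete; the unique_ptr cleans up after itself.
    auto message = std::make_unique<std::string>("fast, safe, expressive");
    std::cout << *message << '\n';
}

So give it a shot.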

I know I will. Later on, I hope to show you how C++ can replace both the old, dangerous C and the new, limiting languages like Rust. Stay tuned for that.


  [1] Strictly speaking, Microsoft doesn’t have a C compiler, only a C++ one. MSVC basically doesn’t track new versions of the C standard unless they come for free with C++ support.

The JavaScript package problem

One of the first lessons every budding programmer learns is a very simple, very logical one: don’t reinvent the wheel. Chances are, most of the internal work of whatever it is you’re coding has already been done before, most likely by someone better at it than you. So, instead of writing a complex math function like, say, an FFT, you should probably hunt down a library that’s already made, and use that. Indeed, that was one of the first advances in computer science, the notion of reusable code.

Of course, there are good reasons to reinvent the wheel. One, it’s a learning experience. You’ll never truly understand the steps of an algorithm until you implement them yourself. That doesn’t mean you should go and use your own string library instead of the standard one, but creating one for fun and education can be very rewarding.

Another reason is that you really might be able to improve on the “standard”. A custom version of a standard function or algorithm might be a better fit for the data you’ll be working with. Or, to take this in another direction, the existing libraries might all have some fatal flaw. Maybe they use the wrong license, or they’re too hard to integrate, or you’d have to write a wrapper that’s no less complicated than the library itself.

Last of all, there might not be a wheel already invented, at least not in the language you’re using. And that brings us to JavaScript.

Bare bones

JavaScript has three main incarnations these days. First, we have the “classic” JS in the browser, where it’s now used for pretty much everything. Next is the server-side style exemplified by Node, which is basically the same thing, but without the DOM. (Whether that’s a good or bad thing is debatable.) Finally, there’s embedded JavaScript, which uses something like Google’s V8 to make JS work as a general scripting language.

Each of these has its own quirks, but they all share one thing in common. JavaScript doesn’t have much of a standard library. It really doesn’t. I mean, we’re talking about a language that, in its standardized form, has no general I/O. (console.log isn’t required to exist, while alert and document.write only work in the browser.) It’s not like Python, where you get builtin functions for everything from creating ZIP files to parsing XML to sending email. No, JS is bare-bones.

Well, that’s not necessarily a problem. Every Perl coder knows about CPAN, a vast collection of modules that contains everything you want, most things you don’t, and a lot that make you question the sanity of their creators. (Question no longer. They’re Perl programmers. They’ve long since lost their sanity.) Other languages have created similar constructs, such as Python’s PyPI (or whatever they’re using these days), Ruby’s gems, the TeX CTAN collection, and so on. Whatever you use, chances are you’ve got a pretty good set of not-quite-standard libraries, modules, and the like just waiting to be used.

So what about JavaScript? What does it have? That would be npm, which quite transparently started out as the Node Package Manager. Thanks to the increase in JS tooling in recent years, it’s grown to become a kind of general JavaScript repository manager, and the site backing it contains far more than just Node packages today. It’s a lot more…democratic than some other languages, and JavaScript’s status as the hipster language du jour has filled it with packages of sometimes questionable quality, but there’s no denying that it covers almost everything a JS programmer could need. And therein lies the problem.

The little things

The UNIX philosophy is often stated as, “Do one thing, and do it well.” JavaScript programmers have taken that to heart, and they’ve taken it to the extreme, and that has caused a serious problem. See, because JS has such a small standard library, there are a lot of little utility functions, functions that pop up in almost any sizable codebase, that aren’t going to be there.

For most languages, this would be no trouble. With C++, for instance, you’d link in Boost, and the linker would only pull in the parts you actually use. Java or C#? If you don’t import it, it won’t go in. And so on down the line, with one glaring exception.

Because JavaScript was originally made for the browser—because it was never really intended for application development—it has no capability for importing or even basic encapsulation. Node and recent versions of ECMAScript are working on this, but support is far from universal at this point. Worse, since JavaScript comes as plain text, rather than some intermediate or native binary format, even unused code wastes space and bandwidth. There’s no compilation step between server and client, and there’s no way to take only the parts of a library that you need, so evolutionary pressure has caused the JavaScript ecosystem to create a somewhat surprising solution.

That is the npm solution: lots of tiny packages with myriad interdependencies, and a package manager that integrates with the build system to put everything together in an optimized bundle. JavaScript, of course, has no end of build systems, which come in and out of style like seasonal fashions. I haven’t really looked into this space in about eight months, and my knowledge is already obsolete! (Who uses Bower anymore? It’s all Webpack…I think. Unless something else has replaced it by the time this post goes up.)

This is a prime example of the UNIX philosophy in action, and it can work. Linux package managers do it all the time: for reference, my desktop runs Debian, and it has about 2000 packages installed, most of which are simple C or C++ libraries, or else “data” packages used by actual applications. But I’m not so sure it works for JavaScript.

Picking up the pieces

From that one design decision—JavaScript sent as plain text—comes the necessity of small packages, but some developers have taken that a bit too far. In what other language would you need to import a new module to test if a number is positive? Or to pad the left side of a string? The JS standard library provides neither function, so coders have created npm packages for both, and those are only two of the most egregious examples. (Some might even be jokes, like the one package that does nothing but return the number 5, but it’s often hard to tell what’s serious and what isn’t. Think Poe’s Law for programmers.)

These wouldn’t be so bad, but they’re one more thing to remember to import, one more step to add into the build system. And the JavaScript philosophy, along with the bandwidth requirements its design enforces, combine to make “utility” libraries a nonstarter. Almost nobody uses bigger libraries like Underscore or Lodash these days; why bother adding in all that extra code you don’t need? People have to download that! The same even goes for old standbys like jQuery.

The push, then, is for ever more tiny libraries, each with only one use, one function, one export. Which wouldn’t be so bad, except that larger packages—you know, applications—can depend on all these little pieces. Or they can depend on each other. Or both, with the end result a spaghetti tangle of interdependent parts. And what happens if one of those parts disappears?

You might think that’s crazy, but it did happen. That “left pad” function I mentioned earlier? That one actually did vanish, thanks to a rogue developer, and it broke everything. So many applications and app libraries depended on little old left-pad, often indirectly and without even noticing, that its disappearance left them unable to update, unable to even install. For a few brief moments, half the JavaScript world was paralyzed because a package providing a one-line function, something that, in a language with simple string concatenation, comes essentially for free, was removed from the main code-sharing repository.

Solutions?

Is there a middle ground? Can we balance the need for small, space-optimized codebases with the robustness necessary for building serious applications? Or is npm destined to be nothing more than a pale imitation of CPAN crossed with an enthusiast’s idea of the perfect Linux distro? I wish I knew, because then I’d be rich. But I’ll give a few thoughts on the matter.

First off, the space requirement isn’t going away anytime soon. As long as we have mobile and home data caps, bandwidth will remain important, and wasting it on superfluous code is literally too expensive. In backwards Third World countries without net neutrality, like perhaps the US by the time this post goes up, it’ll be even worse. Bite-size packages work better for the Internet we have.

On the other hand, a lot of the more ridiculous JS packages wouldn’t be necessary if the language had a halfway decent standard library. I know the standards crew is giving it their best shot on this one, but compatibility is going to remain an issue for the foreseeable future. Yes, we can use polyfills, but then we’re back to our first problem, because that’s just more code that has to be sent down the wire, code that might not be needed in the first place.

The way the DOM is set up doesn’t really help us here, but there might be a solution hiding in there: speculative loading, where a small shim comes first, checking for the existence of the needed functions. If they’re found, then the rest of the app can come along. Otherwise, send out a request for the right polyfills first. That would take some pretty heavy event hacking, but it might be possible to make it work. (The question is, can we make it work better than what we’ve got?)
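Here’s a rough sketch of the idea in TypeScript; the file names and the features being tested are purely illustrative:

// Load a script, then run a callback once it arrives.
function loadScript(src: string, onLoad: () => void): void {
    const script = document.createElement("script");
    script.src = src;
    script.onload = onLoad;
    document.head.appendChild(script);
}

// The shim: check for the functions the app will need...
const missingFeatures =
    typeof (String.prototype as any).padStart !== "function" ||
    typeof (Array.prototype as any).includes !== "function";

// ...then either request the app directly or the polyfills first.
if (missingFeatures) {
    loadScript("/polyfills.js", () => loadScript("/app.js", () => {}));
} else {
    loadScript("/app.js", () => {});
}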

As for the general problem of ever-multiplying dependencies, there might not be a good fix. But maybe there’s also no need to keep the old maxim in mind. Do we really need to import a whole new package to put padding at the beginning of a string? Yes, JS has a wacky type system, but " " + string is what the package would do anyway. (Probably with a lot of extra type checking, but you’re going to do that, too.) If you only need it once, why bother going to all the trouble of importing, adding in dependencies, and all that?
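For the record, here’s roughly what writing it yourself looks like (a sketch, not the actual npm package):

// Pad the start of a string out to the given width. A one-off helper,
// written once, with whatever type checking you were going to do anyway.
function padLeft(input: string, width: number, fill: string = " "): string {
    let result = input;
    while (result.length < width) {
        result = fill + result;
    }
    return result;
}

console.log(padLeft("42", 5, "0")); // "00042"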

Ultimately, what has happened is that, as JavaScript lacks even the most basic systems for code reuse, its developers have reinvented them, but poorly. As it has a stunted standard library, third parties have had to fill those gaps. Again, they have done so poorly, at least in comparison to more mature languages. That’s not to say that it doesn’t work, because everything we use on the Internet today is proof that it does. But it could be better. There has to be a better way, so let’s find it.

Lettrine and other packages

TeX and its descendants (LaTeX, et al.) have a vast array of add-on packages for an author to use. Most of these are so specific that they’d probably only be useful to a handful of people, but some are almost universal. Memoir, of course, is one of them, though I’ve already spoken about it. This time, I’d like to look at a few others that I use.

Lettrine

The lettrine package is what I use to make drop caps and raised initials, as you’ll recall from the debacle that is my Pandoc filters. For paperback books, especially fiction, these are a nice typographic touch, the kind of thing that, I feel, makes a book look more professional. Personally, I prefer raised initials rather than the dropped capitals, but lettrine works for both.

It’s geared towards European languages, and the examples are actually only in French and German, not English. The documentation, however, is perfectly readable.

Using lettrine isn’t that hard. Unless you need some serious customization, you can get by with just putting the first letter (the one you want to raise or drop) in one set of braces, then anything you’d like in small caps in another: \lettrine{L}{ike this}.

By default, that gives you a two-line dropped capital, but you can change that with options that you place in square brackets before the text. So, to get my preferred raising, I would do: \lettrine[lines=1]{T}{his}. The manual has more options you can use, mostly for tweaking problem letters like J and Q, typesetting opening quotation marks in normal size, and even adding images for something like a medieval manuscript.
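Put together in one place, both styles look like this in source (the option values here are illustrative; the manual covers the full list):

% A raised initial, one line tall, with the rest of the word in small caps:
\lettrine[lines=1]{T}{his} chapter opens with a raised initial.

% A two-line drop cap, with a couple of the manual's indentation tweaks
% for the text sitting beside the hanging letter:
\lettrine[lines=2, findent=2pt, nindent=0pt]{Q}{uietly}, the drop cap
hangs into the paragraph.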

Microtype

The second package, microtype, is one of the more complicated ones. Fortunately, there’s not a lot you have to do to use it. Just including it in your document already gets you something subtly better.

What microtype actually does is hard to explain without delving deep into typography. Basically, it gives you a way to change aspects of a font such as kerning and have those changes affect the entire document. I’ll freely admit that I don’t understand everything it does, nor how it works. And the manual is over 240 pages long, so that won’t change anytime soon. Still, I can’t deny its usefulness.
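For completeness, this is the entire setup. The two options spelled out below are the headline features, and (with pdfLaTeX, at least) they’re already on by default:

\usepackage{microtype}

% Equivalent, with the two main features made explicit:
% \usepackage[protrusion=true, expansion=true]{microtype}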

selnolig

Finally, we have selnolig. This one is a bit obscure, compared to the other two, but it turned out to be exactly what I needed for one very specific scenario. Thus, I thought it made a good illustration of the breadth of TeX packages.

If you look closely at a (good) printed book, you’ll notice that the letters aren’t always distinct. In printing, we use a number of ligatures to join letters, which helps them “flow” together. Letter pairs and triplets like “fl”, “ffi”, and “ft” are often combined like this, though there are cases where it’s recommended that you don’t.

The selnolig package handles all that for you, breaking up the automatic ligatures TeX likes to add in the words where they don’t necessarily belong. It also activates some “historic” ligatures, if you want them, so your book can look like it was written in the 1700s.

Far more important, however, is the ability to selectively disable ligatures that gives the package its name. The font I used in Before I Wake (which I’ll probably continue to use in future books) has a very annoying “Th” ligature. Personally, I just don’t like the way it looks; it makes that combination of letters look too…thin. So I went looking for a way to get rid of it, and I found selnolig. Ridding myself of this pesky addition was a single line of code: \nolig{Th}{T|h}. That tells selnolig to always break “Th” into a separate “T” and “h”, with an invisible space in between. This space stops the ligature from forming, which is exactly what I wanted.

Everything else

I haven’t even touched on the myriad other TeX packages out there. There’s not enough time in my life to go through them. Of course, there are quite a few that I couldn’t live without: geometry, fontspec, graphicx, etc. For my aborted attempt at a popular book on mathematics, I used tikz to draw diagrams. I tried to write a book about conlangs almost ten years ago, and I used tipa for that one. Whatever you’re looking for, you’ll probably find it over at CTAN.

And that concludes this little series. Now, it’s back to writing books, rather than writing about writing them. Look for the latest fruit borne from my work with Pandoc, LaTeX, Memoir, and all the rest coming in July.

Pandoc filter for books

Pandoc is a great tool, as I’ve already stated, but it can’t do everything. And some of the things it does aren’t exactly what you’d want when creating a book. This is especially true when working on a print-ready PDF, as I’ve been doing.

Fortunately, there is a solution. Unfortunately, it’s not a pretty one. The way Pandoc works internally—I’m simplifying a lot here, so bear with me—is by turning your input document (Markdown, in my case) into an AST (an abstract syntax tree), then building your output from that. That’s basically the same thing a compiler does, so you could think of Pandoc as something like a Markdown compiler that can output PDF, EPUB, HTML, or whatever.

In addition to the usual “compiler” niceties, you’re also given access to an intermediate stage. If you ask it to, Pandoc will run filters that modify the AST before it’s sent off to the output stage. That lets you do a lot of modifications and effects that aren’t possible with plain Markdown or LaTeX or even HTML. It’s great, but…

Cruel and unusual

But Pandoc is written in Haskell. Haskell, if you’re not familiar with programming languages, is the tenth circle Dante didn’t tell you about. It’s awful, if you’ve ever written code in any other language, because it’s designed around a philosophy that doesn’t really match anything else in the programming world. (Seriously, go look up “monads” if you’re bored.) For us mere mortals, it’s sheer torture trying to understand a Haskell program, much less write one. And Pandoc’s default language for writing filters, alas, is this monstrosity.

If I had to do that, I’d have given up months ago. But I’m in luck, because Pandoc’s developer recognizes that we’re not all masochists, and he gave us the option to write filters in Python instead. Now that I can use. It’s not pretty. It’s not nice. But it gets the job done, and it does so without needing to install extra libraries or anything like that.

So, I’ve written a few filters that take care of some of the drudgery of converting Markdown into a decent-looking PDF. You can find them in this Gist if you want to see the gory details, and I’ll describe each of them below.
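Each filter slots into the conversion with Pandoc’s --filter flag, so a build looks something like this (the input and output names are illustrative):

pandoc novel.md --filter ./fancybreak.py --filter ./writelinks.py \
    --filter ./raisedinitials.py -o novel.pdf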

Fancy breaks

In Pandoc’s version of Markdown, you can get a horizontal rule (the HTML hr element) by making a line containing only asterisks with spaces between them: * * * is what I use these days. It’s simple enough, and you can use CSS to make it not appear as an actual line across the page, but as a nice vertical blank space that serves as a scene break. It even carries over into MOBI format when you use KindleGen.

But it doesn’t work for PDFs. Well, it does, but there’s an even better way. Since I’m using Memoir, I get what are called “fancy” breaks. In print, they’re nothing more than a centered set of asterisks, stars, or any other icon you’d like to use. Those can be a bit tacky if they show up after every scene, though, so there’s another option that only shows the “fancy” breaks when they’d be at the end of a page, but puts in a “plain” blank space otherwise. In Memoir, this is the \pfbreak command, and it’s smart enough to choose the right style every time.

So all the fancybreak.py filter does is swap out Pandoc’s HorizontalRule AST element, replacing it with the raw LaTeX code for Memoir’s “plain fancy break”. Take out the boilerplate, and it’s literally only three lines of code. Simple, even for me.
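In the JSON AST, a scene break is just an element tagged HorizontalRule, so the filter amounts to this (a sketch of the shape, using the pandocfilters helper module; the real thing is in the Gist):

from pandocfilters import toJSONFilter, RawBlock

def fancybreak(key, value, fmt, meta):
    # Swap every horizontal rule for Memoir's smart scene break.
    if key == 'HorizontalRule':
        return RawBlock('latex', '\\pfbreak')

if __name__ == '__main__':
    toJSONFilter(fancybreak)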

Writing links

Another difference between print and digital editions of a book comes from the formatting available. E-books are interactive in a way paper can’t be. They can use hyperlinks, and I do exactly that. But it’s impossible to click on a link in a paperback, and blue doesn’t show up in a black and white book, so I need to get rid of the link part. Ideally, I’d like to keep the address, though.

For this, I wrote the writelinks.py filter. This one’s a little bit harder to explain from a code point of view. From the reader’s perspective, though, it’s easy: every link is removed, but its address is added to the text in parentheses instead. It comes out as preformatted (or verbatim) text, in whatever monospaced font I’m using. (I actually don’t remember which one.)

The guts of this filter are only 5 lines, and the hardest part was working out exactly what I had to do to get the link address. Pandoc’s API documentation isn’t very helpful in this regard, and it gets even worse in a moment.
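For the curious, the shape of it is something like this (again a sketch, assuming a Pandoc recent enough that Link elements carry an attribute slot; the Gist has the real code):

from pandocfilters import toJSONFilter, Code, Space

def writelinks(key, value, fmt, meta):
    # A Link's value unpacks as [attr, inlines, [url, title]].
    if key == 'Link' and fmt == 'latex':
        attr, inlines, target = value
        # Keep the link text, then append the address as verbatim text.
        return inlines + [Space(), Code(["", [], []], "(" + target[0] + ")")]

if __name__ == '__main__':
    toJSONFilter(writelinks)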

Drop caps and raised initials

Here’s where I was ready to gouge my own eyeballs out. If you look at the code for dropcaps.py and raisedinitials.py, you’ll probably see why. Let’s back up just a second, though, so we can ask a simple question: What was I thinking? (Don’t answer that.)

I like the “raised initial” style for books. With this, the first letter of a chapter is printed bigger than the rest, and the rest of the first word is printed in regular-sized small caps. Other people like “drop caps”, where the initial letter hangs down into the first paragraph. Either way, one LaTeX package, lettrine, takes care of your needs. Using it with Memoir is a matter of importing it and adding a bit of markup at the beginning of each chapter.

Using it with Pandoc, on the other hand, takes more work. Since I don’t want to sprinkle LaTeX code all over my source documents, I made these filters to inject that code later in the process. And that was…not fun at all. After a lot of trial and error (going from Haskell to Python and back doesn’t give you a lot of useful diagnostics), I settled on the process I used in these filters. They’re the same thing, by the way. The only real difference is their output.

Well, and dropcaps.py has to break up a Quoted element so it doesn’t blow up the opening quotation mark instead of the first letter. Doing that required some trickery. If you’d like to try it for yourself, I suggest drinking heavily. If you don’t drink, well, you’ll want to by the time you’re done.
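To give you the flavor without the eyeball-gouging, here’s a heavily simplified skeleton of the approach (no Quoted handling, no front matter awareness; the real code is in the Gist):

from pandocfilters import toJSONFilter, Para, RawInline

pending = [False]  # flips on after each chapter heading

def raisedinitials(key, value, fmt, meta):
    if key == 'Header' and value[0] == 1:
        pending[0] = True
    elif key == 'Para' and pending[0]:
        pending[0] = False
        inlines = value
        if inlines and inlines[0]['t'] == 'Str':
            word = inlines[0]['c']
            markup = '\\lettrine[lines=1]{%s}{%s}' % (word[0], word[1:])
            return Para([RawInline('latex', markup)] + inlines[1:])

if __name__ == '__main__':
    toJSONFilter(raisedinitials)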

Limitations and future expansion

Anyway, after I finished this herculean task, I had a set of filters that would let me use my original source files but produce something much more suited to Memoir and the paperback format. Now I’ve got fancy scene breaks, links that write themselves out when they’re in a PDF, and those wonderfully enormous initial letters starting each chapter.

Can I do more? Of course I can. The last two filters don’t take into account “front matter” chapters. For my current novels, that’s not a problem, as I don’t use those. But if you need something with, say, an extended foreword, then you’d need to hack on the scripts to fix that.

There’s also nothing at all I can do for the opening pages of the book, the parts that come before the text. Here, the formats are too different even for filters. I’m talking about the title page, copyright page, dedication, and things like that. (These, in fact, are considered front matter, but they’re not part of a chapter, so the last paragraph doesn’t apply.) I still need to maintain two versions of those, and I don’t see any real way around that.

Still, what I’ve got so far is good. It was a lot of work, but it’s work I only have to do once. That’s the beauty of programming in a nutshell. It’s automation. Sure, I could have done the editing by hand instead of writing scripts to do it for me, and I probably would have been done sooner, but now I won’t have to do it all over again for Nocturne or any other book I write in the future.

To close out this miniseries, I have one more post in mind. This one will look at some of the additional LaTeX packages I used, like the lettrine one I mentioned above. By the time that comes out, maybe I’ll even have another book ready.

Playing with Memoir

Last time, I talked a little about how I used Pandoc to create a paperback book. Well, since I wrote that, I’ve not only posted the thing, but I have a copy of my own. Seriously. That’s a strange feeling, as I wrote about on Patreon.

Anyway, I promised I’d talk about how I did it, so that’s what I’ll do. First off, we’ll look at Memoir, one of the greatest inventions in the history of computer-aided authorship.

Optional text

Memoir is a LaTeX class; essentially, it’s a software package that gives you a framework for creating beautiful books with less painstaking effort than you would expect. (Not none, mind you. If you don’t know what you’re doing—I can’t say I do—then it can be…unwieldy.)

It’s not perfect, and the documentation is lacking in some respects (the package’s author actively refuses to tell you how to do some things that upset his aesthetic sensibilities), but it’s far superior to anything you’d get out of a word processor. Oh, and it’s like code, too, which is great for logical, left-brain types like me.

So, let’s assume you know how to use LaTeX and include classes and all that, because this isn’t a tutorial. Instead, I’ll talk about what I did to beat this beast into shape.

First off, we’ve got the class options. Like most LaTeX packages, Memoir is customizable in the extreme. It’s not meant only for books; you can do a journal article with it, or a thesis, or just about anything that could appear in print. So it has to be ready for all those different printing formats. Want to make everything print only on one side of the page? You can do that. Multicolumn output, like in a newspaper? Sure, why not?

The list goes on, but I only need a few options. “Real” books are single-column and double-sided, so I’ll be using the appropriate class options, onecolumn and twoside. Books in English start on the right-hand page, so add in openright. But wait! Since most books use these options anyway, Memoir simply makes them the default, so I don’t have to do anything! (Now, if you’re making manga or something, you might need to use openleft instead, but that’s the exception, not the rule.)

Besides those, I only need to specify two other options. One is ebook, which sets the page to a nice 6″ x 9″—exactly the same as Amazon’s default paperback size. If you want something else, it can get…nontrivial, but let’s stick to the basics. Oh, and I want american, because I am one; this changes some of the typography rules, though I’ll confess I don’t know which ones.
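All told, the class line stays short. Something along these lines covers everything above:

% onecolumn, twoside, and openright are already the defaults.
\documentclass[ebook, american]{memoir}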

Set it up

The remainder of the LaTeX “coding” is mostly a series of markup commands, which work a bit like HTML tags. The primary “content” ones are \frontmatter, \mainmatter, and \backmatter, which are common to Memoir and other packages; they tell the system where in the book you are. A preface, for instance, is in the front matter, and you can configure things so it gets its pages numbered in Roman numerals. Pretty much the usual, really, and not Memoir-specific.
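In skeleton form, then, the body of the book looks something like this:

\begin{document}
\frontmatter    % roman page numbers; title page, copyright, dedication
% ...
\mainmatter     % arabic numbering restarts at 1; the story itself
% ...
\backmatter     % anything after the story
\end{document}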

For typography, some of the things I did include:

  • Changing margins. Amazon is finicky when it comes to these. It actually rejected my original design, because Memoir’s 0.5″ is apparently less than their 0.5″. So I’m using 0.75″ on the left and right for Before I Wake, and I suspect Nocturne will need something even bigger on the inside edge. Top and bottom get 1″ each, which seems comfortable. (The sketch after this list shows the commands involved.)

  • Adding subtitle support. I don’t need this for either of the two novels I mentioned, but I might later on. Pandoc passes the subtitle part of its metadata through to LaTeX, but Memoir doesn’t support it. So I fixed that.

  • Creating a new title page. This was fun, for varying values of “fun”. Mostly, I just needed something functional. Then I had to do it again, to make the “half-title” page that professional books have.

  • Fixed headers and footers. This was mostly just configuration: page numbers in the outer corner of the header, author and title alternately in the middle, and footers left blank. Not too bad.

  • Changing the chapter style. Here’s where I almost gave up. By default, Pandoc tells LaTeX to create numbered chapters. Well, I did that myself. Rather than go back and change that (it would screw up the EPUB creation), I told Memoir to ignore the pre-made numbering completely. This is especially important when I get to Nocturne, because it has a prologue and epilogue. Having it put “Chapter 1: Prologue” would just be stupid.

  • Add blank pages. Now, you might be wondering about this one. Trust me, it’s for a good cause. Memoir is smart enough to add blank pages to make a chapter start on the right side (that openright thing I mentioned earlier), but it won’t do that at the end of the book, or if you go and manually make a title page, like I did. Oh, and if you’re doing a print book, remember that it ends on the left page.
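As promised in the first bullet above, the margin changes come down to Memoir’s layout commands. A sketch with the values I mentioned (the * asks Memoir to derive the text block from the margins, and \checkandfixthelayout applies it all):

\setlrmarginsandblock{0.75in}{0.75in}{*}   % left and right
\setulmarginsandblock{1in}{1in}{*}         % top and bottom
\checkandfixthelayout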

The whole thing was almost a hundred lines of code, including the text for, e.g., the copyright and dedication pages. All in all, it took about three or four hours of work, but I really only have to do it once. Next time around, I just tweak a few values here and there, and that’s it. Automation. It’ll eventually take everybody’s job.

Coming up

So that’s enough to get something that looks like a book, but I’m still not done. Next up, you’ll get to see the bane of my existence: Pandoc filters. And then I’ll throw in a little bit about some interesting LaTeX packages I use, because I need Code posts. See you then!

Pandoc, LaTeX, and Memoir

A while back, I wrote about the “inner workings” of my writing. My stories are created using Markdown, which I run through a program called Pandoc to turn into EPUB format. (Then, to make Amazon happy, I send that through KindleGen, which spits out a MOBI file that can then go on the Kindle Store.) It works, and there’s a minimum of fuss. No fiddling with margins and page layout, no worrying about arcane or proprietary file formats, just a lot of text that already looks pretty much like a book.
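In command form, that pipeline amounts to something like this (file names illustrative):

pandoc novel.md -o novel.epub   # Markdown in, EPUB out
kindlegen novel.epub            # spits out novel.mobi for Amazon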

Well, Amazon has a new thing for their KDP self-publishers: paperbacks. If you remember Createspace, it’s kinda like that, but integrated with the “main” Kindle Store. All you really have to do is upload a manuscript in the new format, and they’ll even give you an ISBN. (Note for non-US readers: my country seriously overcharges for ISBNs, so getting one for free is a big deal.) And the paper book shows up on Amazon as an option alongside the Kindle digital version. My brother already tried it with his book Angel’s Sin, and it seems to have worked.

So, of course, now I’m going to do the same with Before I Wake and the forthcoming Nocturne, as well as some of my future projects. To do this, however, I’ve had to delve deeper into the mechanics of my workflow.

The format issue

Amazon doesn’t like EPUBs. That’s well known. For digital books, they really, really want you to send them either a MOBI file, or something like HTML or a Word document. That’s most assuredly because of DRM. (It can’t be because they don’t know how to convert, since they give you a command-line tool to do so!) Be that as it may, I don’t really mind the last little step of running KindleGen to make an Amazon-friendly version; it’s easily automated, and I’ll still have the EPUB ready to go on Patreon or wherever.

With this new paperback option, however, there’s a problem: they don’t take MOBI, either! Nope, if you want to upload a manuscript for actual printing, your options are Word DOC/DOCX, plain HTML (possibly zipped with images and stylesheets), or “print-ready” PDF. That last is code for, “Do all the layout yourself, ’cause we ain’t touching it.”

Well, there’s the dilemma. Pandoc will happily output just about whatever format you like, but each of the options available has its downsides. Microsoft Word documents require (naturally) Microsoft Word, which isn’t really an option for a Linux user like myself. (The web app version of Office is also a nonstarter, for much the same reasons.) Zipped HTML is essentially an EPUB already, but then you have all the layout issues that come from shoving a “streaming” markup format like HTML into the “blocks” of a printed page. Fiddly bits like margins and headers and page numbers, and all with no usable previewer.

So what does that leave? Only one thing: PDF. And Pandoc can make a PDF, but not by itself. Fortunately, it knows someone who can help.

The type type

TeX (that’s really how it’s meant to be written in plain text) is the famous typesetting program originally developed by the equally famous Donald Knuth. I’ve used it many times before, on Linux and on Windows, and it works great for what it is: a “programmer’s” interface to text layout. Not a word processor, but a text processor.

TeX has been extended a few times over the past 40 or so years, and it has accrued an entire ecosystem of add-ons, bells and whistles, and documentation. If you’re willing to put in the work, you can get a seriously beautiful document. By default, it comes out in DVI format, which is relatively arcane and not really useful to anyone without a trip through a converter like dvips. But far more common these days is its PDF option. Its print-ready PDF option.

I don’t mind writing a bit of code. I’d rather do that than play around in a word processor GUI, clicking at buttons and tweaking margins. Give me the linear word any day of the week. So I decided I’d try to use TeX (actually, the much simpler wrapper LaTeX, and be absolutely sure you capitalize that one right!) with Pandoc to make a printable PDF of one of my books.

Writing my memoir

The full story is going to play out over the next few weeks. I’ve been searching for new material for the “Code” posts here, and now I’ve found it: a deep look into what it takes for me, a very non-artistic writer experienced with programming in multiple languages and environments, to create something that looks like a book.

In the first of multiple upcoming posts, I’ll look at memoir, a wonderful LaTeX extension (“class”, as they’re called) used for creating books that truly look like they were designed by professionals. It’s not exactly plug-and-play, and I’ll gladly admit that I had to do a lot of work to beat it into shape, but I only had to do it once. Now, every book I write can use the same foundation, the same basic template.

After that, I’ll go back to Pandoc and show you the work I did to convince it to do what I wanted. I’ve never written a horror story before, but this might be the closest to it, from a programmer’s perspective. It was a coding nightmare, one I’m not sure I’m out of yet, but the end result is everything I need in a book, as you’ll see.

Practical TypeScript: dice roller

In the last few “code” posts of Prose Poetry Code, there’s been one thing missing. One very important thing, and you might have noticed it. That’s right: there’s no code! I’ve been writing about generalities and theory and the like for a while now, but I’ve been neglecting the ugly innards. Part of that is because I haven’t been in much of a coding mood these past few months. Writing fiction was more interesting at the time. But I’ve had a few spare hours, and I really do want to get back into coding, so here goes.

A while back, I mentioned TypeScript, a neat wrapper that sits on top of JavaScript and generally makes it palatable. And at the end of that post, I said I’d be playing around with the language. So here’s what I’ve got, a practical example of using TypeScript. Well, dice rollers aren’t exactly practical—they’re a dime a dozen, and nobody really needs them—but you get the idea.

The code

I’ve posted a full ZIP of the code, including a config file, an HTML skeleton, and the JavaScript output. But here’s the TypeScript code (dice.ts) for your perusal.

// Simple function simulating a single die roll
function roll(sides: number): number {
    return 1 + Math.floor(Math.random() * sides);
}

// Our "back-end" will roll a number of simulated dice,
// possibly adding a bonus or subtracting a penalty.
// It will return an object containing the rolls and their total.
// (This is really for no reason other than to show off interfaces.)
interface RollResult {
    rolls: number[],
    total: number
}

// Roll a number of dice, and return the results,
// using the object we defined above.
function rollMany(count: number, sides: number): RollResult {
    let rolls: number[] = [];

    for (let i = 0; i < count; i++) {
        rolls.push(roll(sides));
    }

    let result = {
        rolls: rolls,
        total: rolls.reduce((a: number, b: number) => a+b, 0)
    };

    return result;
}

// This is where the interactive portion of the script begins.
// It's really just basic DOM stuff. In a moment, we'll actually
// hook it all up to the page.
function diceRoller() {
    console.log("Rolling...");

    let countBox = document.getElementById("count") as HTMLInputElement;
    let sidesBox = document.getElementById("sides") as HTMLInputElement;
    let addsBox = document.getElementById("adds") as HTMLInputElement;
    let resultRollsText = document.getElementById("result");
    let resultTotalText = document.getElementById("total");

    let count = +countBox.value;
    let sides = +sidesBox.value;
    let adds = +addsBox.value;

    let result = rollMany(count, sides);
    let totalRoll = result.total + adds;

    resultRollsText.innerHTML = result.rolls.join(" ");
    resultTotalText.innerHTML = "" + totalRoll;
}

// This clears out our results and options.
function clearAll() {
    console.log("Clearing...");

    // Note that this gives us an HTMLCollection...
    let boxes = document.getElementsByClassName("entry");

    // ...which can't be used like an array in for/of...
    for (let b in boxes) {
        // ...and contains only generic HTMLElements.
        (boxes[b] as HTMLInputElement).value = "";
    }

    let resultText = document.getElementById("result");
    let totalText = document.getElementById("total");
    resultText.innerHTML = "";
    totalText.innerHTML = "";
}

// Here's where we connect our functions to the page.
document.addEventListener("DOMContentLoaded", function () {
    let clearButton = document.getElementById("clear");
    let rollButton = document.getElementById("roll");

    clearButton.onclick = clearAll;
    rollButton.onclick = diceRoller;
});

Honestly, if you’ve seen JavaScript, you should have a pretty good idea of what’s going on here. We start with a helper function, roll, which does the dirty work of generating a number from 1 to N, exactly as if you rolled a die with N sides. (It’s not perfect, as JavaScript uses pseudorandom numbers, but it’s the best we can do.) If you take out the two type declarations, you wouldn’t be able to tell this was TypeScript.

Next comes something that inarguably identifies our source as not normal JS: an interface. We don’t really need it for something this simple, but it doesn’t cost anything, and it’s a good check on our typing. Our RollResult interface simply defines an object “layout” that we’ll pass from our back-end rolling function to the front-end output. If we screw up, the compiler lets us know—that’s the whole point of strong typing.

After this is the rollMany function. It builds on the simple roll, calling it multiple times and storing each result in an array. This array, along with the sum of its contents, will become the returned object, matching our interface.

We’ll skip over the diceRoller function briefly, as it’s the grand finale of our little app, and I wanted to save it for last. Instead, we move to clearAll. It would be an unremarkable DOM manipulation function, if not for two quirks in TypeScript. First, our simple HTML page defines a few entry boxes: one for the number of dice to roll, one for how many sides each die has, and one for a flat bonus or penalty to add to the total. These are <input> text elements, and they all have a CSS class of entry. Iterating over them (and only them) should be a piece of cake, right?

Not quite. See, the obvious way to do this, document.getElementsByClassName, returns an HTMLCollection, which isn’t quite the same thing as a JavaScript array. And TypeScript, unless you tell it to spit out the latest and greatest ECMAScript standard (and I haven’t), won’t let you use the easier for..of loop. Instead, you have to use for..in, which iterates over the keys of a collection—for arrays, these are the indexes.

And that brings us to our second problem. See, our DOM collection is full of HTMLElements. These could be anything at all, and are thus usable as essentially nothing. In particular, we can’t change the value property that all input textboxes have, because not every HTML element has one. As any Java or C# programmer knows, we need an explicit cast into the more specific type of HTMLInputElement. That seems like needless drudgery, but it pays off in more complex applications, and we could always use jQuery or something instead. In a “real” setting, we probably would be.

The areas where we output our rolls and their total are simple divs. We don’t have to do any casting with them; we can just clear their innerHTML properties. And the last bit is not much more than your usual “let’s set up some event handlers” block.

That leaves diceRoller. First up are a few variables to hold the DOM elements: 3 text boxes and 2 output areas. Again, we have to do a cast on anything that’s an input element, but that should be old hat by now.

Following that, we make a few variables that hold our input values in number form. I used the idiomatic “unary plus” conversion from string to number here.

Next comes result, which holds (naturally) our resulting roll. I threw in a totalRoll variable to hold the total (including the “adds” roll bonus/penalty), but you don’t really need it; you can calculate that as you’re putting it into the output field. Speaking of which, that’s where we end: joining the result array into a space-separated string (you could use commas or whatever), then putting that and the total into their proper places.

Conclusion

So TypeScript isn’t that hard. Counting comments and blank lines, this little thing weighs in at about 80 lines. The resulting JavaScript is less than 50. The difference comes from the strong typing, an interface definition that is purely for the benefit of the TypeScript compiler and the programmer, and a few other odds and ends that contribute nothing physical to the finished product.

It may seem silly. We’re writing more code, locking ourselves into the type system, and we’re not really getting much out of it. Sure, we’ve got protection from typos. If I’d misspelled the name of one of the RollResult fields, I’d get an error telling me where I went wrong, but that’s about it.

For something this simple, even that’s enough. I get that error when compiling. I don’t end up with a blank screen or unresponsive page and no indication as to why. JavaScript’s dynamic nature is great, but it’s also terrible. In coding, unlike in real life, there’s such a thing as too much freedom.

Now, if you like, feel free to take this tiny app and improve on it. The HTML, for instance, expects a stylesheet; you could make one. Add in some bells and whistles. Throw up a few preset buttons. Store the last 100 rolls in DOM local storage. Most of all, have fun with it.

First glance: Scala.js

I’ve learned and used a lot of different programming languages. If I had to pick one to take as the moneymaker, it’d probably be JavaScript right now. But JavaScript is pretty awful. It wouldn’t be my first choice if money were no object. That would likely be C++, but there’s one other contender, one language I liked from the first time I saw it. That language is Scala.

Scala has its problems, of course. It’s closely tied to the Java ecosystem, which…isn’t the greatest. Sure, it’s got a huge selection of libraries and frameworks for whatever you want to do, but the Java language itself is atrocious, the JVM has a well-deserved reputation for bloat, and it’s practically banned on the desktop. (And you run the risk of getting sued if you use it on mobile, as Google discovered.) So Scala isn’t exactly a popular option, for very good reason.

But it doesn’t have to be that way. Plenty of languages have been turned into tools for web development by the clever trick of compiling them into JavaScript. For Scala, that would be the job of Scala.js. It’s a relative newcomer (version 0.6.14 at the time of this writing), but it’s already looking good.

What is it?

Scala.js is, simply enough, a tool for turning Scala code into JavaScript. In that, it’s not much different from “transpilers” like TypeScript or Babel. Only the source language isn’t something vaguely reminiscent of the JS it will become. No, you write in Scala, so you have all the expressive power of that language.

I like Scala. It’s one of the few languages I’ve seen that accomplishes the feat of getting me interested in functional-style programming. I absolutely hate the “pure functional” approach of, e.g., Haskell, where you can’t actually do anything, because all those nasty “impurities” like, say, I/O, are restricted, and variables can’t, you know, vary. (Yeah, you can use monads, but they’re limited in how they interact with the rest of your code. That’s kinda the point.) Scala, instead, makes FP style encouraged, but not required. That’s much better.

On top of that, Scala is just a better Java than Java. It’s got most of what Gosling took out as “too hard”, like operator overloading. It’s got pattern matching. Lambdas exist, and they’re easy to write. Scala is what Java programmers wish they could use. Putting it in the browser? Sure.

The good

So we’ve got a better language, and that’s where most of the good comes from. Like what, you may ask? How about strong typing? Sure, you can get that in TypeScript, but Scala uses it to its full extent. The functional style is front and center, and it’s not hidden behind bad syntax like in JS. OOP is class-based, not prototype-based, and you don’t have to worry about browser compatibility. And then there are a lot of things Scala has that just aren’t feasible in JS, like case classes and traits.

Outside of the language proper, you’ve got a good HTML templating system, Scalatags, that has full type safety. And it’s not an addon, not in the JS sense. No, you’re working with something that’s based on Java, remember? You’ve got Java-derived build tools and libraries.

I could go on, but the main advantages of Scala.js come from the simple fact that you’re using Scala in place of JavaScript. Everything else great about it follows naturally from that. If you like Scala, that should be enough to give it a shot. If you hate JavaScript, same deal.

The bad

But it’s not all good. That same Java heritage is the greatest weakness of Scala.js. You’re using Java tools, a Java-based API, JVM-centric constructs. Not a fan of Java? Tough. You really can’t get away from it with Scala. (Really, though, this isn’t that much different from that thing Google did a few years ago. GWT? I can’t remember, and I’m too lazy to look it up.)

A worse shortcoming is the number of limitations Scala.js has on what Scala code you can use. Mostly, this makes sense—you can’t use parallel programming in the inherently single-threaded environment of the browser. You also can’t use any Java libraries; your code, and all the code you import, has to be pure Scala, with a very few exceptions. And then there are a few nits to pick, cases where the developers of Scala.js couldn’t figure out a way to make the construct work in JS. Reflection is one of these: you can’t use it at all, and you can’t use anything that uses it. Macros (the recommended replacement) are a poor substitute for reflection, just as templates are in C++.

Lastly, you’ve got the build environment. If you’ve used any sort of JavaScript tools made this decade, you’re probably familiar with Node, Grunt, Gulp, and the like. Scala.js inherits Scala’s sbt, which is yet another thing to learn. And IDEs don’t seem to like it very much; I’ve never managed to get an sbt-based project talking to an IDE, and Eclipse, Netbeans, and IDEA all laughed at my attempts to use sbt for their projects. Since Scala, like Java, really wants an IDE, this can be a problem. (Maybe they’ve fixed things since then, though.)

The verdict

Although the “bad” section outweighs the “good” in quantity, don’t let that make you think I don’t like Scala.js. I like the idea of it. I can even like the implementation. File sizes are a bit on the hefty side, but they’re working on that. Some of the API limitations can be changed. The thing’s still in beta, remember.

On the whole, if you can live with the restrictions it places on you, Scala.js looks to be a very interesting way of creating web apps without writing raw JavaScript. I know I’m keeping an eye on it.

Playing with TypeScript

JavaScript sucks, and we all know it. But it’s the lingua franca of the web (not like WebAssembly is ever taking off), so we have to learn to deal with it. That doesn’t, however, mean we have to write it. Oh, no. There’s a whole subculture of languages that compile into JavaScript, meaning you can write your code in something that looks sane (to you) without worrying about the end result being readable.

This trend really started a few years ago with CoffeeScript, which did an admirable job of making JS look like Ruby. From there, similar projects spun off, like Coco and LiveScript, which were far better in that they didn’t have anything to do with Ruby. And then Microsoft got involved. That’s where TypeScript comes from. Yes, that Microsoft.

But it’s not as bad as it sounds. The language and compiler use the Apache license, and that gives us a bit of protection from the worst of corporate silliness. And that means it might be interesting to look at TypeScript for what it is: an attempt at making a better JavaScript than JavaScript.

Up and running

TypeScript fits into the usual Node-hipster-modern ecosystem. You can install it through npm, and it works with most of the other big JavaScript tools like testing systems. Some of the big “web app” frameworks even use it by default now, meaning that I’m a little behind the times. Oh, and you can also use it with Visual Studio, but I can’t, because that’s Windows-only. (And VS Code is not the same.) But you don’t have to: the homepage even has a link to TypeScript support for the best text editor out there. I’ll leave it to you to decide which one that is, but I’ll tell you that it’s the same one I’m using to write this post.
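For the record, getting started is about two commands (a global install shown here; a per-project dev dependency works just as well, and the file name is illustrative):

npm install -g typescript   # installs the compiler, tsc
tsc hello.ts                # emits plain-JavaScript hello.js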

My type

So you get TypeScript installed, and you’ve got a nifty little compiler that spits out perfectly fashionable JavaScript, but what about the source language? What’s so special about it? In a word: types. (You’ll note that this word makes up half the language’s name.)

Essentially, TypeScript is nothing more than JavaScript with strong typing. That may not seem like much, but it’s actually a huge change that alters the whole way you write code. Variables are typed, functions are typed, objects are typed. To be fair, they all are in JavaScript, too, but that language doesn’t do anything with those types. TypeScript is made to use them. By itself, that’s almost enough to overcome my instinctive hatred of all things Microsoft. Almost.

On top of the bare bones JavaScript gives us, we’ve got tuples, enums, generics (C#-style in syntax, though erased at compile time, so not the best we could get, but far better than what JS offers), and an any type that lets us ignore the type system when necessary. If you wanted to, you could just about get away with writing regular JavaScript, tagging everything as any, and running it through the TypeScript compiler. But why would you do that? You’re supposed to be using a better language, right? So use it! Besides, like any decent statically-typed language, TypeScript gives us type inference to save most of the trouble of specifying what something should be.
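
A whirlwind tour of those extras, again as a sketch with invented names:

    // Tuples: fixed positions, each with its own type.
    let pair: [string, number] = ["answer", 42];

    // Enums: named constants that compile to plain JS objects.
    enum Color { Red, Green, Blue }
    let shade: Color = Color.Green;

    // Generics: one function, many types, all still checked.
    function first<T>(items: T[]): T {
        return items[0];
    }

    // 'any' is the escape hatch: the checker looks the other way.
    let wildcard: any = "could be anything";
    wildcard = 123;    // no complaints

    // Inference: no annotation here, but 'total' is known to be a number.
    let total = first([1, 2, 3]) + 10;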

The next level

Just having the added safety of stronger typing already gives TypeScript a leg up on its parent language. But if it didn’t offer anything else, I’d tell you not to bother. Fortunately for this post, it has a lot more.

JS is currently in a state of flux. Most (but not all) browsers support ES5, some let you use a lot of the new ES6 features, and the language standard itself has transitioned to the ridiculous rapid-release model that’s all the rage today, so it’s looking like we’ll soon be back in the bad old days of “Best viewed in {browser X}”. Polyfills can only take us so far, you see. For coders writing raw JavaScript, making something maximally compatible is a serious problem, and most just throw their hands up and say, “Just use Chrome like everyone else.”

That’s another case where I’ll give the MS team credit. TypeScript takes a lot of the newer JS additions and lets you use them now, just like Babel and the other “transpilers”. You can declare your variables with let instead of var, and the compiler will do the right thing. You get the new class syntax for free, which is great if you never could wrap your head around prototype OOP. Generators, which you still can’t count on in every browser, round out the list, so in some cases TypeScript even looks a bit like the future.
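
Here’s a sketch of what that looks like in practice (invented names again); everything below compiles down to plain ES5:

    // Block-scoped 'let' and 'const' become safe 'var's in the output.
    let base = 10;

    // ES6 class syntax, downleveled to the usual prototype plumbing.
    class Counter {
        private value: number;    // 'private' is TypeScript-only sugar

        constructor(start: number) {
            this.value = start;
        }

        next(): number {
            return ++this.value;
        }
    }

    const counter = new Counter(base);
    console.log(counter.next());    // prints 11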

Reservations

If there’s anything wrong with TypeScript, it’s not the kind of thing that shows up in a cursory inspection of the documentation. No, the main problems are twofold. First, it’s a project started by Microsoft, a company with a history of bad blood towards the open source community. The Apache license helps in that regard, though I don’t think it can ever completely alleviate some people’s fears.

Second, TypeScript has been around for a while now, but it’s only recently been picking up steam. Angular and React both like it a lot, as do some indie game engines like Phaser. But the JS community is fad-driven, and this new acceptance could be an indication of that. If TypeScript is simply “the next big thing”, then interest will fade once some other shiny thing catches the eyes of the hipsters. We can’t prevent that, but we can do our best to ignore it.

Will TypeScript become the future of web development? I can’t say. It’s definitely one option for the present, though. And it’s a pretty good option, from what I’ve seen. I think I’ll play around with it some more. Who knows? Maybe I’ll show you what I’ve made.

Programming in 2016: game development

I tried to make a game this year. My body failed me. But I’ve been keeping up with the news in the world of game development, and 2016 has been exciting, if a bit frustrating.

Unity

Unity’s still the big kahuna for indie development. But they’ve gone to that same “rapid release” model that everyone else has, the same one that has all but ruined Firefox, Windows, and so many other projects. On top of that, they switched to a subscription model. Rather, they switched to a subscription-only model.

Yes, that’s right. You can only rent the Unity engine on a monthly basis now. It’s still free for tiny devs, but it actually costs more now for everybody else. Sure, there’s the new Plus tier (something like $40 a month, I think), but it doesn’t give you much over the Free version. By the time you need it, you can probably afford the full subscription.

On the technical side, they’re making progress towards Vulkan support, and there are rumblings about actually upgrading their version of C# to something approaching modern. That’s probably thanks to the .NET Core open-sourcing I mentioned last week, but I don’t care what the reasoning is. Any upgrade is welcome here.

The other rumor is that they might switch to C++. I…don’t know about that one. On the one hand, I have to say, “Yes, please!” Modern C++ is just as good as C# in almost every way. In many, it’s better. However, what does this do to that huge body of C# Unity code? If there’s a compatibility layer, then you’ve got inefficiencies. If they simply include the “old” engine, they’ve only made more work for themselves. And then you have JavaScript, which is still (mostly) a supported language for Unity coding. How would it fit into a C++ future?

Godot

Godot is still my favorite 2D engine. It’s free, the source is open, and it’s very easy to use. 3D is a known problem, but that doesn’t bother me much; I’m not capable of making a 3D game anyway.

Well, Godot made their big announcement back in the summer, with the release of version 2.1. It’s not really revolutionary, but it sets the stage for greater things. Time will tell if those come to pass, but I think they will. With 2.2, we’re supposed to get a better renderer and possibly C# support. The big 3.0 might even add Vulkan to the mix, not that it helps me. And the Asset Library, well, it can only get bigger, right?

The main problem for Godot has been its documentation, and that’s much improved over this time last year. There’s a growing body of tutorials out there, too. I don’t think the engine has reached critical mass yet, but I also don’t think it has peaked.

Maybe—if I don’t get sick the day after I announce it—I’ll try another “game in a month” thing. If I do, it’ll be in Godot.

Lots of little ones

I didn’t do much in the way of development in 2016. I didn’t look at Unreal in anything other than passing, for example. But I’ve kept an eye on happenings in the game dev world, and here are some quick thoughts on other engines out there:

  • Unreal is, like the C++ it’s written in, solid and relatively unexciting. That’s what makes it exciting.
  • Superpowers might be a nice little JavaScript platform, but it’s got this horrible bug that makes all the dropdown boxes turn solid black. Makes it hard to use, you know?
  • Clickteam Fusion may or may not be getting bigger in 2017. They’re working on their version 3 release, and it might be cross-platform. Stay tuned for more on that front.
  • Amazon put out their Lumberyard (a fork of CryEngine). It’s free, as long as you’re willing to use their cloud services, but the real cost is in the machine you need to run the environment.
  • CryEngine itself is…strange. They’ve put out source code, but it’s not open. In fact, reading the license, it’s almost impossible to find a game you could even make! Maybe they’ll fix that, but I wouldn’t hold my breath.
  • The Atomic Game Engine looked like a promising release a few months ago, but it seems to be dead. The developers haven’t put out any news since May, and the forums were shut down in favor of Facebook. Sounds like they don’t want new users to me.
  • Finally, RPG Maker has a new version. It’s finally becoming something other than Windows-only, and the coding part has followed the hipster crowd from Ruby to JavaScript. In my opinion, that can only be a good thing.

I could go on, but I’m running out of year, so I’ll stop. Let’s just say this: 2016 was a good year for an independent game developer. 2017 will be even better. You’ve got a massive selection of engines at your disposal, from solid open-source offerings to AAA beasts. Maybe next year will be when we finally solve the asset problem. We’re getting there, slowly but surely.