The axioms

Let us consider the following statements as axiomatic:

  1. Principle of Necessity: Government, in some form, is a construct necessary for the creation and functioning of any collection of people larger than the tribe or village.

  2. Principle of Purpose: A government exists solely to protect the liberty, health, safety, and well-being of its constituents.

  3. Principle of Evolution: A form of government is not cast in stone; it must have the potential for change.

  4. Principle of Equality: A system of rule that classifies, based on intrinsic factors, some people as greater or lesser in rights or in being is untenable.

  5. Principle of Cooperation: A society of any size is able to achieve greater feats by working in concert rather than in competition.

  6. Principle of Initial Conditions: The Constitution of the United States is not a perfect document, but it is the best available starting point for enumerating the rights of the people and the responsibilities of government.

This set of axioms should be considered a framework, a definition of the boundaries of a space. With logic and deductive reasoning—two qualities sadly lacking in modern politics—it is possible to derive a system of political thought and a model for governance, one which upholds the principles above while remaining rooted in the practicalities of the modern world.

Indie genres

Being a game developer is hard. Going it alone as an indie is that much harder. You don’t have the budget that the big guys have, which means you can’t afford flashy, state-of-the-art graphics, A-list voice actors, or a soundtrack full of 80s hits. That doesn’t mean that indie games can’t be great. They can be, as the past few years have shown. (After I finish writing this post, I’ll probably go back to Sunless Sea, one of those great indie games.)

But the lack of money, and the resulting lack of assets, technology, and infrastructure, does have an impact on what an indie dev can do. That’s starting to change, but we’re not quite there yet. For the time being, entire swaths of gaming’s concept space are blocked off for those of us not backed by the AAA guys.

It’s for this reason that some gaming genres are far more common for indies. Those that are the easiest to write, those using the least amount of “expensive” bits, and those that fit within the capabilities of indie-available tools are best represented. Others are rarer, because they’re so much more difficult for a small studio or lone developer.

So let’s see what we can do. I’ve taken the most popular game genres and divided them into three broad categories: those that are very indie-friendly, the ones that indies probably can’t make, and a few that are on the bubble.

The good

  • Puzzles: Puzzle games are some of the first that budding game developers learn how to make, and they’re still highly popular. They’re easy to make, they don’t require a lot of high-quality assets, and people know they won’t be very fancy. Yet puzzle games can be incredibly addictive, even for casual players. These are always a good choice for an indie project, particularly a solo one.

  • Platformers: Another popular genre that perfectly fits the indie mold. Most of the low-cost and free engines have special support baked in for platformers, almost like the creators are steering you towards the genre. And it’s good that they are, as platformers have brought success to quite a few developers over the years. People thought they were dead when the 3D revolution hit, but time has only proven them wrong.

  • Adventure: Adventure gaming is another genre left for dead in decades past. But it, too, has found a resurgence in recent years. Point-and-click adventures, survival horror, visual novels, and even interactive fiction have grown in popularity. For indies, adventure is a good genre because it rewards great writing over eye candy. Art is hard for most programmers, but there are a lot of free assets you can use here.

  • Shooters: First-person or third-person, shooters are popular everywhere. Most of the major engines are tuned for them, and for a while it seemed like that was all the big guys knew how to make. Now, indies aren’t going to make the next blockbuster franchise, but that’s no reason to give up on the shooter genre. Plenty of niche gamers have a favorite shooter. The hardest parts are the assets (but Unity, for instance, has a lot of them available at a decent price) and the multiplayer backend. Get those straight, and you’re golden.

  • Retro: I’m using “retro” as a catchall term for a number of smaller genres that have been around for a long time and are still popular today…in their original forms. Think shoot-em-ups, for example. A lot of people seem to like the retro feel. Pixel art, 8-bit sounds, garish color schemes, and frustratingly difficult gameplay are the rule here, and a subset of gamers will eat that up. In other words, retro gaming is a perfect fit for indies: assets are easy to create, nobody expects them to be good, and you don’t have to worry much about game balance.

  • Sandbox: Ah, Minecraft. The open-world sandbox might be the defining genre of the 2010s, and it’s one that the bigger studios haven’t really tackled very much. Some of the best sandbox games, according to the gamers I’ve talked to, are the indie ones. The genre does have an allure to it. No story needed, AI is only for enemies, the world is made by code, and players will swoon over the “emergent” gameplay. What’s not to love? (Of course, it’s never that simple, but that hasn’t stopped indies from taking over the sandbox space.)

The bad

These next few genres are the ones that don’t seem indie-friendly, but a dedicated developer could make something out of them.

  • Turn-based strategy: Of the strategy variants, I prefer this one for its laid-back nature. But it’s a hard one for an indie. You’ve got two real choices. You can focus on single-player, but then you need a good handle on AI. Or you can make a multiplayer-centric game, with all the problems that entails. Still, it can be done, even as a volunteer effort. Freeciv and Battle for Wesnoth are two examples of free strategy games that have won hearts and minds.

  • Simulation: Sims are another tantalizing genre. They’re a little more complex than their sandbox cousins, often because they have restrictions and goals and such, but most of the same arguments apply. Simulation games also tend to be better focused, and focus is a powerful tool ignored by many devs on either side of the money line. But they’re a lot of work, and it’s a different kind of work. Simulations need to be researched, then written in such a way that they (mostly) reflect reality. Can indies do it? Kerbal Space Program proves they can, but it’s not going to be easy.

  • Role-playing: RPGs, in my opinion, are right on the line of what’s doable for a solo developer. They’re well within the reach of an indie studio, however. It’s not the code that’s the problem—RPG Maker takes care of most of that—but the combination of gameplay, artwork, and writing that makes an RPG hard. That shouldn’t stop you from trying. When done right, this genre is one of the best. Free or cheap assets help a lot here, although you still have to do the storyline yourself.

The ugly

Some genres are mostly beyond indie capabilities. Not that indies haven’t given them a shot. Sometimes, it even pays off.

  • Sports: With one exception, sports games are very unfriendly to smaller studios. You’re competing against juggernauts who’ve been at this for decades, and they have all the licenses, endorsements, and publicity. But if you’re willing to invent a new sport, you just might be able to slip into the picture. If you have a bigger budget, that is. It worked for Rocket League.

  • MMO: Just no. MMOs combine all the problems of RPGs with all the hassle of multiplayer shooters, and the whole is much greater, in terms of trouble, than the sum of its parts. Indies have tried, but few have really succeeded. It’s a combination of content and infrastructure that tends to doom them. The general lack of dedicated tooling (MMO-specific engines, etc.) doesn’t help matters.

  • Real-time strategy: The RTS genre is almost dead unless your game is named StarCraft, all but replaced by the MOBA. Either way, it’s a tall order for indies. Strong AI, good artwork, racks of servers to host the multiplayer matches, and the list only grows from there. To top it off, the real-time genres seem to attract more cheaters than just about anything else, so you have to be watchful. But monitoring games for cheating means taking your eyes off the code or hiring moderators. Neither is a good option for a dev on a tiny budget. If you somehow manage it, the payoff can be enormous, because of one recently coined word: esports.

  • Console: This is a platform, not a genre, but it’s worth mentioning here. Thanks to the closed nature of consoles, indie development usually isn’t worth pursuing. Yes, Sony and Microsoft have become more open in the last few years (Nintendo is, if anything, becoming even worse), but there are still too many barriers in the way to make console games a good use of your valuable time. Stick to the three PC platforms (Windows, Linux, and Mac) and the two mobile ones (Android, iOS) at the start.

Fathers

Let me be frank: I don’t have a lot of good things to say about fathers. Mine left over 20 years ago, about a month after my 12th birthday. I haven’t seen him in over a decade and a half, and I’ve only spoken to him once or twice in that same period. The life-changing event of his departure colors all my earlier memories, as well. I only hope that I someday have the chance to prove to myself that I can do a better job.

That’s not to say I know nothing of the subject. I have a stepfather, and the tie that binds us has lasted essentially my entire adult life. Because that tie was formed in adulthood, however, I’ve never seen him as a father in the parental sense. In my mind, the relationship between us is closer to one of equals.

My grandfather, who passed away in 2012, was also like a father to me, as much as he was to his seven children and the other descendants. Again, though, it’s not the same. He was far older (62 years, to be exact), so I didn’t feel the same bond that exists between parent and child.

In the media, fathers fall into a few broad categories. There’s the abusive alcoholic, the saintly sage, the blue-collar buffoon, and the vaguely man-shaped void that appears far too often in life and art. Characterizing a real, living man in such a way diminishes him, though. I understand the needs of the medium, but how hard is it to give depth to such an integral part of a family, especially in a story centered on that family?

I can’t say I’m an expert on fatherly affection. It’s something I’ve been denied for so long that I’ve all but forgotten what it’s like. But that doesn’t mean I can’t hold opinions on what it should be. Fathers should be leaders. They should be knowledgeable, strong of body and spirit, yet also sympathetic. Perhaps it’s outdated to say that a father is the head of a household, but he still holds one of the top positions, no matter where you fall on the political spectrum. He should act like it. Fathers have less attachment to their children by design—they didn’t spend nine months carrying them around—but “less” doesn’t have to mean “none”.

If you’re writing a story about fathers, now’s a good time. Yesterday was a day for them. The other 364? They should share them with their sons and daughters.

Democratization of development

Ten years ago, you had very few options for making a game. I know. I was there. For indies, you were basically limited to a few open-source code libraries (like Allegro) or some fairly expensive professional stuff. There were a few niche options—RPG Maker has been around forever, and it’s never cost too much—but most dev tools fell into the “free but bad” or “good if you’ve got the money” categories. And, to top it off, you were essentially limited to the PC. If you wanted to go out of your way, Linux and Mac were available, but most didn’t bother, and consoles were right out.

Fast forward five years, to 2011. That’s really around the time Unity took off, and that’s why we got so many big indie games around that time. Why? Because Unity had a free version that was more than just a demo. Sure, the “pro” version cost an arm and a leg (from a hobbyist perspective), but you only had to get it once you made enough profit that you could afford it. And so the 2010s have seen a huge increase in the number—and quality—of indie titles.

Five more years bring us to the present, and it’s clear that we’re in the midst of a revolution. Now, Unity is the outlier because it costs too much. $1500 (or $75/month) is now on the high end for game engines. Unreal uses a royalty model. CryEngine V is “pay what you want”. Godot and Atomic lead a host of free engines that are steadily gaining ground on the big boys. GameMaker, RPG Maker, and the like are still out there, and even the code-only options keep getting better.

Engines are a solved problem. Sure, there’s always room for one more, and newcomers can bring valuable insights and new methods of doing things. The basics, though, are out there for everyone. Even if you’re the most hardcore free-software zealot, you’ve got choices for game development that simply can’t be beat.

If you follow development in other media, then you know what’s happening. Game development is becoming democratized. It’s the same process that is bringing movie production out of the studio realm. It’s the same thing that gave every garage band or MIDI tinkerer a worldwide audience through sites like SoundCloud and Bandcamp. It’s why I was able to put Before I Wake into a store offering a million other e-books.

Games are no different in this respect. The production costs, the costs of the “back-end” necessities like software and distribution, are tending towards zero. Economists can tell you all about the underlying reasons, but we, as creators, need only sit back and enjoy the opportunity.

Of course, there’s still a ways to go. There’s more to a game than just the programming. Books are more than collections of words, and it takes more than cameras to make a movie. But democratization has cut down one of the barriers to entry.

Looking specifically at games, what else needs to be done? Well, we’ve got the hard part (the engine) out of the way. Simpler ways of programming are always helpful; Unreal’s Blueprints and all the other “code-less” systems have some nifty ideas. Story work doesn’t look like it can be made any easier than it already is, i.e., not at all. Similarly, game balance is probably impossible to solve in a general sense. Things like that will always have to be left to a developer.

But there is one place where there’s lots of room for improvement: assets. I’m talking about the graphics, sounds, textures, and all those other things that go into creating the audiovisual experience of a game. Those are still expensive or time-consuming, requiring their own special software and talent.

For asset creation, democratization is hard at work. From the venerable standbys of GIMP and Inkscape and Blender to the recently-freed OpenToonz, finding the tools to make game assets isn’t hard at all. Learning how to use them, on the other hand, can take weeks or months. That’s one of the reasons why it’s nearly impossible to make a one-man game these days: audiences expect the kind of polish that comes with having a dedicated artist, a dedicated musician, and so on.

So there’s another option, and that’s asset libraries. We’ve got a few of those already, like OpenGameArt.org, but we can always use more. Unity has grown so popular for indie devs not because it’s a good engine, but because it’s relatively inexpensive and because it has a huge library of assets that you can buy right there in the editor. When you can get everything you need for a first-person shooter for less than a hundred dollars (or that Humble Bundle CryEngine collection from a while back), that goes a long way towards cutting your development costs even further.

Sure, asset libraries won’t replace a good team of artists working on custom designs specifically for your game, but they’re perfect for hobbyists and indies in the “Early Access” stage. If you hit it big, then you can always replace the stock artwork with something better later on. Just label the asset-library version “Alpha”, and you’re all set.

Looking ahead to the beginning of the next decade, I can’t say how things will play out. The game engines that are in the wild right now won’t go away, especially those that have been released for free. And there’s nowhere to go but up for them. On the asset side of things, it’s easy to see the same trend spreading. A few big “pack” releases would do wonders for low-cost game development, while real-world photography and sound recording allow amateurs to get very close to professional quality without the “Pro” markup.

As a game developer, there’s probably no better time to be alive. The only thing that comes close is the early generation of PC games, when anyone could throw together something that rivaled the best teams around. Those days are long past, but they might be coming back. We may be seeing the beginning of the end for the “elite” mentality, the notion that only a chosen few are allowed to produce, and everyone else must be a consumer. Soon, the difference between “indie” and “AAA” won’t come down to the tools used. And that’s democracy in action.

Mothers

Yesterday was Mother’s Day. (Coincidentally enough, it was also my mom’s birthday.) That’s a time to reflect on motherhood in general, or on how it has affected us. If you think about it, being a mother is the most altruistic thing a woman can do. Months of pregnancy, hours of labor, and years of care, all for the biological, psychological, and physical well-being of another. Look at it from that perspective, and they deserve a whole lot more than just one day.

I consider myself blessed to have grown up so close to my mother and, until about a year ago, her mother. Not everyone has that opportunity, unfortunately. So let’s take this day off to think about what we do have. Let’s bring a little bit of yesterday into today.

If you’re a writer, what are your characters’ mothers like? What do they think of the adventures of their children?

For worldbuilders, today’s the day to contemplate motherhood as a cultural notion. Are mothers revered among your fictitious people? Are they granted leniency or honor that a childless woman wouldn’t receive?

And for everybody: we all had a mother at some point. Some of us still do, and we should be thankful.

The future of auxlangs

Auxlangs are auxiliary languages: conlangs specifically created to be a medium of communication, rather than for artistic purposes. In other words, auxlangs are made to be used. And two auxlangs have become relatively popular in the world. Esperanto is actually spoken by a couple million people, and it has, at times, been considered a possibility for adoption by large groups of people. Lojban, though constructed on different principles, is likewise an example of an auxlang being used to communicate.

The promise of auxlangs, of course, is the end of mistranslation. Different languages have different meanings, different grammars, different ways of looking at the world. That results in some pretty awful failures to communicate; a quick Internet search should net you hundreds of “translation fails”. But if we had a language designed to be a go-between for speakers of, say, English and Spanish, then things wouldn’t be so bad, right?

That’s the idea, anyway. Esperanto, despite its numerous flaws, does accomplish this to a degree. Lojban is…less useful for speaking, but it has a few benefits that we’ll call “philosophical”. And plenty of conlangers think theirs will be the one true international auxiliary language.

So let’s fast-forward a few centuries. Esperanto was invented on the very edge of living memory, as we know, and Lojban is even younger than that, but Rome wasn’t built in a day. Once auxlangs have a bit of history behind them, will any of them achieve that Holy Grail?

The obvious contender

They’d have to get past English first. Right now, the one thing holding back auxlang adoption is English. Sure, less than a quarter of the world’s population speaks it, but it’s the language for global communication right now. Nothing in the near future looks likely to take its place, but let’s look at the next best options.

Chinese, particularly Mandarin, may have a slight edge in sheer numbers, but it’s, well, Chinese. It’s spoken by Chinese, written by Chinese, and it’s almost completely confined to China. Sure, Japan, Korea, and much of Southeast Asia took parts of its writing system and borrowed tons of words, but that was a thousand years ago. Today, Chinese is for China. No matter how many manufacturing jobs move there (and they’re starting to leave), it won’t be the world language. That’s not to say we won’t pick a few items from it, though.

On the surface, Arabic looks like another candidate. It’s got a few hundred million speakers right now, and they’re growing. It has a serious written history, the support of multiple nations…it’s almost the perfect setup. But that’s Classical Arabic, the kind used in the Koran. Real-life “street” Arabic is a horrible mess of dialects, some mutually unintelligible. But let’s take the classical tongue. Can it gain some purchase as an auxlang?

Probably not. Again, Arabic is associated with a particular cultural “style”. It’s not used only by Muslims, or even only by Arabs, mind you, but that’s the common belief. There’s a growing backlash against Muslims in certain parts of the world, and some groups are taking advantage of this to further fan the flames. (I write this a few hours after the Brussels bombings on March 22.) But Arabic’s problems aren’t entirely political. It’s an awful language to try to speak, at least by European standards. Chinese has tones, yes, but you can almost learn those; pharyngeal and emphatic consonants are even worse for us. Now imagine the trouble someone from Japan would have.

Okay, so the next two biggest language blocs are out. What’s left? Spanish is a common language for most of two continents, although it has its own dialect troubles. Hindi is phonologically complex, and it’s not even a majority language in its own country. Latin is dead, as much as academics hate to acknowledge that fact. Almost nothing else has the clout of English, Chinese, and Arabic. It would take a serious upheaval to make any of them the world’s lingua franca.

Outliving usefulness

It’s entirely possible that we’ll never need an international auxiliary language at all, if automatic translation becomes good enough for daily, real-time use. Some groups are making great headway on this right now, and it’s only expected to get better.

If that’s the case, auxlangs are then obsolete. There’s no other way of putting it. If computers can translate between any two languages at will, then why do you need yet another one to communicate with people? It seems likely that computing will only become more ubiquitous. Wearables look silly to me, but I’ll admit that I’m not the best judge of such things. Perhaps they’ll go mainstream within the next decade.

Whatever computers you have on your person, whether in the form of a smartphone or headgear, likely won’t be powerful enough to do the instantaneous translation needed for conversation, but it’ll be connected to the Internet (sorry, the cloud), with all the access that entails. Speech and text could both be handled by such a system, probably using cameras for the latter.

For auxlang designers, that’s very much a dystopian future. Auxiliary languages effectively become a subset of artlangs. But never fear. Not everyone will have a connection. Not everyone will have the equipment. It’ll take time for the algorithms to learn how to translate the thousands of spoken languages in the world, even if half of them are supposed to go extinct in the coming decades.

The middle road

Auxlangs, then, have a tough road ahead. They have to displace English as the world language, then hold off the other natural contenders. They need real-time translation to be a much more intractable problem than Google and Microsoft are making it out to be. But there’s a sliver of a chance.

Not all auxlangs are appropriate as an international standard of communication. Lojban is nice in a logical, even mathematical way, but it’s too complicated for most people. A truly worldwide auxlang won’t look like that. So what would it look like?

It’ll be simple, that’s for sure. Think something closer to pidgins and creoles than lambda calculus. Something like Toki Pona might be too far down the scale of simplicity, but it’s a good contrast. The optimum is probably nearer to it than to Lojban. Esperanto and other simplified Latins can work, but you need to strip out a lot of filler. Remember, everyone has to speak this, from Europeans to Inuits to Zulus to Aborigines, and everywhere in between. You can’t please everybody, but you can limit the damage.

Phonology would also tend to err on the side of simplicity. No tones, no guttural sounds half the world would need to learn, no complex consonant clusters (but English gets a pass with that one, strangely enough). The auxiliary language of the future won’t be Hawaiian, but it won’t be Georgian, either. Again, on the lower side of medium seems to be the sweet spot.

The vocabulary for a hypothetical world language will be the biggest problem. There’s no way around it that I can see, short of doing some serious linguistic analysis or using the shortcut of “take the same term in a few big languages and find the average”. Because of this, I can seriously see a world auxlang as being a pidgin English. Think a much simplified grammar, with most of the extraneous bits thrown out. Smooth out the phonology (get rid of “wh”, drop the dental fricatives, regularize the vowels, etc.) and make the whole thing either more isolating or more agglutinative—I’m not sure which works best for this. The end result is a leaner language that is easier to pick up.

Or just wait for the computers to take care of things for us.

Worldbuilding and the level of detail

Building a world, whether a simple stage, a vast universe, or anywhere in between, is hard work. Worse, it’s work that never really ends. One author, one team of writers, one group of players—none of these could hope to create a full-featured world in anything approaching a reasonable time. We have to cut corners at every turn, just to make the whole thing possible, let alone manageable.

Where to cut those corners is the hard part. Obviously, everything pertinent to the story must be left in. But how much else do we need? Only the creator of the work can truly answer that one. Some stories may only require the most rudimentary worldbuilding. Action movies and first-person shooters, for example, don’t need much more than sets and (maybe) some character motivations. A sprawling, open-world RPG has to have a bit more effort put into it. The bigger the “scale”, the more work you’d think you need.

Level of detail

But that’s not necessarily true. In computer graphics programming, there’s a concept called mipmapping. A large texture (like the outer surface of a building) takes up quite a chunk of memory. If it’s far enough away, though, it’ll only be a few pixels on the screen. That’s wasteful—and it slows the game down—so a technique was invented where smaller versions of the texture could be loaded when an object was too far away to warrant the “full-sized” graphics. As you get closer, the object’s texture is progressively changed to better and better versions, until the game engine determines that it’s worth showing the original.

The full set of these textures, from the original possibly down to a single pixel, is called a mipmap. In some cases, it’s even possible to control how much of the mipmap is used. On lower-end machines, some games can be set to use lower-resolution textures, effectively taking off the top layer or two of the mipmap. Lower resolution means less memory usage, and less processing needed for lighting and other effects. The setting for this is usually called the level of detail.
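The halving chain itself is simple enough to sketch in a few lines. This is just an illustration of the concept, not any engine’s actual API; `mipmapSizes` is a hypothetical helper name.

```javascript
// Sketch of a mipmap chain: each level halves the previous one
// (rounding down, never below 1 pixel) until a single pixel remains.
function mipmapSizes(width, height) {
  const levels = [[width, height]];
  while (width > 1 || height > 1) {
    width = Math.max(1, width >> 1);
    height = Math.max(1, height >> 1);
    levels.push([width, height]);
  }
  return levels;
}

console.log(mipmapSizes(256, 256).length); // 9 levels: 256, 128, ..., 1
```

Dropping the level-of-detail setting amounts to chopping entries off the front of that list; and since each level is a quarter the size of the one before, the whole chain costs only about a third more memory than the base texture alone.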

Okay, so that’s what happens with computer graphics, but how, you might be wondering, does that apply to worldbuilding? Well, the idea of mipmapping is a perfect analogy to what we need to do to make a semi-believable world without spending ages on it. The things that are close to the story need the most detail, while people, places, and events far away can be painted in broad, low-res strokes. Then, if future parts of the story require it, those can be “swapped out” for something more detailed.

The level of detail is another “setting” we can tweak in a fictional work. Epic fantasy cries out for great worldbuilding. Superhero movies…not so much. Books have the room to delve deeper into what makes the world tick than the cramped confines of film. Even then, there’s a limit to how much you should say. You don’t need to calculate how many people would be infected by a zombie plague in a 24-hour period in your average metropolis. But a book might want to give some vague figures, whereas a movie would just show as many extras as the producers could find that day.

Branches of the tree

The level of detail, then, is more like a hard cutoff. This is how far down any particular path you’re willing to go for the story. But you certainly don’t need to go that far for everything. You have to pick and choose, and that’s where the mipmap-like idea of “the closer you are, the more detail you get” comes in.

Again, the needs of the story, the tone you’re trying to set, and the genre and medium are all going to affect your priorities. In this way, the mipmap metaphor is exactly backwards. We want to start with the least detail. Then, in those areas you know will be important, fill in a bit more. (This can happen as you’re writing, or in the planning stages, depending on how you work.)

As an example, let’s say you’re making something like a typical “space opera” type of work. You know it’s going to be set in the galaxy at large, or a sizable fraction of it. Now, our galaxy has 100 billion stars, but you’d be crazy to worry about one percent of one percent of that. Instead, think about where the story needs to be: the galactic capital, the home of the lost ancients, and so on. You might not even have to put them on a map unless you expect their true locations to become important, and you can always add those in later.

In the same vein, what about government? Well, does it matter? If politics won’t be a central focus, then just make a note to throw in the occasional mention of the Federation/Empire/Council, and that’s that. Only once you know what you want do you have to fill in the blanks. Technology? Same thing. Technobabble about FTL or wormholes or hyperspace would make for usable filler.

Of course, the danger is that you end up tying yourself in knots. Sometimes, you can’t reconcile your broad picture with the finer details. If you’re the type of writer who plans before putting words on the page, that’s not too bad; cross out your original idea, and start over. Seat-of-the-pants writers will have a tougher time of it. My advice there is to hold off as long as feasible before committing to any firm details. Handwaving, vague statements, and the unreliable narrator can all help here, although those can make the story seem wishy-washy. It might be best to steer the story around such obstacles instead.

The basic idea does need you to think a little bit beforehand. You have to know where your story is going to know how much detail is enough. Focus on the wrong places, and you waste effort and valuable time. Set the “level of detail” dial too low, and you might end up with a shallow story in a shallower world. In graphics, mipmaps can often be created automatically. As writers, we don’t have that luxury. We have to make all those layers ourselves. Nobody ever said building a world was easy.

The problem with emoji

Emoji are everywhere these days. Those little icons like 📱 and 😁 show up on our phones, in our browsers, even on TV. In a way, they’re great. They give us a concise way to express some fairly deep concepts. Emotions are hard to sum up in words. “I’m crying tears of joy” is so much longer than 😂, especially if you’re limited to 140 characters of text.

From the programmer’s point of view, however, emoji can rightfully be considered a pox on our house. This is for a few reasons, so let’s look at each of them in turn. In general, these are in order from the most important and problematic to the least.

  1. Emoji are Unicode characters. Users can treat them as just more text, but we programmers have to make a special effort to properly support Unicode. Sure, some languages claim to handle it automatically, but deeper investigation shows the hollowness of such claims. Plain ASCII doesn’t even have room for all the accented letters used with the Latin alphabet, so we clearly need Unicode, but that doesn’t mean it’s easy to work with.

  2. Emoji are on a higher plane. The Unicode character set is divided into planes of 65,536 code points each. The first of these, running from 0x0000 to 0xFFFF, is the Basic Multilingual Plane (BMP). Every further plane is supplementary, and most emoji land in the first supplementary plane, with code points around 0x1F000. At first glance, the only problem seems to be the extra byte (in UTF-8) or two (in UTF-16) needed to represent each emoji, but…

  3. UCS-2 sucks. UCS-2 is the fixed-width predecessor to UTF-16. It’s obsolete precisely because it can’t handle the higher planes, but we still haven’t rid ourselves of it. JavaScript, among others, essentially uses UCS-2 strings, and that’s very bad news for emoji. Each one has to be encoded as a surrogate pair: two reserved code points in the BMP that are invalid on their own. That breaks finding the length of a string. It breaks string indexing. It even breaks simple parsing, because…

  4. Regular expressions can’t handle emoji. At least in present-day JavaScript, they can’t, and that’s the most used language on the web, the front-end language of the here and now. JS regexes work on UCS-2 code units, which means they don’t understand higher-plane characters. (ES2015’s “u” flag is fixing this, and there are libraries out there to mitigate the problem, but we’re still not to the point where we can count on full support.)

  5. Emoji are hard to type. This applies mostly to desktops. Yeah, people still use those, myself included. For us, typing emoji is a complicated process. Worse, it doesn’t work everywhere. I’m on Linux, and my graphical applications are split between those using GTK+ and those using Qt. The GTK+ ones allow me to type any Unicode character by pressing Ctrl+Shift+U and then the hexadecimal code point. For example, 😂 has code point 0x1F602, so I typed Ctrl+Shift+U, then 1f602, then a space to actually insert the character. Qt-based apps, on the other hand, don’t let me do this; in an impressive display of finger-pointing, Qt, KDE, and X all put the responsibility for Unicode handling on each other.
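The encoding problems in points 2 through 4 are easy to demonstrate. Here’s a minimal sketch in JavaScript, assuming an ES2015-capable engine (a current browser or Node) for codePointAt, the spread operator, and the “u” regex flag:

```javascript
// 😂 is U+1F602, outside the BMP, so UTF-16 (and UCS-2-minded APIs)
// must represent it as a surrogate pair.
const emoji = "😂";

// The pair is derived from the code point like so:
const offset = 0x1F602 - 0x10000;       // 0xF602
const high = 0xD800 + (offset >> 10);   // 0xD83D, from the top 10 bits
const low  = 0xDC00 + (offset & 0x3FF); // 0xDE02, from the bottom 10 bits
console.log(String.fromCharCode(high, low) === emoji); // true

// String operations count code units, not characters:
console.log(emoji.length);       // 2 (one character, two code units)
console.log(emoji[0] === emoji); // false (indexing splits the pair)

// Regexes have the same problem; the ES2015 "u" flag is the fix:
console.log(/^.$/.test(emoji));  // false ("." matches one code unit)
console.log(/^.$/u.test(emoji)); // true (with "u", one code point)

// ES2015 also added code-point-aware tools:
console.log(emoji.codePointAt(0).toString(16)); // "1f602"
console.log([...emoji].length);                 // 1
```

None of this is emoji-specific, by the way; any character above 0xFFFF behaves exactly the same way.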

So, yeah, emoji are a great invention for communication. But, speaking as a programmer, I can’t stand working with them. Maybe that’ll change one day. We’ll have to wait and see.

Thoughts on Vulkan

As I write this (February 17), we’re two days removed from the initial release of the Vulkan API. A lot has been written across the Internet about what this means for games, gamers, and game developers, so I thought I’d add my two cents.

I’ve been watching the progress of Vulkan with interest as both user and programmer, and on a “minority” platform (Linux). For both reasons, Vulkan should be making me ecstatic, but it really isn’t. I’m not trying to be the wet blanket here, but everything I see about Vulkan is written in such a gushing tone that I feel the need to provide a counterweight.

What is it?

First off, the rundown. Vulkan is basically the “next generation” of OpenGL. OpenGL, of course, is the 3D technology that powers everything that isn’t Windows, as well as quite a few games on Windows. Vulkan is intended to be a lower-level—and thus faster—API that achieves its speed by being closer to the metal. It’s supposed to be a better fit for the actual hardware of a GPU, rather than the higher-level state machine of OpenGL. Oh, and it’s cross-platform, unlike DirectX.

As of February 17, there’s exactly one game out there that can use Vulkan: The Talos Principle. Drivers are similarly scarce: AMD’s are alpha-quality on Windows and nonexistent on Linux; nVidia has only an old beta for Linux, though much better Windows support; and Intel is, well, Intel. Hurray for competition.

Why it’s good

The general rule in programming is that the higher in the “stack” you go, the slower you get. High-level languages like JavaScript, Python, and Ruby are all dreadfully slow compared to the lower-level C and C++, and assembly is the fastest you can get, because it’s the closest thing to the machine’s own language. The same holds for GPUs. OpenGL sits fairly high in the stack, and it shows.

Vulkan was made to sit at a lower level. It has better support for multithreaded, multicore programming. Shaders are faster. Everything about it was designed to speed things up while remaining stable and supported. In essence, the purpose is to put everyone on a level playing field everywhere except the GPU itself: to make the OS irrelevant to graphics.

That’s a good thing. I say that not only because I use Linux, not only because I’d like more games for it. I say that as someone who loves the idea of computers in general and as gaming machines. Anything that makes things better while keeping the PC open is a win for everybody. DirectX might be the best API ever invented (I’ve heard people say it is), but if you’re using something other than Windows or an Xbox, it might as well not exist. OpenGL works just about everywhere there’s graphics. If Vulkan can do the same, then there’s no question that it’s good.

Why it’s not

But it won’t. That’s the problem. Vulkan ultimately derives from AMD’s Mantle API, which was built around the same GCN architecture that powers the Xbox One and PS4 and gave those consoles a much-needed boost. Broad hardware support wasn’t exactly an afterthought, but it was never going to be Mantle’s main focus. That GCN-oriented nature probably got washed away in the transition to Vulkan, but it causes a ripple effect, meaning that…

Vulkan doesn’t work everywhere.

Yeah, I said it. Right now, it requires serious hardware support, and it’s mostly limited to the last couple of GPU generations. Intel only makes integrated graphics, and some of those can use it, but you know how that goes. For nVidia’s GTX line, you need at least a 600-series card, and then only the better ones. AMD has the widest support, as you’d expect, but it’s full of holes. On Linux, the R9 290 won’t be able to use Vulkan at all, because it uses the wrong driver (radeonsi instead of amdgpu).

And that brings me to my problem. For AMD’s integrated graphics, you need at least a Kaveri APU, because that generation is when AMD started building in the GCN support that Vulkan requires. Kaveri came out in early 2014, a mere two years ago; it was supposed to release in late 2013, but delays crept in. Since I built my current PC for Christmas 2013, I’m out of luck, unless I want to buy a new video card.

But there’s no good choice for that right now, not on Linux. Do I get something from nVidia, where I’m stuck with proprietary drivers, and I can’t even upgrade the kernel without worrying that they’ll crash? Or do I buy AMD, the same company that got me into this mess in the first place? Sure, they have better open-source drivers, but who’s to say that they’ll actually work? You can ask the 290 owners what they think about that one.

The churn

So, for now, I’m on the outside looking in when it comes to Vulkan. But I can see the benefit in that. I get to watch while all the early adopters work out the kinks.

Vulkan isn’t going to take over the world overnight, or in a month, or even a year. There are just too many computers out there that can’t use it. It’ll take some time to reach critical mass, the point where there are enough Vulkan-capable PCs to make it worthwhile to dump OpenGL. (DirectX isn’t really a factor here. It’s tied to Windows, and to a specific Windows version. I don’t care if DX12 is the Second Coming; it’s not going to make me get Windows 10.)

Game engines can start supporting Vulkan right now, and quite a few are, Valve’s Source 2 among them. As an alternate code path, an optimization used where possible, that’s fine. As a replacement for an engine’s entire OpenGL renderer? Not a chance. Not yet.

Give it some time. Give the Khronos Group a couple of versions to fix the inevitable bugs. Give the world a few years to cycle through their current underpowered or unsupported computers and GPUs. At that point, you might see Vulkan reach its full potential. 2020 seems like a nice target: four years out means a couple of generations of graphics cards, about one upgrade cycle for most people, and time for a new set of consoles. If Vulkan hasn’t taken off by then, it probably never will. But it will, eventually.

Thoughts on Haxe

Haxe is one of those languages that I’ve followed for a long time. Not only that, but it’s the rare programming language that I actually like. There aren’t too many on that list: C++, Scala, Haxe, Python 2 (but not 3!), and…that’s just about it.

(As much as I write about JavaScript, I only tolerate it because of its popularity and general usefulness. I don’t like Java for a number of reasons—I’ll do a “languages I hate” post one of these days—but it’s the only language I’ve written professionally. I like the idea of C# and TypeScript, but they both have the problem of being Microsoft-controlled. And so on.)

About the language

Anyway, back to Haxe, because I genuinely feel it’s a good programming language. First of all, it’s strongly typed, and you know my opinion on that. But it’s not so strict about typing that you can’t get things done. Haxe also has type inference, which really, really helps when you work with a strongly typed language. Save time while keeping type safety? Why not?

In essence, the Haxe language itself looks like a very fancy JavaScript. It’s got all the bells and whistles you expect from a modern language: classes, generics, object literals, array comprehensions, iterators, and so on. You know, the usual. Just like everybody else.

But there are also a few interesting features that aren’t quite as common. Pattern matching, for instance, one of my favorite things from “functional” languages. Haxe also has “static extensions”, something like C#’s extension methods, which let you add extra functionality to existing classes. Really, most of the bullet points in the Haxe manual’s “Language Features” section are pretty nifty, and most of them are in some way connected to the type system. Of all the languages I’ve used, only Scala comes close to Haxe in helping me understand the power and necessity of types.

The platform

But wait, there’s more. Haxe is cross-platform, in its own special way. Strictly speaking, there’s no native output. Instead, you choose from a list of compilation targets, most of which “transpile” Haxe code to another language: JavaScript, PHP, C++, C#, Java, or Python. Some of those can then be compiled into native binaries. There’s also the Neko VM, made by Haxe’s creator but not much used, and you can even have the compiler spit out ActionScript code or a Flash SWF. (Why you would want to is a question I can’t answer.)

The standard library provides most of what you need for app development, and haxelib is the Haxe-specific answer to NPM, CPAN, et al. A few of the available libraries are very good, like OpenFL (basically a reimplementation of the Flash API). Depending on your target platform, you might also be able to use libraries from NPM, the JVM, or .NET directly. It’s not as easy as it could be (you need an extern interface class, a bit like a TypeScript declaration), but it’s there, and externs for plenty of major libraries have already been written for you.

The verdict

Honestly, I do like Haxe. It has its warts, but it’s a solid language that takes an idea (types as the central focus) and runs with it. And it draws in features from languages like ML and Haskell that are inscrutable to us mere mortals, allowing people some of the power of those languages without the pain that comes in trying to write something usable in a functional style. Even if you only use it as a “better” JavaScript, though, it’s worth a look, especially if you’re a game developer. The Haxe world is chock full of code-based 2D game engines and libraries: HaxePunk, HaxeFlixel, and Kha are just a few.

I won’t say that Haxe is the language to use. There’s no such thing. But it’s far better than a lot of the alternatives for cross-platform development. I like it, and that’s saying a lot.