Languages I hate

Anyone who has been writing code for any length of time—anyone who isn’t limited to a single programming language—will have opinions on languages. Some are to be liked, some to be loved, and a few to be hated. Naturally, which category a specific language falls into depends on who you’re talking to. In the years I’ve been coding, I’ve seriously considered probably a dozen different languages, and I’ve glanced at half again as many. Along the way, I have seen the good and the bad. In this post, I’ll give you the bad, and why I think they belong there. (Later on, I’ll do the same for my favorite languages, of course.)

Java

Let’s get this one out of the way first. I use Java. I know it. I’ve even made money writing something in it, which is more than I can say for any other programming language. But I will say this right now: I have never met anyone who likes Java.

The original intent of Java was a language that could run “everywhere” with minimal hassle. Also, it had to be enough like C++ to get the object-oriented goodness that was in vogue in the nineties, but without all that extraneous C crap that only led to buffer overflows. So everything is an object—except for “primitive” types like, say, integers. You don’t get to play with pointers—but you can get null-pointer exceptions. In early versions of the language, there was no way to make an algorithm that worked with any type; the solution was to cast everything to Object, the root class underlying the whole system. But then they bolted on generics, in a mockery of C++ templates. They do work, except for the niggling bit called type erasure: the type parameters exist only at compile time, so at runtime everything is back to Object all over again.

And those are just some of the design decisions that make Java unbearable. There’s also the sheer verbosity of the language, a problem compounded by the tendency of new Java coders to overuse object-oriented design patterns. Factories and abstract classes have their place, but that place is not “everywhere I can put them”. Yes, that’s the fault of inexperienced programmers, but the language and its libraries (standard and 3rd-party) only reinforce the notion.

Unlike most of the other languages I hate, I have to grin and bear it with Java. It’s too widespread to ignore. Android uses it, and that’s the biggest mobile platform out there. Like it or not, Java won’t go away anytime soon. But if it’s possible, I’d rather use something like Scala.

Ruby

A few years ago, Ruby was the hipster language of choice, mostly thanks to the Rails framework. Rails was my first introduction to Ruby, and it left such a bad taste in my mouth that I went searching for something better. (Haven’t found it yet, but hope springs eternal…) Every bit of Ruby I see on the Internet only makes me that much more secure in my decision.

This one is far more subjective. Ruby just looks wrong to me, and it’s hard to explain why. Much of it is the cleverness the language encourages. Blocks and symbols are useful things, but the syntax rubs me the wrong way. The standard library methods let you write things like 3.times, which seems like it’s trying to be cute. I find it ugly, but that might be my C-style background. And then there’s the Unicode support. Ruby had to be dragged, kicking and screaming, into the modern world of string handling, and few of the reasons why had anything to do with the language itself.

Oh, and Ruby’s pitifully slow. That’s essentially by design. If any part of the code can add methods to core types like integers and strings, optimization becomes…we’ll just say non-trivial. Add in the Global Interpreter Lock (a problem Python also has), and you don’t even get to use multithreading to get some of that speed back. No wonder every single Ruby app out there needs such massive servers for so little gain.

And even though most of the hipsters have moved on, the community doesn’t seem likely to shed the cult-like image that they brought. Ruby fans, like those of every other mildly popular language, are zealous when it comes to defending their language. Like the “true” Pythonistas and those poor, deluded fools who hold up PHP as a model of simplicity, Ruby fanboys spin their language’s weaknesses into strengths.

Java is everywhere, and that helps spread out the hate. Ruby, on the other hand, is concentrated. Fortunately, that makes it easy to ignore.

Haskell

This one is like stepping into quicksand—I don’t know how far I’m going to sink, and there’s no one around to help me.

Haskell gets a lot of praise for its mathematical beauty, its almost-pure functional goodness, its concision (quicksort is only two lines!) and plenty of other things. I’ll gladly say that one Haskell application I use, Pandoc, is very good. But I would not want to develop it.

The Haskell fans will be quick to point out that I started with imperative programming, and thus I don’t understand the functional mindset. Some would even go as far as Dijkstra, saying that I could never truly appreciate the sheer beauty of the language. To them, I would say: then who can? The vast majority of programmers didn’t start with a functional programming language (unless you count JavaScript, but it still has C-like syntax, and that’s how most people are going to learn it). A language that no one can understand is a language no one can use. Isn’t that what we’re always hearing about C++?

But Haskell’s main problem, in my opinion, is its poor fit to real-world problems. Most things that programs need to do simply don’t fit the functional mold. Sure, some parts of them do, but the whole doesn’t. Input/output, random numbers, the list goes on. Real programs have state, and functional programming abhors state. Haskell’s answer to this is monads, but the only decent description of a monad I’ve ever seen had to convert it to JavaScript to make sense!

I don’t mind functional programming in itself. I think it can be useful in some cases, but it doesn’t work everywhere. Instead of a “pure” functional language, why can’t I have one that lets me use FP when I can, but switch back to something closer to how the system works when I need it? Oh, wait…

PHP

I’ll just leave this here.

Magic and tech: power

One of the great drivers of technological innovation throughout history has been the need for power. Not military power, nor electrical, but motive power, mechanical power. Long before the Industrial Revolution transformed the way we think about power, machines were invented. Simple machines, complex machines, even some that we don’t quite understand. But every machine requires an input of force to get things started.

Power

Today, we have electricity, obtained from a vast array of methods: solar energy, fossil fuels, nuclear fission, all the way down to wind and water. Many of our modern forms of power generation, however, are, well, modern. They rely on technology developed relatively recently. Man-made nuclear reactors didn’t—couldn’t—exist 80 years ago. Although the mechanism that makes solar panels work was explained by Einstein, we need present-day electronics to actually use it.

Go back not all that long ago, and you miss out on a lot of ways to generate power. Solar and nuclear are less than a century old. Coal and oil and natural gas have only been used in industrial capacities for two or three times that. For a large majority of our history, power was hard to come by, and there weren’t a lot of options. Yes, earlier generations didn’t use anywhere near as much power as we do, and they didn’t use electricity at all—except maybe in Baghdad—but you can argue cause and effect all day long. Did they not use power because they didn’t have as much of it, or did they not produce as much because they didn’t need it?

However you come down on that argument, the truth is plain to see: all the way through the Renaissance, at least, there weren’t a lot of ways to produce power. You could use human or animal power, as many cultures did. It works for travel, but also for machines that require an impetus, such as millstones, potters’ wheels, pulleys, and most other things that the people of a thousand years ago would need.

Wind and water provide a better path to power, and this was figured out some two thousand years ago. Since then, the technology has only been refined. A blowing breeze or flowing stream can spin a wheel with far less human intervention than muscle power, and they’re cheaper than beasts of burden in the long run. Even the first windmills and waterwheels, built backwards by the standards of our imagination (horizontal blades for wind and undershot wheels for water), nonetheless freed up the labor of both man and beast for other, better things.

Now with magic

This triumvirate of wind, water, and muscle was enough to get us through the ages. But what can our little bit of magic add to the mix? We’ve already seen that magical stores of energy are available to our fictional culture, and they can be used to propel a wheeled vehicle. Hook them up to any other type of wheel, and they’ll do the same thing. For a relatively small price, the people of this land have a magical alternative to wind and water. That’s not to say those won’t be used; it’s more likely that the magical means will complement them.

Even this is a huge development, but let’s see if we can do anything else before we look at how it would transform society. Most magic involves manipulating natural forces, especially fire and water and air. So why not lightning? Now, that’s not to say that mages can summon thunderbolts from the sky, any more than they can call a tidal wave or shoot fireballs from their fingertips. This is more subtle.

Static electricity is pretty easy to discover. We encounter it all the time. In the winter, it’s even worse, because the air’s drier and we tend to wear thicker clothing. I know that I cringe whenever I go to open a door this time of year, and I’m sure I’m not alone. The small shocks we get don’t have a lot of energy (on the order of millijoules), but you can ask anyone who’s ever been struck by lightning or hit with the discharge from an old CRT about the potential power of static electricity.

Electric current is a bit harder to get, but that’s where the magic comes in. As of now, it’s in its early stages, but mages have begun to store an electric charge in much the same fashion that they store mechanical power. Charging is easier, for those who know the proper lightning-element spells, and some truly massive containers can be built, resembling globe-sized versions of those plasma balls that used to be all the rage. Using the current requires some way of interfacing with the containing sphere, typically by wrapping a lightly infused bit of metal around it. This, for all intents and purposes, creates an electrode.

The first uses of this magical technology were purely medical. “Shock therapy” was briefly considered a cure-all, until it was found that it didn’t really cure much of anything. A few practical uses came out of the earliest generations: an easy spark generator, handy for starting fires (if far more expensive than sticks and rocks); a way of creating better magnets than any lodestone; electroplating metals. For a decade, the fashion among mages was to find a new and exciting way of using this captured lightning.

Then somebody figured out how to make an electric motor. This was very recently in our magical society’s history—not just within living memory, but within a generation—and it’s mostly a curiosity right now. Small electric spheres can’t provide enough current to produce a significant amount of power, and the larger versions are too costly for practical use. However, that hasn’t stopped people from trying. Some very rich individuals have contracted higher mages to develop a mill powered by this new source of energy, but no one else thinks it’s a viable replacement for the motive spheres…yet.

A few mages are traveling down a different path. Instead of trying to harness the lightning they have imprisoned for mechanical power, they are investigating the possibilities of using the electrical energy directly. They’ve made some interesting discoveries in doing this, like the fact that some materials conduct electricity, while others stop it. Small mundane devices can store tiny amounts of energy and dissipate it slowly—capacitors. And, of course, our mages are learning about the intimate connection between electricity and magnetism.

In the end, our magical society can be said to have the beginnings of electrical technology, although they came about it by a different route. As of yet, they haven’t been able to do too much with it, apart from toys, scientific experiments, and a new form of lighting that aims to be better than the old oil lamp in every way. They have, in our terms, early batteries, motors, and light filaments. Once these get out of the mage’s laboratory, they will have the same effect as their Earthly equivalents had on us.

The development of magic-powered propulsion, however, is much more of a culture shock. With the storage of mechanical energy, most repetitive labor can be automated. Looms, mills, mints, forges, nearly every aspect of medieval-style living benefits from this. The need for workers (or slaves, for that matter) has decreased severely in our fictional society’s recent times. People still need to be able to feed their families, but the unskilled masses are finding new jobs.

And they won’t remain unskilled for too long. The machines have already taken over the roles once relegated to child labor, but the children have to go somewhere. Why not school? Trade schools, whether operated by guilds or skilled craftsmen, are beginning to appear in the cities, a supply coming into existence to meet the demand. And many of these schools must teach the basics of a general education, as well.

Power to the people

Just giving the populace a way to move things can transform a people. Muscle power is very limited, and it’s tiring, even with the endurance spells we’ve already said this society has. Waterwheels need specific conditions to be productive. Not everywhere is lucky enough to have the sustained winds to make that form of power practical. But magical power levels the playing field.

Historically, the increase of power with technology has had the immediate effect of giving the affected segment of the population more time to spend not working. They naturally find ways to fill those gaps. Art, hobbies, education—the same things we do in our free time. Some of those spare-time activities end up becoming full-time jobs of their own, and so the cycle continues.

But it’s a positive feedback cycle. Each time the power available to a society increases, that’s that much less work that has to be done by its people. As we know, the less time you spend doing what you have to do, the more time you get to do the things you want to do. Greater power, then, leads to a higher standard of living, even if it’s hard to see the tangible benefits.

Sound changes: consonants

Languages change all the time. Words, of course, are the most obvious illustration of this, especially when we look at slang and such. Grammar, by contrast, tends to be a bit more static, but not wholly so; English used to have noun case, but it no longer does.

The sounds of a language fall into a middle ground. New words are invented all the time, while old ones fall out of fashion, but the phonemes that make up those words take a longer time to change. This does, however, occur more often than wholesale grammatical alterations. (In fact, sound change can lead to changes in grammar, but it’s hard to see how the opposite can happen.)

This brief miniseries will detail some of the main ways sounds can change in a language. The idea is to give you, the conlanger, a new tool for making naturalistic languages. I won’t be covering everything here—I don’t have time for that, nor do you. Examples will be necessarily brief. The Index Diachronica is a massive catalog of sound changes that have occurred in real-world languages, and it’s a good resource for conlangers looking for this sort of thing.

Consonants

We’ll start by looking at some of the main sound changes that can happen to consonants. Yes, some effects are equally valid for consonants and vowels, but I had to divide this up somehow.

Lenition

Lenition is one of the most common sound changes. Basically, it’s a kind of “weakening” of a consonant into another. Stops can weaken into affricates or fricatives, for instance; German did this after English and its relatives broke away, hence “white” versus weiß. Another word is “father”, which shows two examples of this—compare it to Latin pater, which isn’t too far off from the ancestral form. (Interestingly, you can even say that “lenition” itself is a victim.)

Fricatives can weaken further into approximants (or even flaps or taps): one such change, of /s/ to /h/, happened early on in Greek, hence “heptagon”, using the Greek-derived root “hepta-”. Latin didn’t take this particular route, giving us “September” from Latin septem “seven”.

Approximants don’t really have anywhere to go. They’re already weak enough as it is. The only place for them to go is away, and that sometimes happens, a process called elision. Other sounds can be elided, but the approximants are the most prone to it. In English, for instance, we’ve lost /h/ (and older /x/) in a lot of places. (“im” for “him” is just the same process continuing in the present day.)

Lenition and elision tend to happen in two main places: between vowels and at the end of a word. Those aren’t the only places, however.

Assimilation

Assimilation is when a sound becomes more like another. This can happen with any pair of phonemes, but consonants are more susceptible, if only because they’re more likely to be adjacent.

Most assimilation involves voicing or the point of articulation. In other words, an unvoiced sound next to a voiced one is an unstable situation, as is a cluster like /kf/. Humans are lazy, it seems, and they want to talk with the least effort possible. Thus, disparate sequences of sounds like /bs/ or /mg/ tend to become more homogenized. (Good examples in English are all those Latin borrowings where ad- shows up as “al-” or “as-”, like “assimilation”.)

Obviously, there are a few ways this can play out. Either sound can be the one to change—/bs/ can end up as /ps/ or /bz/—but it tends to be the leading phoneme that gets altered. How it changes is another factor, and this depends on the language. If the two sounds are different in voicing, then that’ll likely shift first. If they’re at different parts of the vocal tract, then the one that changes will slide towards the other. Thus, /bs/ will probably come out as /ps/, while /mg/ ends up as /ŋg/.

Assimilation is also one way to get rid of consonant clusters. Some of the consonants will assimilate, and then one of them will disappear. Or maybe it won’t, and the cluster becomes a geminate instead, as in Italian.

Metathesis

Anyone who’s ever heard the word “ask” pronounced as “ax” can identify metathesis, the rearranging of sounds. This can happen just about anywhere, but it often seems to occur with sound sequences that are relatively uncommon in a language, like the /sk/ cluster in English.

This one isn’t quite as systematic in English, but other languages do have regular metathesis sound changes. Spanish often swapped /l/ and /r/, for example, sometimes in different syllables. One common thread that crosses linguistic barriers involves the sonority hierarchy. A cluster like /dn/ is more likely to turn into /nd/ than the other way around.

Palatalization, etc.

Any of the “secondary” characteristics of a consonant can be changed. Consonants can be palatalized, labialized, velarized, glottalized, and so on. This usually happens because they’re next to a sound that displays one of those properties. It’s like assimilation, in a way.

Palatalization appears to be the most common of these, often affecting consonants adjacent to a front vowel. (/i/ is the likely culprit, but /e/ and /y/ work, too.) Labialization sometimes happens around back rounded vowels like /u/. Glottal stops, naturally, tend to cause glottalization, etc. Often, the affecting sound will disappear after it does its work.

Dissimilation

Dissimilation is the opposite of assimilation: it makes sounds more different. This can occur in response to a kind of phonological confusion, but it doesn’t seem to be very common as a regular process. Words like “colonel” (pronounced as “kernel”) show dissimilation in English, and examples can be found in many other languages.

Even more…

There are a lot of possible sound changes we haven’t covered, and that’s just in the consonants! Most of the other ways consonants can evolve are much rarer, however. Fortition, for example, is the opposite of lenition, but instances of it are vastly outnumbered by those of the latter.

Vowels present yet more opportunities to change up the sound of a language, and we’ll see them next week. Then, we’ll wrap up the series by looking at all the other ways the sound of a word can change over time.

Software internals: Strings

A string, as just about any programmer knows, is a bit of text, a sequence of characters. Most languages have some built-in notion of strings, usually as a fundamental data type on par with integers. A few older programming languages, including C, don’t have a separate “string” type, but they still have strings. Even many assemblers allow you to define strings in your assembly language code, though you’re left to deal with them yourself.

The early string

At its heart, a string really isn’t much more than a bunch of characters. It’s a sequence, like an array. Indeed, that’s one way of “making” strings: stuff some characters into an array that’s big enough to hold them. Very old code often did exactly that, especially with strings whose contents were known ahead of time. And there are plenty of places in modern C code where text is read into a buffer—nothing more than an array—before it is turned into a string. (This usually leads to buffer overflows, but that’s not the point.)

Once you actually need to start working with strings, you’ll want something better. Historically, there were two main schools of thought on a “better” way of representing strings. Pascal went with a “length-prefixed” data structure, where an integer representing the number of characters in the string was followed by the contents. For example, "Hi!" as a Pascal string might be listed in memory as the hexadecimal 03 48 69 21. Of course, this necessarily limits the length of a string to 255, the highest possible value of a byte. We could make the length field 16 bits (03 00 48 69 21 on a little-endian x86 system), bringing that to 65535, but at the cost of making every string a byte longer. Today, in the era of terabyte disks and gigs of memory, that’s a fair trade; not so in older times.

But Pascal was intended more for education and computer science than for run-of-the-mill software development. On the other side of the fence, C took a different approach: the null-terminated string. C’s strings aren’t their own type, but an array of characters ending with a null (00) byte. Thus, our example in C becomes 48 69 21 00.

Which style of string is better is still debated today, although modern languages typically don’t use a pure form of either of them. Pascal strings have the advantage of easily finding the length (it’s right there!), while C’s strlen has to count characters. C strings also can’t have embedded null bytes, because all the standard functions treat the first null they find as the end of the string. On the other hand, a few algorithms are easier with null-terminated strings, they can be as long as you like, and they’re faster if you never need the length.
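
To make the trade-off concrete, here’s a minimal C sketch of both layouts. The pstring struct is a hypothetical stand-in for Pascal’s built-in type, not any real compiler’s layout:

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* A Pascal-style string: a length byte, then the characters.
   (An illustrative layout, not any actual compiler's.) */
struct pstring {
    uint8_t len;        /* one byte caps the length at 255 */
    char    data[255];
};

int main(void) {
    struct pstring p = { 3, "Hi!" };    /* in memory: 03 48 69 21 */
    const char *c = "Hi!";              /* in memory: 48 69 21 00 */

    printf("%d\n", p.len);        /* O(1): the length is right there */
    printf("%zu\n", strlen(c));   /* O(n): strlen walks to the null byte */
    return 0;
}
```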

In modern times

In today’s languages, the exact format of a string doesn’t matter. What you see as the programmer is the interface. Most of the time, that interface is similar to an array, except with a few added functions for comparison and the like. In something like C#, you can’t really make your own string type, nor would you want to. But it’s helpful to know just how these things are implemented, so you’ll know their strengths and weaknesses.

Since everything ultimately has to communicate with something written in C, there’s probably a conversion to a C-style string somewhere in the bowels of any language. That doesn’t mean it’s what the language works with, though. A Pascal-like data structure is perfectly usable internally, and it’s possible to use a “hybrid” approach.

Small strings are a little special, too. As computers have gotten more powerful, and their buses and registers have grown wider, there’s now the possibility that strings of a few characters can be loaded in a single memory access. Some string libraries use this to their advantage, keeping a “small” string in an internal buffer. Once the string becomes bigger than a pointer (8 bytes on a 64-bit system), putting it in dynamic memory is a better deal, space-wise. (Cache concerns can push the threshold of this “small string optimization” up a bit.)
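
As a rough sketch of that idea in C: a string object big enough to hold either a pointer to heap memory or a handful of characters inline. The 16-byte buffer and the field names here are illustrative assumptions, not any particular library’s layout:

```c
#include <stddef.h>

/* Small string optimization, sketched: short strings live inside
   the object itself; only longer ones pay for a heap allocation. */
struct sso_string {
    size_t len;
    union {
        char  small[16];   /* strings under 16 bytes stay right here */
        char *heap;        /* anything longer lives in malloc'd memory */
    } u;
};

/* Readers don't care where the bytes live, only how to reach them. */
static const char *sso_chars(const struct sso_string *s) {
    return (s->len < sizeof s->u.small) ? s->u.small : s->u.heap;
}
```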

There are also a few algorithms and optimizations that string libraries can use internally to speed things up. “Copy-on-write” means just that: a new copy of a string isn’t created until there’s a change. Otherwise, two variables can point to the same memory location. The string’s contents are the same, so why bother taking up space with exact copies? This also works for “static” strings whose text is fixed; Java, for one, is very aggressive in eliminating duplicates.
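
Copy-on-write is easy to sketch with a reference count. This toy version (all names are mine, not any library’s API) shares one buffer until a writer shows up:

```c
#include <stdlib.h>
#include <string.h>

/* A toy copy-on-write string: copies share a buffer and a count. */
struct cow_buf { int refs; char *text; };
struct cow_str { struct cow_buf *buf; };

/* "Copying" allocates nothing: both handles point at one buffer. */
struct cow_str cow_copy(struct cow_str s) {
    s.buf->refs++;
    return s;
}

/* Called before any modification: if the buffer is shared, split
   off a private copy, so the other handles never see the change. */
void cow_make_unique(struct cow_str *s) {
    if (s->buf->refs > 1) {
        struct cow_buf *mine = malloc(sizeof *mine);
        mine->refs = 1;
        mine->text = strdup(s->buf->text);
        s->buf->refs--;
        s->buf = mine;
    }
}
```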

UTF?

Nowadays, there’s a big problem treating strings as nothing more than an array of characters. That problem is Unicode. Of course, Unicode is a necessary evil, and it’s a whole lot better than the mess of mutually incompatible solutions for international text that we used to have. (“Used to”? Ha!) But Unicode makes string handling exponentially harder, particularly for C-style strings, because it breaks a fundamental assumption: one byte equals one character.

Since the world’s scripts together have far more characters than the 256 values a single byte can distinguish, we have to do something. So we have two options. One is a fixed-size encoding, where each character—or code point—takes the same amount of space. Basically, it’s ASCII extended to more bits per character. UTF-32 does this, at the huge expense of making every code point 4 bytes. Under this scheme, any plain ASCII string is inflated to four times its original size.

The alternative is variable-length encoding, as in UTF-8. Here, part of the “space” in the storage unit (byte for UTF-8, 2 bytes for UTF-16) is reserved to mark a “continuation”. For example, the character ë has the Unicode code point U+00EB. In UTF-8, that becomes C3 AB. The simple fact of the first byte being greater than 7F (decimal 127) marks this as a non-ASCII character, and the other bits determine how many “extended” bytes we need. In UTF-32, by contrast, ë comes out as 000000EB, twice as big.
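
Here’s that encoding as a few lines of C, covering just the one- and two-byte cases described above; real UTF-8 continues the same pattern up to four bytes:

```c
#include <stdio.h>
#include <stdint.h>

/* Encode a code point below U+0800 as UTF-8: one byte for ASCII,
   two bytes (110xxxxx 10xxxxxx) for everything else in that range. */
static int utf8_encode2(uint32_t cp, unsigned char out[2]) {
    if (cp < 0x80) {
        out[0] = (unsigned char)cp;       /* ASCII passes through as-is */
        return 1;
    }
    out[0] = (unsigned char)(0xC0 | (cp >> 6));   /* leading byte */
    out[1] = (unsigned char)(0x80 | (cp & 0x3F)); /* continuation */
    return 2;
}

int main(void) {
    unsigned char buf[2];
    int n = utf8_encode2(0x00EB, buf);    /* ë is U+00EB */
    for (int i = 0; i < n; i++)
        printf("%02X ", buf[i]);          /* prints: C3 AB */
    putchar('\n');
    return 0;
}
```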

The rules for handling Unicode strings are complex and unintuitive. Once you add in combining diacritics, the variety of spaces, and all the other esoterica, Unicode becomes far harder than you can imagine. And users of high-level, strings-are-black-boxes languages aren’t immune. JavaScript, for instance, uses UCS-2, a 16-bit fixed-width encoding. Until very recently, if you wanted to work with “high plane” characters—including emoji—you had some tough times ahead. So there’s still the possibility, in 2016, that you might need to know the internals of how strings work.

On ancient artifacts

I’ve been thinking about this subject for some time, but it was only after reading this article (and the ones linked there) that I decided it would make a good post. The article is about a new kind of data storage, created by femtosecond laser bursts into fused quartz. In other words, as the researchers helpfully put it, memory crystals. They say that these bits of glass can last (for all practical purposes) indefinitely.

A common trope in fiction, especially near-future sci-fi, is the mysterious artifact left behind by an ancient, yet unbelievably advanced, civilization. Whether it’s stargates in Egypt, monoliths on Europa, or the Prothean archives on Mars, the idea is always the same: some lost race left their knowledge, their records, or their technology, and we are the ones to rediscover them. I’m even guilty of it; my current writing project is a semi-fantasy novel revolving around the same concept.

It’s easy enough to say that an ancient advanced artifact exists in a story. Making it fit is altogether different, particularly if you’re in the business of harder science fiction. Most people will skim over the details, but there will always be the sticklers who point out that your clever idea is, in fact, physically impossible. But let’s see what we can do about that. Let’s see how much we can give the people a hundred, thousand, or even million years in the future.

Built to last

If your computer is anything like mine, it might last a decade. Two, if you’re lucky. Cell phone? They’re all but made to break every couple of years. Writable CDs and DVDs may be able to stand up to a generation or two of wear, and flash memory is too new to really know. In our modern world of convenience, disposability, and frugality, long-lasting goods aren’t popular. We buy the cheap consumer models, not the high-end or mil-spec stuff. When something can become obsolete the moment you open it, that’s not even all that unwise. Something that has to survive the rigors of the world, though, needs to be built to a higher standard.

For most of our modern technology, it’s just plain too early to tell how long it can really last. An LED might be rated for 11,000 hours, a hard drive for 100,000, but that’s all statistics. Anything can break tomorrow, or outlive its owner. Even in one of the most extreme environments we can reach, life expectancy is impossible to guess. Opportunity landed on Mars in 2004, and it was expected to last 90 days; over a decade later, it’s still going.

But there’s a difference between surviving a very long time and being designed to. To make something that will survive untold years, you have to know what you’re doing. Assuming money and energy are effectively unlimited—a fair assumption for a super-advanced civilization—some amazing things can be achieved, but they won’t be making iPhones.

Material things

Many things that we use as building materials are prone to decay. In a lot of cases, that’s a feature, not a bug, but making long-term time capsules isn’t one of those cases. Here, decay, decomposition, collapse, and chemical alteration are all very bad things. So most plastics are out, as are wood and other biological products—unless, of course, you’re using some sort of cryogenics. Crossing off all organics might be casting too wide a net, but not by much.

We can look to archaeology for a bit of guidance here. Stone stands the test of time in larger structures, especially in the proper climate. The same goes for (some) metal and glass, and we know that clay tablets can survive millennia. Given proper storage, many of these materials easily get you a thousand years or more of use. Conveniently, most of them are good for data, too, whether that’s in the form of cuneiform tablets or nanoscale fused quartz.

Any artifact made to stand the test of time is going to be made out of something that lasts. That goes for all of its parts, not just the core structure. The longer something needs to last, the simpler it must be, because every additional complexity is one more potential point of failure.

Power

Some artifacts might need to be powered, and that presents a seemingly insurmountable problem. Long-term storage of power is very, very hard right now. Batteries won’t cut it; most of them are lucky to last ten years. For centuries or longer, we have to have something better.

There aren’t a lot of options here. Supercapacitors aren’t that much better than batteries in this regard. Most of the other options for energy storage require complex machinery, and “complex” here should be read as “failure-prone”.

One possibility that seems promising is a radioisotope thermoelectric generator (RTG), like NASA uses in space probes. These use the heat of radioactive decay to create electricity, and they work as long as there’s radioactivity in the material you’re using. They’re high-tech, but they don’t require too much in the way of peripheral complexity. They can work, but there’s a trade-off: the longer the RTG needs to run, the less power you’ll get out of it. Few isotopes fit into that sweet spot of half-life and decay energy to make them worthwhile.

Well, if we can’t store the energy we need, can we store a way to make it? As blueprints, it’s easy, but then you’re dependent on the level of technology of those who find the artifact. Almost anything else, however, runs into the complexity problem. There are some promising leads in solar panels that might work, but it’s too early to say how long they would last. Your best bet might actually be a hand crank!

Knowledge

One of the big reasons for an artifact to exist is to provide a cache of knowledge for future generations. If that’s all you need, then you don’t have to worry too much about technology. The fused-quartz glass isn’t that bad an option. If nothing else, it might inspire the discoverers to invent a way to read it. What knowledge to include then becomes the important question.

Scale is the key. What’s the difference between the “knowers” and the “finders”? If it’s too great, the artifact may need to include lots and lots of bootstrapping information. Imagine sending a sort of inverse time capsule to, say, a thousand years ago. (For the sake of argument, we’ll assume you also provide a way to read the data.) People in 1016 aren’t going to understand digital electronics, or the internal combustion engine, or even modern English. Not only do you need to put in the knowledge you want them to have, you also have to provide the knowledge to get them to where it would be usable. A few groups are working on ways to do this whole bootstrap process for potential communication with an alien race, and their work might come in handy here.

Deep time

The longer something must survive, the more likely it won’t. There are just too many variables, too many things we can’t control. This is even more true once you get seriously far into the future. That’s the “ancient aliens” option, and it’s one of the hardest to make work.

The Earth is like a living thing. It moves, it shifts, it convulses. The plates of the crust slide around, and the continents are not fixed in place. The climate changes over the millennia, from Ice Age to warm period and back. Seas rise and fall, rivers change course, and mountains erode. The chances of an artifact surviving on the surface of our world for a million years are quite remote.

On other bodies, it’s hit or miss, almost literally. Most asteroids and moons are geologically dead, and thus fairly safe over these unfathomable timescales, but there’s always the minute possibility of a direct impact. A few unearthly places (Mars and Titan come to mind) have enough in the way of weather to present problems like those on Earth, but the majority of solid rock in the solar system is usable in some fashion.

Deep space, you might think, would be the perfect place for an ancient artifact. If it’s big enough, you could even disguise it as an asteroid or moon. However, space is a hostile place. It’s full of radiation and micrometeorites, both of which could affect an artifact. Voyager 2 has its golden record, but how long will it survive? In theory, forever. In practice, it’ll get hit eventually. Maybe not for a million years, but you never know.

Summing up

Ancient artifacts, whether from aliens or a lost race of humans, work well as a plot device in many stories. Most of the time, you don’t have to worry about how they’re made or how they survived for so long. But when you do, it helps to think about what’s needed to make something like an artifact. In modern times, we’re starting to make some things like this. Voyager 2, the Svalbard Global Seed Vault, and other things can act, in a sense, as our legacy. Ten thousand years from now, no matter what happens, they’ll likely still be around. What else will be?

Let’s make a language – Part 13b: Numerals (Conlangs)

For the first time in this series, not only will we be able to treat Isian and Ardari in the same post, but we’ll actually look at them at the same time. We can do this thanks to the similarity in the way they treat numerals. Sure, there are differences, and we’ll see those as we go, but the highlights don’t change that much from the “simple” Isian to the “complicated” Ardari.

The numerals

First off, both conlangs use a decimal system, like most languages in common use today. Both are based around the number ten, but in slightly different ways. Ardari is a more “pure” decimal language, although it has a little bit of vigesimal contamination; Isian, on the other hand, likes to work with hundreds for larger numbers. Although that may sound odd, think about how we do it in English: a million is a thousand thousands, a billion a thousand millions, and so on.

Before we get to the meaty grammar bits, here’s a table of numeral words in both conlangs. It shows all numerals up to twenty, all the multiples of ten up to a hundred, and a few selections to illustrate the numbers in between.

Number Isian Ardari
1 yan jan
2 naw wegh
3 choy dwas
4 khas fèll
5 gen nibys
6 hod sald
7 sowad chiz
8 nicul ghòt
9 pir ang
10 pol kyän
11 poloyan vänja
12 polonaw braj
13 polochoy kyävidas
14 polokhas kyävèll
15 pologen kyuni
16 polohod kyävisald
17 polosowad kyävichiz
18 polonicul kyävijòt
19 polopir kyäveng
20 nopolic darand
21 nopoloyan darandvi jan
22 nopolonaw darandvi wegh
30 choypolic dwaskyän
33 choypolochoy dwaskyänvi dwas
40 khaspolic wedarand
50 gempolic byskyän
60 hobolic dwasrand
70 subolic chiskyän
80 nilpolic fèldarand
90 pirpolic änkyän
100 cambor grus

In both languages, the default form of a numeral is as an adjective. For Ardari, this requires adjective inflection for the first four, including changing for the gender of their head nouns. On the Isian side, every number but yan “one” will have a plural head noun, but there is otherwise nothing to worry about.

We can use numerals directly as nouns in Ardari, just like any adjective, but we can’t in Isian, since it doesn’t allow adjectives without head nouns. Instead, we can use the “dummy” noun at: naw at “two things”. (For “one”, we’d use the singular yan a.)

Creating higher numbers in Ardari is, surprisingly, fairly straightforward. As you can see in the table above, numbers like 21 are constructed using the linking conjunction -vi, which appears on everything but the last noun or adjective in the phrase. Thus, darandvi jan is literally “twenty and one”. This pattern extends throughout the system: 123 is grusvi darandvi dwas.

In Isian, things get a little hairier. Up to 99, you take the “tens” numeral, strip off the final -ic, add on a linking -o-, and add the “ones” numeral: nopolic “twenty” plus yan “one” equals nopoloyan “twenty-one”. Past that, you have to make a phrase like polopir cambor at wa nilpolochoy “1,983”, but this takes you all the way to 9,999.
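
Since that rule is purely mechanical, a few lines of C can apply it. The function name is my own invention; the words come straight from the table above:

```c
#include <stdio.h>
#include <string.h>

/* Build an Isian numeral from 21-99: strip the final "-ic" from
   the tens word, join with "-o-", and append the ones word. */
static void isian_compound(const char *tens, const char *ones,
                           char *out, size_t outsize) {
    int stem = (int)(strlen(tens) - 2);        /* drop the "-ic" */
    snprintf(out, outsize, "%.*so%s", stem, tens, ones);
}

int main(void) {
    char word[64];
    isian_compound("nopolic", "yan", word, sizeof word);
    printf("%s\n", word);    /* nopoloyan, "twenty-one" */
    isian_compound("choypolic", "choy", word, sizeof word);
    printf("%s\n", word);    /* choypolochoy, "thirty-three" */
    return 0;
}
```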

For positively huge numbers, you need more numerals. Isian has two native higher powers: jagor “ten thousand” and ilicor “million”, which can be used just like cambor “hundred”. As an example, the large number 1,048,576 would be represented in Isian by the mouthful ilicor at wa khas jagor at wa nilpologen cambor at wa subolohod. Yes, our way looks more compact, but imagine writing it out.

Ardari instead has separate words for each power of ten up to a million: ulyad “thousand”, minyir “ten thousand”, ovòd “hundred thousand”, and akrèz “million”; these can be “stacked” into a -vi phrase with the others. Our same example in the paragraph above, 1,048,576, then becomes akrèzvi fèll minyirvi ghòt ulyadvi nibys grusvi chiskyänvi sald. (As a shorter alternative, one can simply recite the digits in order, putting yvi before the last: jan zu fèll ghòt nibys chiz yvi sald.)

That last example shows the Ardari word for zero, zu. Isian has one, too: anca. However, it has an added wrinkle in that it doesn’t work the same way as the other numerals. To say “zero” as a noun, instead of using anca at “zero things”, you say anocal, the Isian word for “nothing”.

Our number is up

That’s all there is to it for counting numerals in our conlangs. They’re fairly simple, mostly because I stuck to a decimal number system. If you want to use something more “exotic”, like base-12, well, have fun with that. I’ve tried, and it’s a lot harder than it looks. Still, the “dozenal” people don’t seem to mind. Also, there’s a lot of grammar stuff I could have added, and we haven’t covered ordinal numbers, but those can come later. We can count in our languages now, and that’s good enough for the time being.

Thoughts on Vulkan

As I write this (February 17), we’re two days removed from the initial release of the Vulkan API. A lot has been written across the Internet about what this means for games, gamers, and game developers, so I thought I’d add my two cents.

I’ve been watching the progress of Vulkan with interest as both user and programmer, and on a “minority” platform (Linux). For both reasons, Vulkan should be making me ecstatic, but it really isn’t. I’m not trying to be the wet blanket here, but everything I see about Vulkan is written in such a gushing tone that I feel the need to provide a counterweight.

What is it?

First off, the rundown. Vulkan is basically the “next generation” of OpenGL. OpenGL, of course, is the 3D technology that powers everything that isn’t Windows, as well as quite a few games on Windows. Vulkan is intended to be a lower-level—and thus faster—API that achieves its speed by being closer to the metal. It’s supposed to be a better fit for the actual hardware of a GPU, rather than the higher-level state machine of OpenGL. Oh, and it’s cross-platform, unlike DirectX.

As of 2/17, there’s only one game out there that can use Vulkan: The Talos Principle. Drivers are similarly scarce. AMD’s are alpha-quality on Windows and nonexistent on Linux, nVidia only has an old beta for Linux, but much better Windows support, and Intel is, well, Intel. Hurray for competition.

Why it’s good

The general rule in programming is that the higher in the “stack” you go, the slower you get. High-level languages like JavaScript, Python, and Ruby are all dreadfully slow when compared to the lower-level C and C++. And assembly is the fastest of all, because it’s the closest thing to the machine’s own language. For GPUs, the same thing is true. OpenGL is fairly high up in the stack, and it shows.

Vulkan was made to fit in at a lower level. It has better support for multithreaded, multicore programming. Shaders are faster. Everything about it was made to speed things up while remaining stable and supported. In essence, the purpose is to put everyone on a level playing field everywhere except the GPU. To make the OS irrelevant to graphics.

That’s a good thing. I say that not only because I use Linux, not only because I’d like more games for it. I say that as someone who loves the idea of computers in general and as gaming machines. Anything that makes things better while keeping the PC open is a win for everybody. DirectX might be the best API ever invented (I’ve heard people say it is), but if you’re using something other than Windows or an Xbox, it might as well not exist. OpenGL works just about everywhere there’s graphics. If Vulkan can do the same, then there’s no question that it’s good.

Why it’s not

But it won’t. That’s the problem. Vulkan ultimately derives from AMD’s Mantle API, which was mostly made for the Xbox One and PS4, to give them a much-needed power boost. The PC wasn’t exactly an afterthought, but it doesn’t seem like it was ever going to be the main focus of Mantle. Now, that console-oriented nature probably got washed away in the transition to Vulkan, but it causes a ripple effect, meaning that…

Vulkan doesn’t work everywhere.

Yeah, I said it. Currently, it requires some serious hardware support, and it’s mostly limited to the latest couple of generations of GPU. Intel only makes integrated graphics, and some of those can use it, but you know how that goes. For the GTX line, you need at least a 6-series, and then only the best of them. AMD has the widest support, as you’d expect, but it’s full of holes. On Linux, the R9 290 won’t be able to use Vulkan, because it uses the wrong driver (radeonsi instead of amdgpu).

And that brings me to my problem. For AMD’s APU integrated graphics, you have to have at least the Kaveri chipset, because that’s when they started putting in the GCN stuff that Vulkan requires. Kaveri came out in early 2014, a mere two years ago. It was supposed to release in late 2013, but delays crept in. Since I built my current PC for Christmas 2013, I’m out of luck, unless I want to buy a new video card.

But there’s no good choice for that right now, not on Linux. Do I get something from nVidia, where I’m stuck with proprietary drivers, and I can’t even upgrade the kernel without worrying that they’ll crash? Or do I buy AMD, the same company that got me into this mess in the first place? Sure, they have better open-source drivers, but who’s to say that they’ll actually work? You can ask the 290 owners what they think about that one.

The churn

So, for now, I’m on the outside looking in when it comes to Vulkan. But I can see the benefit in that. I get to watch while all the early adopters work out the kinks.

Vulkan isn’t going to take over the world in a night, or a month, or even a year. There are just too many people out there with computers that can’t use it. It’ll take some time before that critical mass is reached, when there are enough Vulkan-capable PCs out there to make it worthwhile to dump OpenGL. (DirectX isn’t really a factor here. It’s tied to Windows, and to a specific Windows version. I don’t care if DX12 is the Second Coming, it’s not going to make me get Windows 10.)

Game engines can start supporting Vulkan right now. Quite a few of them are, like Valve’s Source Engine. As an alternate code path, as an optimization used if possible, it’s fine. As a replacement for the OpenGL rendering system of an engine? Not a chance. Not yet.

Give it some time. Give the Khronos Group a couple of versions to fix the inevitable bugs. Give the world a few years to cycle through their current—underpowered or unsupported—computers or GPUs. When we get to that point, you might be able to see Vulkan reach its full potential. 2020 is a nice year, I think. It’s four years into the future, so that’s a couple of generations of graphics cards, about one upgrade cycle for most people, and time for a new set of consoles. If Vulkan hasn’t taken off by then, it probably never will. But it will, eventually.

Leap Day special

Do you know what today is? Yesterday was February 28, so some badly-written programs might think it’s the 1st of March, but it’s not. Yep, it’s every programmer’s worst nightmare: Leap Day. You’re not getting a “code” post on Monday, and it’s not because I can’t read a calendar. Today’s special, so this is a special post.

Dates are hard. There’s no getting around that. Every part of our calendar seems like it was made specifically to drive programmers insane. Most years have 365 days, but every fourth one has 366. Well, unless it’s a century year, then it’s back to 365. Except every 400 years, like in the year 2000—those are leap years again. Y2K wasn’t as bad as it could’ve been, true, but there were quite a few hiccups. (Thanks to JavaScript weirdness, those never really stopped. Long after millennial fever died down, I saw a website reporting the current year as “19108”!) But 2000 was a leap year, and that surprised older software almost as much as actually being in the year 2000.
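
The rule sounds convoluted, but it’s only three tests, checked in the right order. A quick C version:

```c
#include <stdbool.h>
#include <stdio.h>

/* Gregorian leap years: every 4th year, except century years,
   except every 400th year. Order matters: test 400 before 100. */
static bool is_leap_year(int year) {
    if (year % 400 == 0) return true;    /* 2000 was a leap year */
    if (year % 100 == 0) return false;   /* 1900 was not         */
    return year % 4 == 0;                /* 2016 is, 2015 isn't  */
}

int main(void) {
    int years[] = { 1900, 2000, 2015, 2016 };
    for (int i = 0; i < 4; i++)
        printf("%d: %s\n", years[i],
               is_leap_year(years[i]) ? "leap" : "not leap");
    return 0;
}
```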

It gets worse. How long is a month? The answer: it depends. Weeks are always 7 days, thankfully, but you can’t divide 365 or 366 into 7 without a remainder. You’ll always have one or two days left over. And a day isn’t necessarily 24 hours, thanks to DST. Topping it all off, you can’t even assume that there are 60 seconds in a minute, because leap seconds. (That one is subject to change, supposedly. I’ll believe it when I see it.)

That’s just for the calendar we use today in the Western world. Add in everything else involving dates, and you have a recipe for disaster. The days in a month are numbered consecutively, right? Wrong! If you’re using, say, the Jewish calendar, you can’t even guarantee that two years have the same number of months. The Islamic calendar once depended on the sighting of the moon. The Maya had a calendar with 5 or 6 days that weren’t part of any month. Even our own Gregorian calendar doesn’t have a year zero. At some point, you’re left wondering how society itself has made it this far.

I’ll leave it to sites like The Daily WTF for illustrated examples of date handling gone wrong. God knows there are enough to choose from. (Dates in Java are especially horrendous, I can say from experience.) Honestly, I’d have to say that it’s more amazing to see date handling done right. By “right”, I mean handling all the corner cases: leap seconds, leap years, week numbers, time zones, DST adjustments, etc. ISO has a standard for dates, but…good luck with that.

So don’t be surprised to see a few things break today. And if you’re not a programmer, don’t feel bad. Sometimes, it feels like we can’t figure this out any better than you. That’s never more true than on this day.

Let’s make a language – Part 13a: Numerals (Intro)

After learning how to speak, counting is one of the first things children tend to figure out, for obvious reasons. And language is set up to facilitate learning how to count, simply because it’s such an important part of our existence as human beings. The familiar “one, two, three” of English has its counterparts around the world, though each language has its own way of using them.

These numerals will be our focus today. (Note that we can’t really call them numbers in a linguistic context, because we’re already using the term “number” for the singular/plural distinction.) Specifically, we’ll look at how different languages count with their numerals; in math terms, these will be the cardinal numbers. In a later post, we can add in the ordinal numbers (like “first” and “third”), fractions, quantities, measurements, and all that other good stuff. For now, let’s talk about counting.

Oh, and since numerals lie at a kind of intersection of linguistics and mathematics, it’ll help if you’re familiar with a few concepts from math. While we won’t be going into things like positional number systems—I’ll save that for a post about writing systems, far into the future—the concept of powers will be important. More information shouldn’t be that hard to find on the Internet, so I’ll leave that in your capable hands.

Count the ways

How a language counts is highly dependent on its culture. Remember that counting and numeral words predate by far the invention of writing. Now think about how you can count if you can’t write. One of the best ways is by using parts of your body. After all, it’s always with you, unlike a collection of stones or some other preliterate method. Thus, bodily terms often pop up in the context of numerals.

In fact, that’s one of the simplest methods of creating numerals: just start numbering parts of your body. A few languages from Pacific islands still use this today, and it’s entirely possible that this is how the ancestors of today’s languages did it. Words for the fingers of one hand usually cover 1-4, with the thumb standing for 5. After that, it depends on the language. Six could be represented by the word for the palm or wrist, and larger numbers by points further up the arm. In this way, you can continue down the opposite arm, to its hand, and then on to the rest of the body.

Once you need to work with larger numbers, however, you’ll want a better way of creating them. The “pointing” method is inefficient—you need to remember each point on the body in order—and there are only so many body parts. This is fine for a hunter-gatherer society, and many of those have a very small selection of numerals (anywhere from one to five), using a word for “many” for anything higher. But we “advanced” peoples do need to refer to greater quantities. The solution, then, is to use a smaller set of numerals and construct larger ones from that. That’s how we do it in English: “twenty-five” is nothing more than “twenty” plus “five”.

For our language, the key number is 10. Every number up to this one has its own numeral, while larger ones are mostly derived. The only exceptions are words like “hundred” and “thousand” which, incidentally enough, represent higher powers of 10. Thus, we can say that English uses base-10 counting—or decimal, if you prefer fancier words.

At the base

Every language with a system of numeral words is going to have a numerical base for that system. Which number is used as the base really has a lot to do with the history of the language and how its people traditionally counted. Not every number is appropriate as a base; Douglas Adams once said that nobody makes jokes in base-13, and I can state with confidence that nobody counts in it, either. Why? Because 13 is awkward. It’s a prime number with essentially no connection to any part of the body. Since counting probably originated with body parts, there’s no reason for a culture to ever develop base-13 counting. Other numbers, though, are quite suitable.

  • Decimal (base-10) counting is, far and away, the most common in the world. Look at your hands, and you’ll see why. (Unless, of course, you don’t have ten fingers.) Counting in decimal is just the finger counting most of us grew up with, and decimal systems tend to have new words for higher powers of 10. In English, we’ve got “hundred” and “thousand”, and these are pretty common in other decimal languages. For “ten thousand”, we don’t have a specific native word, but Japanese (man) and Ancient Greek (myrioi) do; the latter is where we get the word “myriad”.

  • Vigesimal (base-20) is not quite as widespread as decimal, but it has plenty of supporters. A few European languages use something like base-20 up to a certain point—one hundred, in fact—where they switch to full decimal. But a “true” vigesimal system, using powers of 20 instead of 10 (and thus having separate words for 400, 8,000, etc.), can be found in Nahuatl (Aztec) and Maya, as well as Dzongkha, in Bhutan. Like decimal, vigesimal most likely derives from counting, but here it would be the fingers and the toes.

  • Quinary (base-5) turns up here and there, particularly in the Pacific and Australia. Again, it comes from counting, but this time with only one hand. It’s far more common for 5 to be a “sub-base” in a greater decimal system; in other words, 10 can be “two fives”, but 20 is more likely to be “two tens”. The alternative, where the core terms are for 5, 25, 125, and so on, doesn’t seem to occur, but there’s no reason why it can’t.

  • Duodecimal (base-12) doesn’t appear to have an obvious body correlation, but it actually does. Using the thumb of one hand, count the finger bones on that hand. Each finger has three of them, and you’ve got four non-thumb fingers: 3 × 4 = 12. There are a few languages out there that use duodecimal numerals (including Tolkien’s Quenya), but base-12 is more common in arithmetic contexts, where its multiple factors sometimes make it easier to use than decimal. Even in English, though, we have the “dozen” (12) and “gross” (144).

  • Other numbers are almost never used as the “primary” base in a language, but a few can be found as “auxiliary” bases. Base-60 (sexagesimal), like our minutes and seconds, is entirely possible, but it will likely be accompanied by decimal or duodecimal sub-bases. Some languages of Papua New Guinea and thereabouts use a quaternary (base-4) system or, far more rarely, a senary or base-6 system. Octal (base-8) can work with finger counting if you use the spaces between your fingers, and a couple of cultures do this. And, of course, it’s easy to imagine an AI using octal, hexadecimal (base-16), or plain binary (base-2).

Word problems

In general, numerals up to the primary base are all going to be different, as in English “one” through “ten”. A few powers of the base will also have their own words, but this will be dependent on how often the speakers of a language need those higher numbers. “Hundred” and “thousand” suffice for many older cultures, but the Mayans could count up to the alau, 20⁶ or 64 million, China has native terms up to 10¹⁴ (a hundred trillion), and the Vedas have lots of terms for absurdly large numerals.

Wherever the scale ends, most of the numbers in between will be somehow derived. Again, the more often numbers are used, the more likely they are to acquire specific terms, but special forms are common for multiples of the base up to its square (100 in decimal, 400 in vigesimal, and so on), like our “twenty” or “eighty”. Intermediate numbers will tend to be made from these building blocks: multiples and powers of the base. How they’re combined is up to the language, but the English phrasing, for once, is a pretty good guide.
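A toy speller makes the building-block idea concrete. This sketch assumes a perfectly regular decimal language, so it borrows English words but none of English’s quirks; the function and word lists are mine, for illustration only:

```python
ONES = ["", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"]
TENS = ["", "ten", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def spell(n):
    """Spell 1-99 as multiple-of-base plus leftover, with no irregular forms."""
    tens, ones = divmod(n, 10)
    if tens and ones:
        return TENS[tens] + "-" + ONES[ones]
    return TENS[tens] or ONES[ones]

print(spell(7))    # seven
print(spell(80))   # eighty
print(spell(94))   # ninety-four
print(spell(13))   # ten-three (real English says "thirteen" -- see below)
```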

Some languages work with a secondary base, and that can affect the way numeral words are built. Twelve and twenty can almost be considered sub-bases for English, with words like “dozen” and the peculiar method of constructing numbers in the teens. Twenty is a stronger force in other European languages, though. French is an example here, with 80 being quatre-vingts, literally “four twenties”. In contrast, a full vigesimal system can function just fine with the numeral for twelve derived as “ten and two”, using 10 as a sub-base, although I’m not aware of an example. Any factor can also work as a sub-base, especially in base-20, where 4 and 5 both work, or base-60, where you can use 6 and 10.
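Since I can’t point to a real example, here’s a sketch of how that hypothetical system might hang together: a vigesimal language with a decimal sub-base, using English stand-ins (“score” for the 20-word) that are entirely my invention:

```python
UNITS = ["zero", "one", "two", "three", "four", "five",
         "six", "seven", "eight", "nine", "ten"]

def sub_word(n):
    """0-19 with a decimal sub-base: 12 comes out as 'ten-two'."""
    return UNITS[n] if n <= 10 else "ten-" + UNITS[n - 10]

def vigesimal_word(n):
    """Spell 0-399 in a hypothetical base-20 system ('score' = 20)."""
    scores, rest = divmod(n, 20)
    if scores == 0:
        return sub_word(rest)
    word = sub_word(scores) + " score"
    if rest:
        word += " " + sub_word(rest)
    return word

print(vigesimal_word(12))   # ten-two
print(vigesimal_word(80))   # four score
print(vigesimal_word(97))   # four score ten-seven
```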

Irregularity is everywhere in natural languages, and that includes numerals. There always seem to be a few outliers that don’t fit the pattern. English has “eleven” and “twelve”, of course; it gets them from Germanic, as do many of its cousins. Spanish, among others, has veinte for 20, whereas the other multiples of ten are constructed fairly regularly from their “ones” (treinta, cuarenta, etc.). Other examples abound.

Fitting in

How numeral words fit into a language is also a major variable. Sometimes, they’re a separate part of speech. Or they can be adjectives. Or nouns. Or some combination of all three. If they’re adjectives or nouns, then they may or may not participate in the usual grammar. Latin, for instance, requires small numerals (up to three) to be inflected, but everything larger is mostly fixed in form. English lets numerals act as adjectives or nouns, as needed, and some dialects allow nouns following adjectival numerals to ignore grammatical number (“two foot of rope”, “eight head of cattle”). It’s really a mess almost everywhere.

For a conlang, it’s going to come down to the necessities. Auxlangs, as always, need to be simple, logical, and reasonable, so it’s best not to get too crazy, and this extends to all aspects of numerals. You’re not going to get many followers if you make them start counting by dozens! (Confession time. I did this for a non-auxlang over ten years ago, and I still forget it uses duodecimal sometimes! Imagine how that would be for a language intended to be spoken.)

Fictional languages get a little bit of a pass. Here, it’s okay to go wild, as long as you know what you’re doing. Non-decimal bases are everywhere in conlangs, even in “professional” ones like Tolkien’s. With non-humans, you get that much more rope to hang yourself with. Four-fingered aliens (or cartoon characters) would be more likely to reckon in an octal system than a decimal one. Depending on how their digits are made, you could also make a case for base-6 or base-9, by analogy with Earthly octal and duodecimal finger counting. Advanced races will be more likely to have a sophisticated system of higher powers, like our billion, trillion, etc. And so on.

More than any other part of this series, numerals are a product of culture. If you’re making a conlang without a culture—as in an auxlang—then think of who the speakers will be, and copy them. Otherwise, you’ll need to consider some aspects of your fictional speakers. How would they count? How would they think of numbers? Answer those questions, and then you can start making your own.

Godot Engine 2.0 released

Finally!

I’ve been saying for a while now that I think Godot is one of the best game engines around for indie developers. It’s open source, it’s free, you never have to worry about royalties—all it really needed was a bit more polish. Well, version 2.0 is out, and that brings some of that much-needed polish. Downloads and changelogs are at the link above, but I’ll pick a few of the improvements that stand out to me.

Scenes

Godot is, for lack of a better term, a scene-based game engine. Scenes are the core construct, and the engine has always been built around making them easy yet powerful. With 2.0, that’s now even more true.

Thanks to the new additions to scene instancing, Godot scenes got even better. Now, every scene in a Godot game is, to put it in Unity terms, a prefab. If you’ve used Unity, you know how helpful prefabs can be; Godot gives them to you for free. Basically, every instance of a scene can be edited in any way. All of its child nodes, including sub-scenes, are there for the changing.

It gets better, because now scenes can be inherited, too. The obvious use for this is a “base” object that is slightly altered to quickly create others. Enemies with subtle AI or animation changes, for example, or palette-swapped pickups. But I’m sure you can find plenty of other ways inheritance can help you. I mean, it wouldn’t be used so much in programming if it weren’t useful.

The editor

Without the editor, Godot would be nothing more than Yet Another Engine. But it does have the editor, and that’s one of its biggest draws. The new version gives the editor a major overhaul, adding tons of new features. It’ll take some time to work out how—and how much—they help, but it’s hard to imagine that they won’t.

The most important, from my view, are multiple scene editing and the new Script view. Working with Godot, one of the biggest pains was the constant need to switch between scenes. They’re the central component of your game, but you could only have one of them open at a time? No more, and that change alone will probably double your productivity.

Decoupling the script editor from the scene editor turns Godot into more of an IDE. That will make it feel more familiar to people coming from code-heavy engines, for one thing. But it also means that we can keep multiple scripts open across scene changes. Again, that time-consuming context switch was one of my main gripes with Godot’s editor. Now it’s gone.

Live editing

This one deserves its own section. Live editing is, simply put, the ability to edit your game while it’s running. I’ll have to try it out to see how well it works, but if it does, this is pretty huge. Especially in the later stages of development, fine-tuning can take forever if you’re constantly going through the edit-compile-run cycle. If Godot can take even some of that pain away…wow.

Combine this with the improvements to the debugger, including a video RAM view and collision/navigation debugging, and it gets even better. Oh, and if you’re working on an Android game, you can even have live editing on the device.

The announcement at the Godot homepage has a video of live editing in action. I suggest watching it.

The rest

Godot version 2.0 is a massive update. Those features I mentioned are only the best parts, and there are a lot of minor additions and changes. Some of them are of…questionable benefit, in my opinion (I’m not sold on heatmaps in the list of open scripts, for instance, and why not use JSON for your scene’s text format, like everyone else?), but those are far outweighed by the undeniable improvements.

I’ve said it before, and I’ll say it again. If you’re an indie game dev, especially if you’re focusing on 2D games, you owe it to yourself to check out Godot. It really is one of the best around for that niche. And it’s not like it’ll cost you anything.