Building aliens: environment

Everything that lives lives somewhere. All organisms exist in an environment of some sort. It may not necessarily be what we think of when we hear the word “environment”, but that’s merely our human bias creeping in. Animals live in a specific environment. So do plants. So do extremophile bacteria, though theirs and ours have essentially nothing in common. Aliens, too, will be found in a certain environment, but which one depends heavily on their evolution.

The nature of Nature

For a long time, scientists and philosophers wrestled with the question of how much an organism’s environment affects its life, the so-called “Nature vs. Nurture” debate. We know now that there is no debate, that both have an impact, but let’s focus on the Nature half for now.

We, as humans, live mostly in temperate and tropical climates with moderate to heavy rainfall. We’re adapted to a fairly narrow band of temperatures, but our technology—clothes, air conditioning, etc.—augments our ability to survive and thrive in more hostile environments. Indeed, technology has let us travel to nearly lifeless regions, such as deep, dry deserts like the Atacama, the frozen wastes of interior Antarctica, and that most deadly environment of all: space.

But puny little us can’t live in such places. Not by ourselves. Other organisms are the same way, and they don’t have the benefit of advanced life-support machinery. So most of them are stuck where they are. Look through history, and you’ll see numerous accounts of wild animals (and indigenous people!) being captured and returned to an explorer’s homeland, where they promptly die.

Now, evolution’s very premise, natural selection, says that the most successful organisms are those best adapted to their environment. Thus, for an alien species, you want to know where it lives, because that will play a role in determining how viable your alien is. An aquatic animal isn’t going to survive very long in a rain-shadow desert. Jungle trees won’t grow at 60° latitude. And the list goes on.

Components of an environment

A few factors go into describing the kind of life that can exist in a specific environment, or biome. Most of these boil down to getting the things life needs to perform its ultimate goals: survival and reproduction. For instance, all kinds of life require some form of energy. Plants get it from sunlight and photosynthesis, while animals instead eat things. The environment serves as a kind of backdrop, but it’s also an integral part of an organism’s survival, which is why life’s goals are better served the more closely an organism adapts to its surroundings.

On a more useful level, however, we can look at a biome as an area having the following characteristics in about the same quantities:

  1. Temperature: Most species can only live effectively at a certain temperature. Too low, and things start to freeze; too high, and they boil. On Earth, of course, water is the primary limiting factor for temperature, though truly alien (i.e., not water-based) life will be constrained to somewhere near the liquid range of its preferred chemical. (Not to say that freezing temperatures are an absolute barrier to life; penguins live just fine in subzero temps, for example.)

  2. Sunlight: This is the “energy” component I mentioned earlier. Assuming we’re dealing with a surface-dweller, sunlight is likely going to be the main type of incoming energy. That’s especially true for plants or other autotrophs, organisms which produce their own food. As any horticulturist knows, most plants are also highly adapted to a certain amount of sunlight. They’ll bloom only when the day is long enough, for example, or they’ll die if the nights grow too long, even if the temperature stays just fine.

  3. Proximity to water: I was going to label this as “precipitation”, but that turns out to be too specific. Water (or whatever your aliens use) is a vital substance. Every species requires it, and many absolutely must have a certain amount of it. If they, like plants, can’t move, then they must rely on water coming to them. That can fall from the sky as precipitation, or it can come across land in the form of tidal pools, or just about any other way you can think of.

  4. Predators and prey: If you remember old science classes, you know about the food chain. Well, that’s something all life has to worry about, if you’ll pardon the anthropomorphizing. Predators adapt to the presence of certain kinds of prey, and vice versa. Take one away, and things go out of whack. Species can overrun the land or go extinct.

Humans get away with a lot in this. Once again, that’s because of our intelligence and technology, and it’s reasonable to assume that a sapient alien race would overcome their own obstacles in much the same way. But everything else has to limp along without the benefits of higher thinking, so other species must adapt to their environment, rather than, essentially, bringing their own with them.

Great upheaval

All environments are constantly in flux. Climate changes, from season to season or millennium to millennium. Rainfall patterns shift, oceanic currents move, and that’s before you get into anything that may be caused by humanity. Then there are “transient” changes in environment, from wildfires to hurricanes to asteroid impacts. These can outright destroy entire habitats, entire biomes, but so can the slower, more gradual shifts. Those just give more warning.

When the environment changes beyond the bounds a species can tolerate, one of two things can happen. That species can adapt, or it will die. History and prehistory are littered with examples of the latter, from dodos to dire wolves. Adaptation, on the other hand, can often give rise to entirely new species, distinct from the old. (For an example, take any extant organism, because that’s how evolution works.)

An alien race will have its own history of environmental upheaval, entirely different from anything on Earth. A different series of major impacts, larger tidal effects from a bigger moon, massive solar flares…and that’s just the astronomical effects. Aliens will be the result of their own Mother Nature.

That’s where they become different. Even if they’re your standard, boring carbon-based lifeforms, even if their “animal” kingdom looks suspiciously like an alternate-color version of ours, they can still be inhuman. On Earth, one branch of the mammalian tree gave rise to primates, some of which got bigger brains. On another world, it could have been the equivalent of reptiles instead. Or birds. Or plants, but I’m not exactly sure how that’d work. One thing’s for sure, though: they’ll live somewhere.

On sign languages

Conlangs are, well, constructed languages. For the vast majority of people, a language is written and spoken, so creators of conlangs naturally gravitate towards those forms. But there is another kind of language that does not involve the written word, does not require speech. I’m talking, of course, about sign language.

Signing is not limited to the deaf. We all use body language and gesture every day, from waving goodbye to blowing kisses to holding out a thumbs-up or peace sign. Some of these signs are codified as part of a culture, and a few can be quite specific to a subculture, such as the “hook ’em horns” gesture that is a common symbol of the University of Texas.

Another example of non-deaf signing is the hand signals used by, for example, military and police units. These can be so complex that they become their own little jargon. They’re not full sign languages, but they work a bit like a pidgin, taking in only the minimum amount of vocabulary required for communication.

It’s only within the community of the hearing-impaired that sign language comes into its own, because we’re talking about a large subset of the population with few other options for that communication necessary for human civilization. But what a job they have done. American Sign Language is a complex, fully-formed language, one that is taught in schools, one learned by children the same as any spoken tongue.

Conlangs come in

So speaking with one’s body is not only entirely possible, but it’s also an integral part of speaking for many people. (The whole part, for some.) Where does the art of constructing languages come in? Can we make a sign conlang?

Of course we can. ASL was constructed, as were many of the world’s other sign languages. All of them have a relatively short history, in fact, especially when compared to the antiquity of some natural languages. But there are a few major caveats.

First, sign languages are much more difficult to comprehend, at least for those of us who have never used one. Imagine trying to develop a conlang when you can’t speak any natlang. You won’t get very far. It’s the same way for a non-signer who would want to create a sign language. Only by knowing at least one language (preferably more) can you begin to understand what’s possible, what’s likely, and what’s common.

Second, sign languages are extremely hard to describe in print. ASL has transcription schemes, but they’re…not exactly optimal. Your best bet for detailing a sign conlang might actually be videos.

Finally, a non-spoken, non-written language will necessarily have a much smaller audience. Few Americans know ASL on even the most rudimentary level. I certainly don’t, despite decades of alphabet handouts from local charities and a vain attempt by my Spanish teacher in high school to use signing as a mnemonic device. Fewer still would want to learn a sign language with even less use. (Conlangers in general, on the other hand, would probably be as intrigued as for any new project.)

Limits

If you do want to try your hand at a sign conlang, I’m not sure how helpful I can be. I’ll try to give a few pointers, but remember what I said above: I’m not the best person to ask.

One thing to keep in mind is that the human body has limits. Another is that the eye might be the most important organ for signing. A sign that can’t be seen is no better than a word you don’t speak. Similarly, it’s visual perception that will determine how subtle a signing movement can be. This is broadly analogous to the distinctions between phonemes, but it’s not exactly the same.

Something else to think about: signing involves more than the hands. Yes, the position and orientation of the hands and fingers are a fair bit of information, but sign languages use much more than that. They also involve, among other things, facial expressions and posture. A wink or a nod could as easily be a “phoneme” of sign as an outstretched thumb.

Grammar is another area where signing can feel strange to those used to spoken language. ASL, for example, doesn’t follow normal English grammar to the letter. And there’s no real reason why it should. It’s a different language, after all. And the “3D” nature of sign opens up many possibilities that wouldn’t work in linear speech. Again, it’s really hard to give more specific advice here, but if you do learn a sign language, you’ll see what I’m saying. (Ugh. Sorry about the pun.)

Celebrating differences

Conlanging is art. Just as some artists work with paint and canvas while others work in sculpture or verse, not all conlangers have to be tied to the spoken and written varieties of language. It’s okay to be different, and sign languages are certainly different.

Software internals: associative arrays and hashing

We’ve already seen a couple of options for storing sequences of data in this series. Early on, we looked at your basic array, for example, and then we later studied linked lists. Both of those are good, solid data structures. Linear arrays are fast and cheap; linked lists are great for storing an arbitrary amount of data that can grow and shrink as needed. There’s a reason these two form the backbone of programming.

They do have their downsides, however. Linear arrays and lists both suffer from a problem with indexing. See, a piece of data in an array is accessed by its index, a number you have to keep track of if you ever want to see that data again. With linked lists, thanks to their dynamic structure, you don’t even get that. Finding a value in a list means walking it node by node until you stumble across the one you want.

By association

What we need is a structure that gives us the flexibility of a list but the indexing capabilities of an array. Oh, and can we have an array whose indexes are keys we decide, instead of just numbers? Sure, and I bet you want a pony, too. But seriously, we can do this. In fact, your programming language of choice most likely already does it, and you just never knew. JavaScript calls them “objects”, Python “dictionaries”, C++ “maps”, but computer science generally refers to this structure as the associative array. “Associative”, because it associates one value with another. “Array” tells you that it functions more like a linear array, in terms of accessing data, than a list. But instead of computer-assigned numbers, the indexes for an associative array can be just about anything: integers (of your choosing), strings, even whole objects.
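
Chances are you’ve used one of these without realizing it. In JavaScript, for instance, a plain object already acts as a simple associative array, at least for string keys. A quick illustration:

let ages = {};                  // a plain object used as an associative array
ages["alice"] = 39;             // string keys of our own choosing
ages["bob"] = 27;

console.log(ages["alice"]);     // 39
console.log("carol" in ages);   // false: no such key

That’s fine for everyday use, but to see what’s going on underneath, let’s build one ourselves.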

The main component of an associative array is the “mapping” between keys (the indices used to access the data) and the values (the actual data you’re storing in the array). One simple way to create this mapping is by making a list of key-value pairs, like so:

class AssociativeArray {
    constructor() {
        // Pairs are objects with 2 attributes
        // k: key, v: value
        this.pairs = [];
    }

    get(key) {
        let value = null;
        for (let p of this.pairs) {
            if (p.k === key) {
                value = p.v;
                break;
            }
        }

        return value;
    }

    insert(key, value) {
        // Check for the key directly, since get() can't tell a
        // missing key apart from a stored falsy value like 0 or ""
        if (!this.pairs.some(p => p.k === key)) {
            // Add value if key not already present
            this.pairs.push({k: key, v: value});
        } else {
            // *Update* value if key present
            this.pairs = this.pairs.map(
                p => (p.k === key) ?
                    {k: key, v: value} :
                    p
            );
        }
    }

    remove(key) {
        this.pairs = this.pairs.filter(p => (p.k !== key));
    }
}

(Note that I’m being all fancy and using ES6, or ES2015 if you prefer. I’ve also cheated a bit and used builtin Array functions for some operations. That also has the dubious benefit of making the whole thing look more FP-friendly.)

This does work. We could create a new instance of our AssociativeArray class, call it a, and add values to it like this: a.insert("foo", 42). Retrieving them is as easy as a.get("foo"), with a null result meaning no key was found. (If you want to store null values, you’d have to change this to throw an exception or something, which is probably better in the long run, anyway.)

The problem is, it’s not very fast. Our simple associative array is nothing more than a glorified linked list that just so happens to store two values at each node instead of one. For small amounts of data, that’s perfectly fine. If we need to store a lot of information and look it up later, then this naive approach won’t work. We can do better, but it’ll take some thought.

Making a hash of things

What if, instead of directly using the key value, we computed another value from it and used that instead? That might sound silly at first, but think about it. The problem with arbitrary keys is that they’re, well, arbitrary. We have no control over what crazy ideas users will come up with. But if we could create an easy-to-use number from a key, that effectively transforms our associative array into something closer to the familiar linear array.

That’s the thinking behind the hash table, one of the more popular “back ends” for associative arrays. Using a special function (the hash function), the table computes a simple number (the hash value or just hash) from the key. The hash function is chosen so that its output isn’t entirely predictable—though, of course, the same input will always return the same hash—which distributes the possible hash values in a random-looking fashion.

Hashes are usually the size of an integer, such as 32 or 64 bits. Obviously, using these directly is not an option for any realistic array, so some sacrifices have to be made. Most hash tables take a hybrid array/list approach. The hash “space” is divided into a number of buckets, chosen based on needs of space and processor time. Each bucket is a list (possibly even something like what we wrote above), and which bucket a value is placed in is determined by its hash. One easy option here is a simple modulus operation: a hash table with 256 buckets, for instance, would put a key-value pair into the bucket corresponding to the last byte (hash % 256) of its hash.

In code, we might end up with something like this:

class HashTable {
    constructor() {
        this._numBuckets = 256;
        // Each bucket needs its own array; new Array().fill([])
        // would share a single array among all of the buckets
        this._buckets = Array.from(
            {length: this._numBuckets}, () => []);
    }

    _findInBucket(key, bucket) {
        // Returns element index of the pair with the specified key,
        // searching only within the given bucket
        return this._buckets[bucket].findIndex(p => (p.k === key));
    }

    _update(key, value, bucket) {
        for (let i = 0; i < this._buckets[bucket].length; i++) {
            if (this._buckets[bucket][i].k === key) {
                this._buckets[bucket][i].v = value;
                break;
            }
        }
    }

    set(key, value) {
        let h = hashCode(key);
        let bucket = h % this._numBuckets;
        let posInBucket = this._findInBucket(key, bucket);

        if (posInBucket === -1) {
            // Not in table, insert
            this._buckets[bucket].push({k: key, v: value});
        } else {
            // Already in table, update
            this._update(key, value, bucket);
        }
    }

    get(key) {
        let h = hashCode(key);
        let bucket = h % this._numBuckets;
        let index = this._findInBucket(key, bucket);

        if (index > -1) {
            return this._buckets[bucket][index].v;
        } else {
            // Key not found
            return null;
        }
    }
}

The function

This code won’t run. That’s not because it’s badly-written. (No, it’s totally because of that, but that’s not the point.) We’ve got an undeclared function stopping us: hashCode(). I’ve saved it for last, as it’s both the hardest and the most important part of a hash table.

A good hash function needs to give us a wide range of values, distributed with little correlation to its input values so as to reduce collisions, or inputs leading to the same hash. For a specific input, it also has to return the same value every time. That means no randomness, but the output needs to “look” random, in that it’s uniformly distributed. With a hash function that does this, the buckets stay evenly and sparsely filled; optimally, each would hold only a single entry. The worst-case scenario, on the other hand, puts everything into a single bucket, reducing the whole table to one long list that carries all the overhead of hashing with none of the benefit.

There are a lot of pre-written hash functions out there, each with its own advantages and disadvantages. Some are general-purpose, while others are specialized for a particular type of data. Rather than walk you through making your own (which is probably a bad idea, for the same reason that making your own RNG is), I’ll let you find one that works for you. Your programming language may already have one: C++, for one, has std::hash.
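
Still, the table above calls an undefined hashCode(), so if you just want to see it run, here’s a toy string-only hash in the spirit of the classic djb2 algorithm. Consider it an illustration of the general shape of such a function, not something to ship:

function hashCode(key) {
    // Toy string hash (djb2-style): fine for a demo, not for production
    let str = String(key);
    let hash = 5381;
    for (let i = 0; i < str.length; i++) {
        // Multiply by 33, add the character code, keep it within 32 bits
        hash = ((hash * 33) + str.charCodeAt(i)) | 0;
    }
    // Force a non-negative result so (hash % numBuckets) is a valid index
    return hash >>> 0;
}

With that in place, new HashTable() behaves as you’d expect: set("foo", 42) to store a value, get("foo") to retrieve it.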

Incidentally, you may have already seen hash functions “in the wild”. They’re fairly common even outside the world of associative arrays. MD5 and SHA-256, among others, are used as quick checksums for file downloads, because even a tiny change in the input radically alters the final hash value. It’s also big (and bad) news when collisions are found in a widely used hash algorithm; very recently—I can’t find the article, or I’d link it here—developers started warning about trusting the “short” hashes Git uses to abbreviate commits, tags, and other objects. These abbreviations are typically only seven hex digits (28 bits) of the full 160-bit SHA-1 hash, so collisions are a lot more likely, and it’s not too hard to pad out a malicious file with enough garbage to give it the same short hash as, say, the latest Linux kernel.

Summing up

Hash tables aren’t the only way to implement associative arrays. For some cases, they’re nowhere near the best. But they do fill the general role of being good enough most of the time. Computing hash codes is the most time-consuming part, with handling collisions and overfull buckets following after that. Unfortunately for higher-level programmers, you don’t have a lot of control over any of these aspects, so optimizing for them is difficult. However, you rarely have to, as the library writers have done most of that work for you. But that’s what this series is about. You may never need to peer into the inner workings of an associative array, but now you’ve got an idea of what you’d find down there.

Summer Reading List 2016: Wrap-up

So it’s Labor Day, and we’ve reached the unofficial end of another summer. Last month, I posted my progress in my Summer Reading List challenge. I had read 2 out of 3 then, and I’ve since finished the third.

Literature

Title: New Atlantis
Author: Sir Francis Bacon
Genre: Fiction/literature
Year: 1627

Yes, you read that year right: this is a work almost four centuries old. You can find it at Project Gutenberg if you want to read it for yourself. That’s where I got it, and I’m glad I looked it up. Genre-wise, I’m not sure what to call it, so I went with the catchall of “literature”.

New Atlantis is what we’d call a short novella today, but that’s mainly because it was never truly finished. It’s also an incredibly interesting text for its vision. Written like many old stories purporting to be travelers’ tales, it describes the utopian land of Bensalem, supposedly located somewhere out in the Pacific. The inhabitants of that land are far advanced (compared to the 17th century) and living in a veritable paradise of wisdom and knowledge.

By my personal standards, however, it reads more like a dystopia: despite professing a very progressive separation of church and state, for example, Bensalem is hopelessly rooted in Christianity, to the point where even the Jews living there (the narrator meets one) lie somewhere in the “Jews For Jesus” range. The whole place seems to be governed in a very authoritarian manner, where societal norms are given force of law—or the other way around. Yes, Bacon describes a nation better than any he knew, but I would take modern America, with all its flaws, over the mythical New Atlantis every time.

But people today rarely look at those parts of the text. Instead, they’re more focused on what the scientists of Bensalem have done, and this is described in some detail at the very end of the work. Bacon’s goal here is to overwhelm us with fantastic creations, but they read like a laundry list of the inventions of the last hundred years. If you read it right, you can find airplanes, lasers, telephones, and all kinds of other things in there, all predicted centuries ago. And that is the real value of the book. It’s further proof that earlier ages did not lack for imagination; their relatively unadvanced state was through no fault of their minds. As an author myself, I find that information invaluable.

Next year?

I had fun with this whole thing. I read something I never would have otherwise, and I pushed myself outside my normal areas of interest. I’m not sure I’m ready to make this a regular, annual occurrence, but it seems like a good idea. I hope you feel the same way.

Alien grammars

When making an “alien” conlang (however you define that), it’s easy to take the phonology half, make it outrageous, and call it a day. But that’s only half the battle. There’s more to a language than its sounds, and if you’re designing a conlang for anything more than naming, you still need to look at the grammar, too.

So how can we make the grammar of a language “feel” otherworldly? As with the sounds, the easiest way is to violate the traditional expectations that we, as speakers of human languages, have developed. To do this, however, we need to know our preconceptions, and we also need to take a look at how grammar really works.

The foundation of grammar

I can’t claim to understand the mental underpinnings of language. I bought a book about the subject years ago, but I’ve never had the chance to read it. What follows comes from articles, other conlangers, and a healthy dose of critical thinking.

Language is a means of communication, but it’s also inextricably linked to cognition, to memory. The human brain is a wonderful memory device (if you discount those pesky problems of fuzzy recollection, dementia, etc.) that works in a fascinating way. At its core, it seems to be primarily an associative memory. In other words, we remember things by their association with what we already know. Our language reflects that. Nouns are things, verbs are actions, adjectives are states or conditions; not for nothing are they all taught by means of pictures and examples. Those build the associations we use to remember them.

Is it possible that an alien intelligence doesn’t work this way? Sure, but that would be so “out there” that I can’t begin to contemplate it. If you want to try, go ahead, but it won’t be easy. On the other hand, that’s one way to get something totally different from what we know. I just wouldn’t want to try and describe it, much less use it.

Moving on to “actual” linguistics, we’re used to the traditional parts of speech, the trinity of noun, verb, and adjective. On top of them, we often toss in adverbs, articles, prepositions, and the like, but those aren’t quite as “core” as the big three. Indeed, many languages get by just fine without articles, including Latin and Japanese. Adverbs are so nebulously defined that you can find words in any language that fit their category, but there are plenty of examples of languages using adjectives in their place. Prepositions (more generally, adpositions) aren’t entirely necessary, either; most of their function can be replaced by a large set of case markers.

But it seems awfully hard to ditch any of the big three. How would you make a language without verbs, for instance? Like the “pure functional” approach to computer programming, it would appear that nothing could be accomplished, since there’s no way to cause changes. Similarly, a “nounless” conlang couldn’t name anything. For adjectives, it’s not so bad, as state verbs can already take their place in a few natural languages, but it’s difficult to imagine a total lack of them.

That hasn’t stopped industrious conlangers from trying. Languages without verbs/nouns/adjectives are a perennial favorite in any circle. I can’t say I’ve attempted one myself, but I can see how they might work, and the result of dropping any of the three looks very alien to me.

  • Getting rid of adjectives is the easiest. As above, most can be replaced by state verbs. A phrase like “the red door”, for instance, might come out as something like “the door that is red” or “the door it is red”. The difference is that adjectives are often (but not always) marked as if they were nouns, while a state verb like this would instead be conjugated like any other verb in the language.

  • Dropping verbs is much harder. You can look into languages that lack copular verbs for examples here, though the same idea can be extended to most of the “predicating” verbs, like English “to have”, “to become”, etc. Pronouns, case markers, and liberal use of adjectives can take care of most of it, but it’ll definitely feel weird.

  • Throwing out nouns is next to impossible, in my opinion. Not to say you should give up your ambitions, but…I’m not sure I can help you here. A language without nouns may truly be beyond our comprehension. Perhaps it’s the language of some mystical or super-advanced race, or that of a hive mind which has no need for names. I honestly don’t know.

Alternate universal

Much simpler than tossing entire categories of words is just finding new ways to use them. Most (I emphasize this for a reason) languages of the world follow a set of linguistic universals, as laid out by linguist Joseph Greenberg. They don’t follow all of them, mind you, but it’s better than even odds. Some of the more interesting ones include:

  • #3: VSO languages are prepositional. This comes from their “head-first” word order, but it’s easy to invert.
  • #14: In conditional clauses, the conclusion (“then” part) normally follows the condition (“if” part). Even in English, it’s not hard to find counterexamples, if you know where to look. (See what I did there?) But it’s not the usual form. In an alien conlang, it could be.
  • #34: Languages with a dual number must also have a plural; those with a trial (three of something) have a dual and a plural. No real reason this has to be so, not for aliens. I’d like to know how you justify it, though.
  • #38: Any language with case can only have a zero marker for the case representing an intransitive subject—nominative, absolutive, etc. If you’ve got a different way of distinguishing cases, then there’s no reason you have to follow this rule, right?
  • #42: All languages have at least three person and two number distinctions for pronouns. Another one where it’s not too hard to see the “alien” point of view.

Conclusion

Grammar is a huge field, and we’ve barely taken the first step into it. Even if you don’t make something completely outlandish as in the above examples, you can still create an alien grammar out of more mundane building blocks. There are thousands of languages in the world, and many have rare or unique features that could find a home in your alien conlang. A number for “a few”? Sure, it works for the Motuna language of Papua New Guinea. Want a case for things you’re afraid of? A few of the Aboriginal languages of Australia can help you there…if there are any native speakers left alive when you start. The list goes on for just about anything you could think of. All you have to do is look, because, linguistically, aliens are among us.

Godot Engine 2.1

So a new version of one of my favorite game engines came out recently, and I’m just now taking a look at it. (Actually, I’m writing this on the 11th.) If you’ll recall from a couple of months ago, I tried making a game in Godot 2.0, but I couldn’t continue due to an illness. Now, with a new version out, I think I might try again soon. But first, let’s look at what’s in store, and let’s see if Godot is still worthy of the title of Best Free Game Engine.

Asset sharing

Unity’s absolute best feature is the Asset Store. There’s no question about that. It’s got everything you need, and it’s almost possible to make a game just by downloading graphics, sound effects, and pre-written code from there. And other engines (Unreal, etc.) are starting to jump on the same bandwagon.

With version 2.1, Godot can now say it’s joining the ranks. There’s a new Asset Library accessible within the editor, and it’ll eventually work the same as any other. Right now, it’s pretty bare, but I have no doubt it’ll grow as time goes on.

Better plugins

Godot’s editor has a lot of features, but it doesn’t do everything. Developers have always been able to add functionality with plugins (mainly through using the tool keyword in Godot scripts), but 2.1 brings a whole new EditorPlugin API, meaning that these tools can integrate better with the rest of the editor. They can also work with the Asset Library.

The API, like the Asset Library, is a work in progress, so it doesn’t have all the features yet. But give it time.

Editor improvements

If you don’t speak English, Godot 2.1 helps by supporting full internationalization of the interface. Along with that, the developers have added full support for actual fonts, instead of the whole “import TTF to textures” process we used to have to do. This also opens up the possibility of customizing the editor’s fonts, their colors and sizes. And it’s a small step from there to full-on theming, so that’s in, too.

Another nicety is custom keybindings, and that solves one of my bigger gripes. Not that I couldn’t change the bindings, mind you; I rarely do that in programming apps, if only because it makes tutorials harder to follow. No, now I can actually see what the default bindings are. Godot’s documentation was severely lacking in that area, but the option to change the bindings also brings the ability to discover them, and that’s always a good thing.

They’ve also added some drag-and-drop stuff that I’ll probably never use, along with context menus, which I certainly will. And then there are the usual improvements to the script editor, which are necessary when you’re using your own scripting language. (More on that later.)

Animation

Animation in Godot confused me. It must have confused a lot of other people, too, because one of the big new additions is a simpler way of using the AnimatedSprite node for, well, animation. You know, the thing it’s made for. No longer do you have to create an AnimationPlayer and all that, when all you really want to do is say, “Hey, play this one little animation, okay?”

The verdict

The official announcement (linked above) has a few other additions, like new features for live reloading. They’ve also got a link to the full changelog, if you like reading. But I’m content with what I’ve seen so far. Godot is still good, and it looks like it’s only getting better—maybe.

What does the future hold? Well, according to the developers, the next version is 2.2, due by the end of the year. (Yeah, right!) That one’s the first true “feature” release, and what features it’ll have. Do you hate Python? My brother does, so he’s happy to hear that Godot will soon give you not one, but two new options for scripting. One is a “visual” design system like Unreal’s Blueprints, a style that I’ll be writing about soon. The other is massive in its importance: C#. Yep, the same language Unity uses. If that takes off, then look out.

Beyond that, things get murky. They claim they’re already starting on Godot 3.0, and it’ll come out early next year. As its centerpiece, it’ll have an entirely new renderer, probably based on Vulkan. And that might be a problem. But I’ll wait and see. Godot is too good to abandon, but I hope it doesn’t abandon me on the road to better things.

Elimination

The six basic principles of responsible government don’t, by themselves, converge on a single system. Instead, it’s best to first look at those regimes they entirely eliminate.

Necessity

Egalitarianism is, in essence, a lack of organized government. Anarchy is a repudiation of it. Neither is well-suited to the needs of a large, diverse state. Human nature is to be social, and that means forming relationships, whether romantic, platonic, friendly, or simply on the basis of mutual acquaintance. Those relationships can easily turn into alliances, recognitions of shared purpose. From there, it is a short step to self-organization, and then to government. Therefore, anarchy can never be more than merely a temporary state.

Purpose

A government that does not protect the lives of its citizens is a failure. One that does not uphold those citizens’ rights is equally lacking, though the nature and quantity of those rights can be argued. It is clear, however, that some systems of rule are entirely unsuitable. Those predicated on the absence of individuality—Leninist communism, for instance—cannot be considered acceptable for governing a free people. Likewise, those which ignore fundamental human rights—theocracies being only the most familiar example—must not be seen as viable. But even democracy is not infallible, as the tyranny of the majority can be used to strip rights from the minority. Good government, in this sense, is far more than a question of who rules. It also must take into account how those who rule protect those who do not.

Evolution

Nothing in this world is without change, and that includes society. Social mores shift over generations, but a rigid government can fall because it fails to adapt to these seismic shifts. To prevent this, a state must give some allowance to the possibility of radical changes to its structure, to its core tenets. Those that do not, those that remain fixed, are doomed to fall. Again, theocracy, with its strict insistence on dogma and received wisdom, is the perfect illustration. But a theocracy can adapt by reading and interpreting its scriptures in a new light, while a strongly segmented, highly conservative aristocracy may instead resist the natural evolution of culture, leading to failure.

Equality

Every human being is unique, but we all have many things in common. It is easy, common, and perfectly natural to separate humanity into groups based on the presence or absence of a specific factor. However, to institutionalize this separation is to create an imbalance between members of a preferred class and outsiders. Implementing this sort of segregation by intrinsic factors, those we are physically, mentally, or psychologically unable to change, sorts humanity, by definition, into haves and have-nots. This leaves a segment of the population without political power, without the opportunity for redress, and that segment will only seek a new outlet for its grievances. Legislative tribalism, in the form of laws motivated by race, religion, sex, or other factors, is a failure of a government to protect (as by the Principle of Purpose) a certain portion of its citizenry. Executive tribalism, as seen in caste systems, aristocracies, communism, and oligarchy, bars this same portion from using its political voice.

Cooperation

Once again, we return to egalitarianism, as it is a prime example of the nature of competition. When every man is for himself, he can accomplish only what is within his own means. A larger conglomeration, however, can achieve greater things. This is because of resource pooling, specialization, and leadership, among other factors, and it is an expected consequence of our social nature. The most striking examples are those grand projects requiring the cooperation of an entire state, but this sort of socialism is inherent in any system of government. That does not require a surrender of all free will, as in Hobbes’ Leviathan, nor is it a condemnation of capitalism. When we accept the role of government, we commit a portion of ourselves to it, hoping that we receive greater benefits in return. It is in this equation’s lack of balance that the failure of neoliberal technocracy lies. Yet there is equal imbalance in pure objectivism and pure collectivism.

Initial Conditions

The final principle is the most culture-specific, and it is here that one government system—or the idealized notion thereof—is singled out. However, the Constitution itself does not uphold all the ideals stated above. In its original form, it embraced inequality. It made little space for grand-scale cooperation. In accordance with the Principle of Evolution, however, it has changed to reflect the times, the changing beliefs of those it represents. Other founding documents fail a different set of fundamental principles, and in differing ways. They may be suitable as a starting point for deriving a system of government, but few begin so close to the ideal. Wholly unusable, by contrast, are scriptural resources such as the Ten Commandments, as these are defined by their violation of the Principle of Equality.

None of this is to say that these forms of government are invalid. If a people chooses to create for itself a state based on a violation of the Principles, the choice is theirs alone, and it is not for us to assign fault or blame. Those regimes, however, may not endure.

Let’s make a language, part 18b: Geography (Conlangs)

This time around, let’s combine Isian and Ardari into a single post. Why? We won’t be seeing too many new words, as geography is so culture-dependent, and I’m trying to keep our two conlangs fairly neutral in that regard. Thus, the total vocabulary for this topic only comprises 30 or so of the most basic terms, mostly nouns.

Isian

For Isian speakers, the world is sata, and that includes everything from the earth (tirat) to the sea (jadal) to the sky (halac). In other words, all of amicha “nature”. And in the sky are the sida “sun”, nosul “moon”, and hundreds of keyt “stars”, though these only come out at night.

A good place to look at the stars is at the top of a mountain (abrad), but a hill (modas) will do in a pinch. Both of these contrast with the flatter elshar “valleys” and abet “plains”. Another contrast is between the verdant forest (tawetar) and the dry, desolate serkhat “desert”. Isian speakers, naturally, prefer wetter lands, and they especially like bodies of water, from the still fow “lake” to the rushing silche “stream” and ficha “river”.

Water isn’t quite as welcome when it falls from the sky in the form of rain (cabil) or, worse, snow (saf). Speakers of Isian know that rain falls from alboni “clouds”, particularly during a gondo “storm”. Some of those can also bring thunder and lightning (khoshar and segona, respectively), as well as blowing winds (nafi). But that’s all part of the cansun, or “weather”, and the people are used to it.

Natural world
  • earth: tirat
  • moon: nosul
  • nature: amicha
  • planet: apec
  • sea: jadal
  • sky: halac
  • star: key
  • sun: sida
  • world: sata(r)
Geographic features
  • beach: val
  • cave: uto(s)
  • desert: serkhat
  • field: bander
  • forest: tawetar
  • hill: modas
  • island: omis
  • lake: fow
  • mountain: abrad
  • plain: abe
  • river: ficha(s)
  • stream: silche
  • valley: elsha(r)
Weather
  • cloud: albon
  • cold: hul
  • fog: fules
  • hot: hes
  • lightning: segona
  • rain: cabil
  • snow: saf
  • storm: gondo(s)
  • thunder: khoshar
  • to rain: cable
  • to snow: sote
  • weather: cansun
  • wind: naf

Ardari

Ardari is spoken in a similar temperate region, but those who use it as a native tongue are also acquainted with more distant lands. They know of deserts (norga) and high mountains (antövi), even if they rarely see them in person. But they’re much more comfortable around the hills (dyumi) and lakes (oltya) of their homeland. The rolling plains (moki) are often interrupted by patches of forest (tyëtoma), and rivers (dèbla) crisscross the land. Young speakers of Ardari like to visit caves (kabla), but many also dream of faraway beaches (pyar).

That’s all part of the earth, or dyevi. In their minds, this is surrounded by the sea (oska) on the sides and the sky (weli) above. That sky is the home of the sun (chi) and its silvery sister, the moon (duli). These are accompanied by a handful of planets (adwi) and a host of stars (pala), two different sets of night-sky lights, though most can’t tell the difference between them.

The sky, however, is often obscured by clouds (nawra). Sometimes, so is the earth, when fog (nòryd) rolls in. And Ardari has plenty of terms for bad weather (mädròn), from rain (luza) to wind (fawa) to snow (qäsa) and beyond. Storms (korakh) are quite common, and they can become very strong, most often in the spring and summer. Then, the echoes of thunder (kumba) ring out across the land.

Natural world
  • earth: dyevi
  • moon: duli
  • nature: masifi
  • planet: adwi
  • sea: oska
  • sky: weli
  • star: pala
  • sun: chi
  • world: omari
Geographic features
  • beach: pyar
  • cave: kabla
  • desert: norga
  • field: tevri
  • forest: tyëtoma
  • hill: dyumi
  • island: symli
  • lake: oltya
  • mountain: antövi
  • plain: moki
  • river: dèbla
  • stream: zèm
  • valley: pòri
Weather
  • cloud: nawra
  • fog: nòryd
  • lightning: brysis
  • rain: luza
  • snow: qäsa
  • storm: korakh
  • thunder: kumba
  • to rain: luzèlo
  • to snow: qäsèlo
  • weather: mädrön
  • wind: fawa

Moving on

Now that we’ve taken a look at the natural world, we’ve set the stage for its inhabitants. The next two parts will cover terms for flora and fauna, in that order. In other words, we’re going to name some plants next time. Not all of them; even I don’t have time for that. But we’ll look at the most important ones. By the end of it, you’ll be able to walk down the produce aisle with confidence.

First glance: C++17, part 3

It’s not all good in C++ land. Over the past two posts, we’ve seen some of the great new features being added in next year’s update to the standard, but there are a few things that just didn’t make the cut. For some, that might be good. For others, it’s a shame.

Concepts

Concepts have been a hot topic among C++ insiders for over a decade. At their core, they’re a kind of addition to the template system that would allow a programmer to specify that a template parameter must meet certain conditions. For example, a parameter must be a type that is comparable or iterable, because the function of the template depends on such behaviors.

The STL already uses concepts behind the scenes, but only as descriptions in prose; adding support for them to the language proper has been a goal that keeps receding into the future, like strong AI or fusion power. Some had hoped they’d be ready for C++11, but that obviously didn’t happen. A few held out for C++14, but that came and went, too. And now C++17 has shattered the concept dream yet again. Mostly, that’s because nobody can quite agree on what they should look like and how they should work under the hood. As integral as they will be, these are no small disagreements.

Modules

Most “modern” languages have some sort of module system. In Python, for instance, you can say import numpy, and then NumPy is right there, ready to be used. Java, C#, JavaScript, and many others have similar functionality, often with near-identical syntax.

But C++ doesn’t. It inherited C’s “module” system: header files and the #include directive. But #include relies on the preprocessor, and a lot of people don’t like that. They want something better, not because it’s the hip thing to do, but because it has legitimate benefits over the older method. (Basically, if the C preprocessor would just go away, everyone would be a lot better off. Alas, there are technical reasons why it can’t…yet.)

Modules were to be something like in other languages. The reason they haven’t made the cut for C++17 is because there are two main proposals, neither truly compatible with the other, but both with their supporters. It’s almost a partisan thing, except that the C++ Standards Committee is far more professional than Congress. But until they get their differences sorted out, modules are off the table, and the preprocessor lives (or limps) on.

Coroutines and future.then

These fit together a bit, because they both tie in with the increased focus on concurrency. With multicore systems everywhere, threading and IPC are both more and less important than ever. A system with multiple cores can run more than one bit of code at a time, and that can give us a tremendous boost in speed. But that’s at the cost of increased complexity, as anyone who’s ever tried programming a threaded application can tell you.

C++, since its 2011 Great Leap Forward, has support for concurrency. And, as usual, it gives you more than one way to do it. You have the traditional thread-based approach in std::thread, mutex, etc., but then there’s also the fancier asynchronous set of promise, future, and async.

One thing C++ doesn’t have, however, is the coroutine. A function can’t just pause in the middle and resume where it left off, as done by Python’s yield keyword. But that doesn’t mean there aren’t proposals. Yet again, it’s the case that two varieties exist, and we’re waiting for a consensus. Maybe in 2020.
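
If coroutines are unfamiliar, JavaScript’s generators show the same pause-and-resume behavior. This is only a tiny illustration of the concept, with no relation to the C++ proposals themselves:

// A generator pauses at each yield and resumes where it left off
function* countTo(n) {
    for (let i = 1; i <= n; i++) {
        yield i;    // execution stops here until the caller asks for more
    }
}

let gen = countTo(3);
console.log(gen.next().value);  // 1
console.log(gen.next().value);  // 2
console.log(gen.next().value);  // 3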

Related to coroutines is the continuation, something familiar to programmers of Lisp and Scheme. The proposed C++ way to support these is future.then(), a method on a std::future object that invokes a given function once the future is “ready”, i.e., when it’s done doing whatever it had been created to do. More calls to then() can then (sorry!) be added, creating a whole chain of actions that are done sequentially yet asynchronously.
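
Again, JavaScript gives a handy picture of the pattern: Promise chaining registers each step as a continuation of the one before it. A rough sketch of the idea, not the proposed C++ API:

// Start with a value that becomes available asynchronously
Promise.resolve(21)
    // Each then() runs once the previous step's result is ready,
    // sequentially but without blocking anything else
    .then(n => n * 2)
    .then(n => `the answer is ${n}`)
    .then(msg => console.log(msg));  // prints "the answer is 42"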

Why didn’t then() make it? It’s a little hard to say, but it seems that the prevailing opinion is that it needs to be added in the company of other concurrency-related features, possibly including coroutines or Microsoft’s await.

Unified call syntax

From what I’ve read, this one might be the most controversial addition to C++, so it’s no surprise that it was passed over for inclusion in C++17. Right now, there are two ways to call a function in the language. If it’s a free function or some callable object, you write something like f(a, b, c), just like you always have. But member functions are different. With them, the syntax is o.f(a, b, c) for references, o->f(a, b, c) for pointers. But that makes it hard to write generic code that doesn’t care about this distinction.

One option is to extend the member function syntax so that o.f() can fall back on f(o) if the object o doesn’t have a method f. The converse is to let f(o) instead try to call o.f().

The latter form is more familiar to C++ coders. It’s basically how Modern C++’s std::begin and end work. The former, however, is a close match to how languages like Python define methods. Problem is, the two are mutually incompatible, so we have to pick one if we want a unified call syntax.

But do we? The arguments against both proposals make some good points. Either option will make parsing (both by the compiler and in the programmer’s head) much more complex. Argument-dependent lookup is already a difficult problem; this only makes it worse. And the more I think about it, the less I’m sure that we need it.

Reflection

This, on the other hand, would be a godsend. Reflection in Java and C# lets you peer into an object at run-time, dynamically accessing its methods and generally poking around. In C++, that’s pretty much impossible. Thanks to templates, copy elision, proxy classes, binders, and a host of other things, run-time reflection simply cannot be done. That’s unfortunate, but it’s the price we pay for the unrivaled speed and power of a native language.
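
For a sense of what run-time reflection looks like in a language that allows it, here’s a bit of poking around in JavaScript; this is the dynamic variety C++ can’t reasonably offer:

// Run-time reflection: inspect an object's contents without knowing
// its shape ahead of time
let point = {x: 3, y: 4, length() { return Math.hypot(this.x, this.y); }};

console.log(Object.keys(point));    // ["x", "y", "length"]
console.log(typeof point.length);   // "function"
console.log(point["length"]());     // 5, looked up by name at run time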

We could, however, get reflection in the compile-time stage. That’s not beyond the realm of possibility, and it’s far from useless, thanks to template metaprogramming. So a few people have submitted proposals to add compile-time reflection capabilities to C++. None of them made the cut for C++17, though. Granted, they’re still in the early stages, and there are a lot of wrinkles that need ironing out. Well, they’ve got three (or maybe just two) years to do it, so here’s hoping.

And that’s all

C++17 may not be as earth-shattering as C++11 was, but it is a major update to the world’s biggest programming language. (Biggest in sheer size and scope, mind you, not in usage.) And with the new, faster release schedule, it sets the stage for an exciting future. Of course, we’ll have to wait for “C++Next” to see how that holds up, but we’re off to a great start.

Magic and tech: art

Art is another one of those things that makes us human, and in more than one sense: some of the earliest evidence for human habitation comes in the form of artwork such as cave drawings or inscribed shapes on animal bones. As much as I hate to admit it (I failed art class in high school), we are artistic beings.

And art—specifically the visual arts such as painting, sculpture, etc.—has progressed through the ages. It has taken advantage of technological progress. Thus, there’s no reason why it wouldn’t also be affected by the development of magic. Although it may seem odd to consider art and science so intertwined, it’s not really that far out there.

The real way

Art history is practically a restatement of the history of materials. That’s our human nature coming out; almost the first thing we do with a newly developed article of clothing, for instance, is draw on it, or paint it, or dye it. Today, we’ve got fancy synthetics colored in thousands of different hues, but even our ancestors could do some remarkable things. Look at some of those Renaissance paintings if you don’t believe me.

What they had to work with was…not the same as what we use. Many of their paints and dyes were derived from plant or animal products, with a few popular pigments coming from minerals such as ochre. Their instruments were equally primitive. Pencils weren’t invented until comparatively recently, brushes were made from real animal hair (requiring a real animal to provide it), and those fancy feather quills we only use nowadays for weddings and The Price Is Right were once the primary Western tool for writing in ink.

For “3D” artwork, the situation was little better. Today, we have things like CNC mills and techniques to move mountains of metal or marble, but our ancestors made some of the most impressive monuments and structures in the world with little more than hammers and chisels. (In the Americas, they even built pyramids without metal tools. I couldn’t build a pyramid like that in Minecraft!)

Can magic help?

How would magic advance the world of art? Our usual approach of balls of stored elemental energy won’t do much, to be honest, but there is one way they could help, so we’ll get that out of the way first. Lighting has been a problem forever; getting it right is one of the hardest parts of a modern media production. (Supposedly, this is one of the reasons why the next season of Game of Thrones is delayed.) But we’ve already stated that magic can give us better artificial lights. Give them to artists, and you instantly make portraits that much better.

Other improvements are a little less obvious. Many mages will have an easy path to artistry, as the study of magic is as much art as science. It requires observational skills, creativity, and commitment—all the same qualities a good artist needs. And they can use personal spells to aid them. What artist wouldn’t want photographic memory, for example?

The materials will also benefit from the arcane, as we have seen. The earlier advent of chemistry means, among other things, better pigments. Upgraded tools allow for more exquisite and exotic sculpture. With the advanced crucibles and furnaces magic brings, our magical realm might see a boom in the casting of “harder” metals like iron or steel. Magical technology may also bring an increased emphasis on artistic architecture. All in all, the medieval realm will start to look a lot more like the Renaissance, if not more modern.

That’s not even including the entirely different styles of art magic makes possible. Maybe pyrotechnics displays (achieved through fire spells) become popular. Etching via jets of water is a modern invention, but the right system of magic might allow it centuries earlier. Welded sculptures? Why not? You can even posit a “magical” photographic apparatus, moving the whole genre of picture-taking several hundred years into the past. And it’s a small step from recording still images to recording a bunch of still images in succession, then playing them back at full speed, especially if you get a helping hand from a wizard.

Yes, I’m talking about movies. In a society outwardly based on medieval times. It’s a complex problem, but it’s not entirely infeasible. All you really need are two things. First, a projector, which magic can easily provide. (Hint: a magic light and a force-powered motor.) Second, film. That one’s a bit harder, but it only took a few decades for inventors to go from stills to moving pictures. There’s no reason why wizards couldn’t do the same thing, although they may be held up by the need for chemical advances to make a translucent photographic medium.

It’s magic

Magic is already art, but that doesn’t mean it can’t make the lives of artists easier and more interesting. It’s often been asked what a famous artist of the past (e.g., Leonardo da Vinci or Michelangelo) could create if they were given today’s tools. In a magical society, we can come one step closer to answering that question. And that’s with a low-magic setting. Imagine what a sword-and-sorcery mage-artist could accomplish.