Programming in 2016: languages

It’s nearing the end of another year, and that’s the cue for sites the world over to start looking back on the past twelve months. Due to a lack of creative impulse in the code sphere, I’ll be doing the same. So let’s see how the science and art of programming fared in 2016, starting with the advances and changes in programming languages.

JavaScript

JavaScript might not be the biggest language out there, but it’s certainly the one most people have experienced in some form, so it makes sense to start here. And JavaScript has certainly seen some improvements this year. For one thing, it’s got a new version, as the standards guys have moved to the same silly release model that gave us modern Firefox and Chrome. The only things added to ES2016 (what should have been ES7) are an exponent operator and a single new Array method, includes(). Absolutely nothing to get excited over, and the “big” changes, like async, are still in the future…if they’re ever put in at all.
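
If you want to see the whole of ES2016 in action, it fits in a couple of lines:

// The sum total of ES2016, more or less
let squared = 2 ** 10;                  // the exponentiation operator: 1024
let hasThree = [1, 2, 3].includes(3);   // Array.prototype.includes(): true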

On the other hand, the environment for JavaScript is getting better all the time. You can count on full ES5 support now, and you shouldn’t have too much trouble using ES6 features. Maybe you’ll need a polyfill or a transpiler for some of those, but that’s a temporary solution. And the one good thing about Windows 10 is Edge, if only because it’s the end of the “old” Internet Explorer and that browser’s awful support for, well, anything.

Outside the browser, Node keeps chugging along. They’ve had some problems there (such as the left-pad debacle), but they’ve got that whole Node/io.js fork thing worked out. From a political standpoint, it’s not optimal, but the codebase is solid enough. The ecosystem is growing, too, as almost everybody is shipping some sort of Node-and-Chromium construction, from Visual Studio Code to Atom to Vivaldi.

As usual, JavaScript’s future looks much brighter than its past. It’s still straining at its own boundaries, but it’s growing. It’s becoming a better language. The two main platforms (browser and Node) have improved by leaps and bounds, and now they’ve become mature. In the next year, they’ll only get better.

C++

C++ had a big year in 2016, but it was all behind the scenes. The real test will come next year, when the C++17 standard comes out, but we already know what it’s going to have in it. What might that be? More of everything, really. I’ve already written about some of the more interesting bits in a three-part series back in August. (Part 1, Part 2, Part 3)

So 2017 looks like it’ll be fun, but what about now? “Modern” C++ is finally getting widespread support, so that’s good. Even those of you stuck on five-year upgrade cycles should be ready for C++11 by now, and C++14 was only an incremental step beyond it. On the platform side, it’s C++. The platform is your computer, or your phone, or your refrigerator. It’s the same platform it’s always been. That’s why you would use it over any other language.

C#

Microsoft has been doing their schizophrenic love-hate thing with open source for a while now, but 2016 saw the biggest payoff of the “love” side. The open release of .NET Core happened this summer, complete with support for platforms other than Windows. Considering how hard it can be to get MS to even acknowledge such a thing, that’s practically a miracle.

It’s also the beginning of the end for Mono—the third E stands for Extinguish, remember—but that’s not as bad as it sounds. Mono still has its uses, mostly for older .NET code and the things that the MS offering won’t support. Oh, and they also bought Xamarin, whose main reason for existing seemed to be letting you write C# code for mobile devices. But if even some of .NET is open-source, we don’t need that anymore. The community can do it.

C# remains a Microsoft language, though. The MIT license won’t change that. It’ll definitely give non-Windows developers some peace of mind, but you always have to remember who holds the cards here.

Java

Java’s a funny thing. On the one hand, it’s not that bad a platform. On the other, it’s owned by Oracle. They’re still playing the part of Sisyphus, rolling a ball of lawsuits up the hill, only for the courts to send them right back down. But unlike in mythology, there’s the possibility that they might win.

If they do, it’s bad news for everyone, but most of all for Java developers. Who would want to use a language owned by a sue-happy corporation? (And now the C# fanboys can say, “There’s another thing Java ripped off!” Well, okay, at least MS hasn’t sued anybody using C#. Yet.)

But if you can put that out of your mind, Java’s not bad. It’s awful from a security standpoint, and the language sucks, and—well, you get the idea. I still believe it has some upside, though. Scala—or Clojure, if that’s the way you roll—makes using the JVM bearable. Android requires Java coding, even if they desperately want to call it something else.

So Java’s 2016 has been mostly dismal, but look on the bright side: Java 9 is coming soon. And one proposal on the table is “ahead-of-time compilation”. For Java. Just think about that for one second. Maybe I’m misunderstanding, but wasn’t that the exact thing Java was designed to prevent?

The rest

Other languages have had an interesting year, but it’s too much work to list them all. Besides, I haven’t used that many of them. But here are a few of the highlights:

  • Rust is growing. It’s almost to the point of being usable! (The rewrite of Firefox into Rust will come before that point, however.)
  • Go is still bad, but at least Google isn’t trying to push it (or Dart) on Android devs anymore. I’ll call that an improvement.
  • Python 2 really seems to be disappearing. Python 3 has had some minor releases, but nothing amazing. That’s a good thing, but its glaring flaws haven’t gone away.
  • PHP remains the bad joke of the programming world, but maybe PHP 7 can fix that. Because PHP 6 did so well.
  • Perl 6…exists! Technically, that happened last Christmas, but it’s close enough. You can’t complain about a once-in-a-lifetime occurrence.

We’ll round out the year next week by looking at another area of development: games.

On colonialism

The process of colonization ran for untold millennia before it came to a halt in the past few generations, but colonialism is uniquely tied to the period beginning around 1500, with the first Spanish incursions into the Americas, and ending with the independence movements that dismantled the British, French, German, and other colonial empires in the decades around the two World Wars. That’s over 400 years’ worth of colonial sensibilities, a fairly large swath of “modern” history affected by the building and upkeep of colonies.

What can we use from that time to build a good story? It’s a little beyond the traditional view of fantasy, but this period has become a significant part of the post-medieval fantasy subgenre. Paul Kearney’s Monarchies of God series and Django Wexler’s The Thousand Names are two excellent works I’ve read in recent years that fit into the “colonial era”; the former is a voyage of exploration to a new world, the latter a native uprising in a faraway imperial holding. These neatly bracket the ends of the era, in fact: colonialism begins with the first attempt at a colony, and it ends when the native—or nativized—population revolts against its distant masters.

Making a colony

The colony itself, obviously, is the central focus of a colonial story. At the beginning, it’s very much a tale of people struggling to tame a hostile environment. The true stories of European settlers coming to America are riveting. They’re full of doubt and faith, strife both with the natives and with each other, desperation and perseverance. Australia, sub-Saharan Africa, and southern Asia all have equally gripping accounts of the trials and tribulations the supposedly advanced Europeans had to endure to make those places their own. Even in the realm of science fiction, one can imagine a story about the first colonists on the moon, Mars, or a planet in a distant solar system falling along the same lines.

After the initial struggles, the colony is not out of the woods by any means. The colonists will have to adapt to their new location, to the sheer distance from their homeland. In 18th-century Australia, for instance, colonists might as well have been on another world, because few of them could realistically expect ever to return to England. The churning waters of the Atlantic meant that the Americas were little better off. India and Indochina were surrounded by hostility, and the antagonism of the natives of Africa is the stuff of legend. Add to that the unfamiliar terrain, the entirely new set of flora and fauna, even the differences in climate—a colony today wouldn’t be a sure thing, and these people managed it as much as 500 years ago!

Eventually, the early turbulence settles, probably after a generation or two. Once the original settlers have died off, you’re left with a population that is truly “native”. That’s where the real fun of colonialism comes in. The home government (or corporation, or whatever) might want to send more colonists, and this will cause a clash between the newcomers and those who have grown up in the colony. Or the colony’s patrons back home might want something to show for their initial outlay; some colonies were established purely for profit, especially in the Far East.

It’s entirely likely for these tensions between the colony’s native inhabitants and their motherland to grow into rebellion or open revolt. It took England’s American colonies a century and a half to reach that point, longer for India and South Africa, but it did happen. Of course, that coincided with an increased liberalism in political thought, part of the Enlightenment that ran through the entire Western world. Without the philosophies of the late 18th century, the cause of American independence (and the Mexican, African, Indian, and others that followed in its wake) might have been delayed by decades.

Our land

There’s a single wild card that makes colonizing into colonialism: the natives. Whether we’re talking about Native Americans, Australian Aborigines, or any other preexisting population, they’ll have something to say about the foreigners landing on their shores, claiming their lands. In our history, we know how that turned out, but it wasn’t always a sure thing.

Australia had a relatively sparse population anyway, but its indigenous inhabitants tended to live in the same general areas that the colonists wanted to take for themselves. They’re the best lands on an otherwise marginal continent, so that’s not surprising. The Americas, on the other hand, may have been peopled to a much larger extent. Upper estimates put the total Native American population in 1491 as high as 100 million. Half that sounds more reasonable, but that’s still a lot more natives than you might think from watching westerns.

We know what happened to most of them, though: they died. Disease and what might be called an early example of ethnic cleansing did them in. The same things tended to have the same effect—devastation—on all the other native populations of the world, but the Americas get top billing, thanks to a combination of factors. First, the US has a lot more global power than Australia or South Africa. Second, the colonization started earlier, so the effects of this interchange of genes, ideas, and disease vectors weren’t understood as well as they would be by the 1700s. And third, the violent persecution of the indigenous peoples didn’t end with colonialism; anti-Indian sentiment ran high for the first century of the United States’ existence. Yes, apartheid lasted longer as an institution, but it was more political than militant.

But enough about that. Let’s get back to colonialism. Anywhere there’s a society, even a tribal one, in place, there’s bound to be friction. The Europeans won everywhere, thanks both to disease and to their superior technology. Once the illnesses had run their course, and the surviving native remnants were immune or simply too remote to become infected, the guns and horses utterly outclassed anything the natives could bring to bear.

It wasn’t always constant warfare and subjugation, though. Many colonies wanted to work with the natives. The reasons for cooperation are obvious: here’s a culture that’s already entrenched. They know the land in a way you never will, and all they want are a few of your high-quality guns or blankets or iron pots. In exchange, they teach you how to live better. And some of the colonists badly needed such lessons. Religious dissidents and petty criminals make poor settlers in the best circumstances, and colonies were far from that. It’s not surprising, then, that so many histories of colonization start with a few years of working in concert with the natives.

The colonial populations always seem to grow faster than the indigenous ones, because they aren’t susceptible to the diseases they themselves brought, and because they’re often supplemented by a steady influx of new colonists from the homeland. Thus, it’s almost natural that the settlers start taking more and more land, squeezing out the natives. That’s when the squabbles start. Maybe it begins as a raid here or an assassination there. Eventually, it can become something far greater, as in the cases of King Philip’s War in America and the Zulu wars in Africa. (Sometimes, as with the French and Indian War, it can be helped along by outside forces.)

If all-out war happens, it’s rarely to the sole detriment of the colony. The natives can inflict some serious wounds—the Zulus certainly did—but a colonial nation necessarily has a sizable military backing. It’s often just a matter of time before the inevitable attrition takes its toll.

On the other hand, there’s another way a native population can be effectively destroyed by colonists: marriage. Intermingling between Europeans and Native Americans began with the first voyage of Columbus, if not the Viking landings half a millennium earlier. It’s here where those cultural differences can come to the fore. Men taking native wives—even if by force—will have a moderating effect on the persecution of those natives. Some might even abandon their own societies to join those of their spouses, but far more will introduce half-native children into the larger colonial mix. This plays havoc with the casual racism of the period, creating systems of delineation like those in Mexico, but also further blurring the line between the “good” guys and the “bad”.

In the story

For a story set in the age of colonialism, you’ve got plenty of options. Your story could be the founding of a colony, from the first landfall (if not before, looking at the original cause of the migration) to the pivotal event that ensures its survival. With natives added in, that gives you ample opportunity for action, intrigue, and first-contact diplomacy. You can delve into the indigenous culture, possibly through the eyes of a hostage or envoy, or you might turn things around and give the POV of the natives defending their homes from invaders.

The second phase of colonialism, after the initial generation is dead and gone, might be considered the “boom” phase. New settlers are coming in, while existing ones are expanding their families at a high rate. Food and land are in abundance. Here, the tensions between foreign and indigenous are still in play, but then you have the growing class of born-and-raised native sons and daughters. What are their stories? Where do they stand? They may resent the “actual” natives for causing trouble, but equally despise the motherland they’ve never known, which sees their home only as a trading post, a military base, and a source of cheap labor.

If you’re following the American model, it’s not too far to go for the third phase: rebellion. If it’s successful, this is where the colony ends, but colonialism may remain for some time. Most likely, it’ll finally die out with the original rebels or their children, but animosity between “native” and “outsider” won’t go away so easily, even as those labels become less and less meaningful. It may even get worse.

In the end, though, it’s your story. Following the historical trail of cause and effect, however, is a good start towards realism. We know this outcome can happen, because it did. People, even people set in a different world, tend to have the same motivations, the same influences. Barring unforeseen circumstances—magic or aliens, for example—it’s hard to imagine colonialism turning out too differently. It’s human nature.

Let’s make a language, part 21b: Occupations and work (Isian)

Isian, as you’ll recall, is a language whose speakers live in a remote part of our world. They’ve been cut off from modern civilization for a couple of centuries, but they’ve recently been rediscovered. Because of this, they’ve got a lot of native vocabulary to describe work, but some newer concepts require compounds.

In general, work is lodunas, an abstract noun derived from lodu “to work”. But a specific job, career, or occupation goes by bor instead. Most jobs are intended to create (tinte), but some instead destroy (dika), and a select few repair (efri) what is broken.

Workers (lodumi, plural of lodum) can perform many actions, based on their jobs. Some might teach (reshone), others build (oste). Makers of food include bakers (ogami, from oga “to bake”) and simple cooks (pirimi; piri “to cook”). These aren’t the only “domestic” occupations, either. Many Isian speakers, for their jobs, must clean (nolmi), wash (hishi) clothes, sew (seshe), or simply act as servants (dulcami; dulca “to serve”). More important for the town are craftsmen such as totasami (carpenters, literally “wood men”).

Isian is the language of a society that is still very agrarian. Thus, many of its speakers work as farmers (sepami) or just as assistants on a ban “farm”. In cities, however, most working men are instead simple lodumi, day laborers. Women who work are more likely to be reshonemi “teachers” or seshemi—in this context, a better translation might be “seamstresses”.

Finally, the places where people might work can be just as interesting as what they do. Well-to-do Isian speakers might run their own seb “shop” or chedom “inn”. Cooks can work at a restaurant (hamasim, literally “eating house”), though some isimi (“bars” or “pubs”) also serve food. And it remains common for most of the town to gather one day a week at the rishan “market”.

Word List

General terms
  • job: bor
  • poor: umar
  • rich: irdes
  • to borrow: mante
  • to create: tinte
  • to destroy: dika
  • to lend: hente
  • to repair: efri
  • to use: je
  • to work: lodu
  • work: lodunas
Places of work
  • bank: mantalar (from mante + talar)
  • bar (pub): isim
  • farm: ban
  • inn: chedom
  • market: rishan
  • mill: mur
  • restaurant: hamasim (hama “eat” + isim)
  • school: teju
  • shop: seb
Work actions
  • to bake: oga
  • to build: oste
  • to clean: nolmi
  • to cook: piri
  • to dig: daco
  • to drive: foro
  • to fold: efe
  • to grind: harca
  • to guard: holte
  • to hunt: ostani
  • to pour: lu
  • to press: hapa
  • to serve: dulca
  • to sew: seshe
  • to shoot: chaco
  • to sweep: wesa
  • to teach: reshone
  • to tie: ane
  • to wash: hishi
  • to weave: sumbe
Occupations
  • baker: ogam
  • carpenter: totasam (totac “wood” + sam “man”)
  • cooking: pirinas
  • driver: forom
  • farmer: sepam
  • hunter: ostanim
  • hunting: ostanas (ostani + -nas)
  • janitor: wesam or nolmim
  • laborer: lodum
  • miller: mursam (mur + sam “man”)
  • servant: dulcam
  • tailor: seshem
  • teacher: reshonem
  • teaching: reshonas (reshone + -nas)

Wrapping text by code

Many games need to display significant amounts of text on the screen. Whether dialogue, signs, backstory “data log” entries, or lists of quests and objectives, text is absolutely a necessity. And we’re not talking about just a word here and there. No, this is serious text, entire sentences at least. More specifically, I want to look at the long strings of text that are commonly shown in dialog boxes or other GUI/HUD elements, not something that might be baked into a texture.

With single words, it’s not hard at all: you just throw the word on the screen at the right position. What game engine doesn’t let you do that? Even my failed attempt at a simple 2D engine a few years ago could manage that much!

It’s when you need to display longer snippets that things get hairy. Your screen is only so wide, so, like any word processor, you have to know when to wrap the text. Now, many engines do this for you, but you may not be using one of those, or you may be writing your own. So this post is designed to give you an idea of what lies ahead, and to provide a starting point for your own efforts.

Ground rules

First, let’s get this out of the way: I’m using a bit of pseudocode here. Actually, it’s JavaScript, but with a lot of references left undefined. Hopefully, you can convert that into your language and platform of choice.

Next, I’m going to assume you’re working in English or another language that uses an alphabetic script. Something like Chinese or Japanese will have its own rules regarding how long text is broken up, and I don’t entirely understand all of those. The same goes for Arabic and other RTL scripts, but they’re much closer to American and European usage, so some of the concepts described here might carry over.

We’ll also ignore hyphenation in this post. It’s a hideously complex topic, first of all, and nobody can quite agree on which words should (or even can) be hyphenated at the end of a line. Besides, what was the last game you saw that used printed-style hyphenation? In the same vein, we’ll be ignoring non-breaking spaces, dashes, and the like.

The last assumption I’m going to make is that we’re working with Unicode strings. That shouldn’t matter too much, though. We won’t be delving into individual bytes—if you do that with Unicode, you’ve gone wrong somewhere, or you’re working at a far lower level than this post.

Words

On the screen, our string will be divided into lines. Thus, our job is clear: take a string of text, divide it into lines so that no line is longer than the width of our dialog box. Simple, right? So let’s see how we can go about that.

English text is divided first into sentences. These sentences are then divided into words, sometimes with added punctuation. Each word is separated from its neighbors by blank space. It’s that space that lets us wrap our text.

For a first approximation, let’s take the example of fixed-width text. This was common in older games, and it’s still used for a retro feel. (And programmers demand it for editing code.) It’s “fixed” width because each letter, each bit of punctuation, and each space will have the same width. That width can be different from one font to another, but every character in a fixed-width string, even the blank ones, will be as wide as any other.

As we won’t be worrying about hyphenation, every line will have a whole number of words. Thus, our first task is to get those words out of our string:

// Split a string of text into a list of words
function splitWords(str) {
    // Splitting on runs of whitespace also copes with newlines and doubled spaces
    return str.trim().split(/\s+/);
}

Basically, we’ve split our string up into words, and removed the spaces in the process. Don’t worry about that. We’ll put them back in a moment. First, we need to go on a little digression.

Building with boxes

Do you remember magnetic poetry? Those little kits you could buy at the bookstore (back when bookstores were a thing)? They came with a set of pre-printed words and a bunch of magnetic holders. Some of them even let you cut your own magnets, then slide the bits of laminated paper containing words into them. However you did it, you could then assemble the words into a short, tweet-level string that you could stick to your fridge. Longer words required longer magnets, while something like “a” or “I” took up almost no space at all. If you used one of the magnetic poetry kits that came with a little board, you had to do a bit of shuffling—and maybe some editing—to get the words to fit on a line.

That’s what we’re doing here. We’re taking the words of our text, and we’re assembling them on the screen in exactly the same way. Like the board, the screen is only so wide, so it can accommodate only so much. Any more, and we have to move to the next line. So wrapping text can be thought of like this. (It’s also the beginnings of computerized typesetting. We’re not really doing anything different from TeX; it just does the job a whole lot better.)

So we need to pack our words onto lines of no more than a given width. One way to do that is incrementally, by adding words (and spaces between them) until we run out of space, then moving on to the next line.

// Build lines of fixed-width text from a string
function buildTextLines(str, screenWidth, fontWidth) {
    let allWords = splitWords(str);
    let lines = [];

    let currentLine = "";
    let currentLineWidth = 0;
    let currentWord = "";

    while (allWords.length > 0) {
        currentWord = allWords.shift();

        let currentWordWidth = currentWord.length * fontWidth;

        if (currentLineWidth + currentWordWidth > screenWidth) {
            // Too long; wrap text here
            // Save the line we just finished, and start a new one
            lines.push(currentLine);
            currentLine = "";
            currentLineWidth = 0;
        }

        currentLine += currentWord;
        currentLineWidth += currentWordWidth;

        // Try to insert a space, unless we're at the edge of the box
        // (width of space is the same as anything else)
        if (!(currentLineWidth + fontWidth > screenWidth)) {
            currentLine += " ";
            currentLineWidth += fontWidth;
        }
    }

    // Don't forget the last line; it never triggers the wrap check above
    if (currentLine.length > 0) {
        lines.push(currentLine);
    }

    return lines;
}

For fixed width, that’s really all you have to do. You can still treat the text as a sequence of characters that you’re transforming into lines full of space-delimited words. As a bonus, punctuation carries over effortlessly.
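
As a quick illustration, a call might look like the following. The string, box width, and font width here are made up, and drawText stands in for whatever your engine uses to put a line of text on the screen:

// Hypothetical usage: an 8-pixel-wide font in a 320-pixel-wide box
let lines = buildTextLines("The quick brown fox jumps over the lazy dog", 320, 8);

// Each entry is a string of space-separated words,
// at most 40 characters (320 / 8) wide
for (let line of lines) {
    drawText(line);
}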

Varying the width

The real fun begins when you add in variable width. In printed text, an “m” is bigger than an “n”, which is wider than an “i”. Spaces tend to be fairly narrow, but they can also stretch. That’s how justified text works: extra bits of space are added here and there, where you likely wouldn’t even notice them, so that the far ends of the lines don’t look “ragged”. (That, by the way, is a technical term.) Two words might have an extra pixel or two of space, just to round out the line.

Doing that with simple strings is impossible, because I don’t know of any programming language whose basic string type accounts for proportional fonts. (Well, TeX does, because that’s its job. That’s beside the point.) So we have to leave the world of strings and move onward. To display variable-width text, we have to start looking at baking it into screen images.

Since every engine and renderer is different, it’s a lot harder to give a code example for this one. Instead, I’ll lay out the steps you’ll need to take to make this “box” model work. (Note that this phrase wasn’t chosen by accident. HTML layout using CSS, as in a browser, uses a similar box-based display technique.)

  1. Using whatever capabilities for font metrics you have, find the minimum width of a space. This will probably be the width of the space character (0x20).

  2. Using this as a guide, break the text into lines—again based on the font metrics—but keep the words for each line in a list. Don’t combine them into a string just yet. Or at all, for that matter.

  3. For each line, calculate how much “stretch” space you’ll need to fill out the width. This will be the extra space left over after adding each word and the minimum-width spaces between them.

  4. Divide the stretch space equally among the inter-word spaces. This is aesthetically pleasing, and it ensures that you’ll have fully justified text.

  5. Render each word into the sprite or object you’re using as a container, putting these stretched spaces between them. You can do this a line at a time or all at once, but the first option may work better for text that can be scrolled.

In effect, this is a programmatic version of the “magnetic poetry” concept, and it’s not too much different from how browsers and typesetting engines actually work. Of course, they are much more sophisticated. So are many game engines. Those designed and intended for displaying text will have much of this work done for you. If you like reading code, dive into something like RenPy.
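
To make that a little more concrete, here’s a minimal sketch of steps 1 through 4, in the same JavaScript-flavored pseudocode as the earlier examples. It assumes two things your engine would have to supply: a measureText function that returns a word’s pixel width, and spaceWidth, the minimum width of a space from step 1. Step 5, the actual rendering, is left to your renderer.

// Pack words into justified lines for a box of the given pixel width
function layoutJustifiedLines(words, boxWidth, measureText, spaceWidth) {
    let lines = [];
    let current = [];        // words on the line being built
    let currentWidth = 0;    // their total width, plus minimum spaces

    for (let word of words) {
        let wordWidth = measureText(word);
        // Adding this word costs its width, plus a minimum space
        // if it's not the first word on the line
        let needed = current.length > 0 ? spaceWidth + wordWidth : wordWidth;

        if (current.length > 0 && currentWidth + needed > boxWidth) {
            // Line is full: spread the leftover width across the gaps
            let gaps = current.length - 1;
            let stretch = gaps > 0 ? (boxWidth - currentWidth) / gaps : 0;
            lines.push({ words: current, gap: spaceWidth + stretch });

            current = [];
            currentWidth = 0;
            needed = wordWidth;
        }

        current.push(word);
        currentWidth += needed;
    }

    // The last line is traditionally left ragged, so no stretch here
    if (current.length > 0) {
        lines.push({ words: current, gap: spaceWidth });
    }

    return lines;
}

Each entry in the result gives the renderer a list of words and the gap (in pixels) to leave between them; drawing a line is then just a matter of walking across it, word by word.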

I hope this brief look whets your appetite. Text-based or text-heavy games require a bit more work for both the developer and the player, but they can be rewarding on a level a fully voiced game could never reach. They are, in a sense, like reading a story, but with a bit of extra visual aid. And games are primarily a visual medium.

Magic and tech: economy

One of the biggest topics of the last decade has been the economy. We’re finally climbing out of the hole the banks dug for us in 2008, and it’s been long enough that most people have taken notice. Employment, income, wages, and benefits are important. So are less obvious subjects like inflation, debt and credit, or mortgages. Even esoteric phrases like “quantitative easing” make the news.

The economy isn’t a modern invention, however. It’s always been there, mostly in the background. From the first trade of goods, from the first hiring of another person to perform a service, the economy has never truly gone away. If anything, it’s only becoming bigger, both in terms of absolute wealth—the average American is “richer” than any medieval king, by some measures, and today’s billionaires would make even Croesus jealous—and sheer scope.

How would magic affect this integral part of our civilization? The answer depends on the boundaries we set for that magic, as we shall see.

Scarcity

Our economy, whether past, present, or foreseeable future, is based on the concept of scarcity. For the vast majority of human history, it was only possible to have one of something. One specific piece of gold, one individual horse, one particular acre of land, or anything else you can think of. You could have more than one “instance” of each type—a man could own twenty horses, for example—but each individual “thing” was unique. (Today, we can easily spot the friction caused when this notion of scarcity meets the reality of lossless digital copying, the lashing out by those who depend on that scarcity and see it slipping away.)

Some of those things were rarer than others. Gold isn’t very common; gems can be rarer still. Common goods were relatively cheap, while the rare stuff tended to be expensive. And that remains true today. Look at any “limited edition”. They might have nothing more than a little gold-colored trim or an extra logo, but they’ll command double the price, if not more.

Supply and demand

All that only applies to something people want. It’s a natural tendency for rare, desirable goods to climb in value, while those things that become increasingly common tend to also become increasingly worthless. This is the basis of supply and demand. If there’s more of something than is needed, then prices go down; if there’s a shortage relative to demand, then they go up.

Although it’s a fairly modern statement, the concept is a truism throughout history. It’s not just a fundamental idea of capitalism. It’s more a natural function of a “scarcity economy”. And you can apply it to just about anything, assuming all else is equal. A shortage of laborers (say, due to a plague) pushes wages higher, because demand outstrips supply. That’s one of the ultimate killers of feudalism in the Middle Ages, in fact. Its converse—a glut of supply—is the reason why gas prices have been so low in America the past year or so.

Interconnected

Another thing you have to understand about the economy is that it’s all connected. Today, that’s true more than ever; it’s the reason we can talk about globalism, whether we consider it a bringer of utopia or the cause of all the world’s ills. For less advanced societies, the connectivity merely shrinks in scale. There was, for example, no economic connection between Europe and the Americas until the fifteenth century, apart from whatever the Vikings were up to circa 1000. The Black Death had no effect on the economy of the Inca, nor did the collapse of the great Mayan cities cause a recession in Rome. Similarly, Australia was mostly cut off from the “global” economy until shortly before 1800.

Everything else, though, was intertwined. The Silk Road connected Europe and Asia. Arab traders visited Africa for centuries before the Portuguese showed up. Constantinople, later Istanbul, stayed alive because of its position as an economic hub. And like the “contagious” recessions of modern times, one bad event in an important place could reverberate through the known world. A bad crop, a blizzard blocking overland passes, protracted warfare…anything happening somewhere would be felt elsewhere. This was the case despite most people living a very localized lifestyle.

Making magic

In role-playing games, whether video games or the pen-and-paper type, some players make it their mission to break the economy. They find some loophole, such as an easily creatable magic item that sells for far more than its component cost, and they exploit that to make themselves filthy rich. It happens in real life, too, but government tends to be better at regulating such matters than any GM. (The connection between these two acts might make for an interesting study, come to think of it.)

We’re trying for something more general, though, so we don’t have to worry about something as fine-grained as the price of goods. Instead, we can look at the big picture of how an economy can function in the presence of magic. As it turns out, that is very dependent on the type of magic you have at your disposal.

First, let’s assume for a moment that wizards can create things out of thin air. Also, let’s say that it’s not too difficult to do, and it doesn’t require much in the way of training or raw materials. Five minutes of chanting and meditating, and voila! A sword falls at your feet! Something more complex might take more time, and living things can’t be created at all, but crafted goods are almost as easy as a Star Trek replicator.

Well, that destroys any economy based on scarcity. It’s the same problem media companies have with computers: if something can be copied ad infinitum, with no loss in quality, then its unit value quickly drops to zero. Replicating or creating magic, if it’s reasonably widespread, would be like giving everyone a free 3D printer, a full library of shape files, and an unlimited supply of feedstock. Except it’d be even better than that. Need a new sword/axe/carriage/house? Call up the local mage. No materials needed; you’re only paying for his time, the same as what would happen to books, music, and movies without licensing fees and DRM.

So that’s definitely a “broken” economy. Even a single user of such magic breaks things, as he can simply clone the most expensive or valuable items he knows, selling them whenever he needs the cash. Sure, their value will eventually start to drop—supply and demand in action—but he’ll be set for life long before he gets to that point.

It’s the economy, stupid

For our magical kingdom, let’s look at something more low-key. It doesn’t have creation magic. Instead, we have at our disposal a large amount of “automating” magic, as we’ve seen in previous parts. What effect would that have on the economy? Probably the same effect increasing automation has in our real world.

Until very recently, most work was done by hand, occasionally with help from machines that were powered by people, animals, or natural forces. The Industrial Revolution, though, changed all that. Now, thanks to the power of steam (and, later, electricity), machines could do more and more of the work, lightening the load for the actual workers. Fast-forward to today, where some studies claim as many as 40% of jobs can be done entirely automatically. (For labor, we’re actually getting fairly close to “post-scarcity” in many fields, and you can see the strain that’s beginning to cause.)

Magical force and power can easily replace steam and electricity in the above paragraph. The end result won’t change. Thus, as magic becomes more and more important in our fictional realm, its effects stretch to more and more areas of the economy. As discussed in the post about power, this is transforming the workforce. Unskilled labor is less necessary, which means it has a lower demand. Lower demand, without a corresponding decrease in supply, results in lower wages, fewer working hours, fewer jobs overall. We know how that turns out. The whole sordid story can be found in all sorts of novels set in Victorian England or Reconstruction America—Charles Dickens is a good start. Or you can look at modern examples like Detroit or Flint, Michigan, or any steel town of the Midwest.

There is an upside, though. After this initial shock, the economy will adjust. We see that today, as those displaced in their jobs by robots have begun branching out into new careers. Thus, it’s easy to imagine a magical society embracing the “gig economy” we’re seeing in Silicon Valley and other upscale regions, except they’d do it far earlier. You could even posit a planned socialist economy, if the magic works out.

But mages are human, too. They’re subject to need and greed the same as the rest of us. So they might instead become the billionaires of the world. Imagine, for instance, wizards as robber barons, hoarding their techno-magic to use as a lever to extract concessions from their rivals. Or they could simply sell their secrets to the highest bidder, creating something not much different from modern capitalism. If magic has a distinct military bent, then they could become the equivalent of defense contractors. The possibilities are endless. All you have to do is follow the chain of cause and effect.

The economy is huge. It’s probably beyond a single author to create something entirely realistic. But making something that passes the sniff test isn’t that hard. All you have to do is think about why things are the way they are, and how they would change based on the parameters you set. Oh, and you might want to find one of those munchkin-type players who likes to find loopholes; for the economic side, they’re more useful than any editor.

Let’s make a language, part 21a: Occupations and work (Intro)

We all have a job to do, whether it’s an actual career or simply the odd jobs we do around the house. Work is as old as humanity, so it’s not surprising that it is a very important part of a language’s vocabulary. For a conlang, it should be no different.

Working on work

Work is, at its core, about action, about doing things. Thus, many of the words regarding work will be verbs, and many others will likely be derived from those verbs in some way. To be sure, there will be nouns and adjectives that aren’t, but derivation gives us a powerful tool to create new words, and work is a great example of a field where derivation really shines.

Think about “working” verbs. We can cook and clean and teach, among hundreds of others. And when we do those things, in English, we become cooks, cleaners, and teachers. Two out of the three of these use the agent derivation -er, and that pattern is repeated throughout the language: agents are nouns that perform an action, so agents of working verbs naturally represent the “workers”. (Cook is an exception, but not much of one. Ever heard of a cooker? That’s not what you call the occupation in English, but another language could do things differently.) If your conlang has an agent marker, then creating occupational nouns is probably going to be easy and regular. Of course, there can be exceptions, especially once loanwords come into play, e.g., chef.

Another easy derivation takes us to abstract nouns representing the occupation itself. In English, this comes in the gerund form: “working”, “teaching”, etc. Other languages might have their own special cases, though. Note that this is not the same as the adjective form seen in phrases like “a working man”. That one is a different, yet equally simple, derivation; a language can use the same pattern for both, or it can separate them.

If your language has a gender distinction in nouns, then things might become a little more complicated. English has a few cases like these (actor/actress), but political correctness is starting to erase some of these distinctions. Romance languages, by contrast, have a larger, more stable, set of gendered agents. Now, a conlang with gender doesn’t have to have separate occupational terms for masculine and feminine, but it’s an obvious step that many natural languages have taken.

Which work is which?

The breadth of work words is another one of those cultural things that you have to take into account. A primitive society set in Bronze Age Europe isn’t going to have words for “computer” (originally “one who computes”, a word for a person) or “investor”, because such concepts won’t exist. Similarly, a lost Amazon tribe might not have native words for “ironworking” and “blacksmith”, as those would be foreign concepts.

As with plants and animals, “foreign” work will often be spoken of in foreign terms, i.e., loanwords. This isn’t always the case, however. It’s entirely plausible that a language’s speakers will invent new terms for these new jobs. If they’re smart enough, they may even try to translate the meaning of the foreign root. Even if they do borrow the root, they may not import the agent marker with it. Instead, the borrowing can create a whole new paradigm: work verb, occupational agent, abstract occupational noun, and so on.

Irregularity

For naturalistic conlangs, regularity is anathema. With the field of work, there’s ample opportunity to introduce irregularities. The agent derivation doesn’t always have to work, for example—we’ve already seen English cook. Old verbs might be lost, leaving nouns (like carpenter) that don’t seem to fit anymore. Different derivations can be used on different roots, too; we speak of carpentry but also woodworking. And then there’s the oddity of English employee, one of the few instances where the language has a patient derivation to go along with the agent. (The full paradigm of “employ” shows exactly what we’re talking about, in fact. You’ve got the basic agent “employer”, the not-quite-irregular patient noun “employee”, and the abstract “employment”, which doesn’t use the usual gerund form. Irregularity all around.)

Next up

In the next two posts, we’ll get a look at some Isian and Ardari working words. Over 50 of them, if you can believe that. Then, the future becomes murkier. We’re nearing the end of another year, so stay tuned for a special announcement regarding upcoming parts of the series.

RPG Town: layout

I love the retro look in games, that 16-bit pixel-art style that has, thankfully, become common once again in indie titles. A Link to the Past is my favorite Zelda game, and I’ve been playing Stardew Valley far too long over the past few months. Maybe it’s the nostalgia talking, but I truly enjoy this style.

One of the main draws of retro graphics is the way an area can be laid out using a tilemap. There are far too many tilemap tutorials out there, and this won’t be one of them. Instead, I’d just like to share a little thing I started recently. I call it RPG Town.

The town

RPG Town is a little village that would be at home in any RPG, action-adventure, or roguelike game that uses a 16-bit graphical style. It’s definitely a pixel-art place, and I’m using a free sprite set I found linked on the excellent OpenGameArt.org to visualize it.

The backstory, such as it is, is deliberately vague. RPG Town can be a home base for a player character, or a little stop along the way, or just anything. It’s home to a couple dozen people, and it’s got that typical over-urbanism common to video game settlements. No real farms here, but you can imagine they’re right off the map. It’s also a coastal place, situated on the mouth of a small river. Thus, fishing is likely a big deal there. There’s an offshore island not too far away; it’s claimed by RPG Town, but the people don’t really use it for much. (Maybe it holds the ruins of a lighthouse, or something more sinister.) Other than that, it’s just your average, quaint little town, set amid fertile plains and dense woodlands, ready for an adventure to come its way.

The making of

Really, RPG Town is an excuse for me to play with Tiled. I’ve never used it for much, but I want to. It’s a wonderful program, great for creating tilemaps of any kind, but especially those for retro-style games. Plus, it’s free, and it runs everywhere, so you don’t have to worry about any of that. Get it. It’s more than worth your time.

Anyway, after starting a new map and setting up the tile set I’m using, my first step was to configure the terrain. The Tiled manual talks about doing this; it’s a huge help, if your tile set is made for it.

Once I had the preliminaries out of the way, it was time to start making a map. So that’s what I did. First, I filled my whole canvas (128×64, by the way, with the tiles 16 pixels square) with a plain grass tile, then “carved out” the water areas. I had already decided RPG Town’s location, so all I had to do was draw the water. Tiled took care of the tile transitions seamlessly, and I ended up with this:

[Image: rpgtown-terrain]

Tilemaps are best built in layers, from the base (the terrain) up to the more complex parts like houses or signs. Following that philosophy, next come the roads. RPG Town isn’t a big port, but it’s a port, so it’s only natural that it would be a stopping point for a road or two. Those I made as wide cobblestone paths, meeting in the middle in a tiny town square. There are also a few branching side streets, and some stuff that won’t quite make sense yet. Oh, and I added a spot of sandy beach, because who doesn’t like that?

[Image: rpgtown-roads]

Now that I know where all the roads are running to, it’s time to give the people of RPG Town some places to live and work. I’m not ready to actually plop down buildings yet, so I’ve limited myself at this early stage to simply staking them out. On this last picture, you can see the outlines of where buildings would go. Later on, I’ll build them, but this is enough for now.

[Image: rpgtown-layout]

To be continued?

I’ll keep playing with RPG Town in my spare time. If I do anything interesting with it, I’ll post about it. One day, there might even be a few virtual people living there.

Building aliens: sentience and sapience

Creating aliens is fun and all, but why do we do it? Mostly, it’s because those aliens are going to have some role in our stories. And what kind of organism plays the biggest role? For most, that would be the intelligent kind.

Sentient aliens are the ultimate goal, thanks to a lifetime of science fiction. Yes, the discovery tomorrow of indisputably alien bacteria on Mars would change the entire world, but we’re all waiting for the Vulcans, the Mandalorians, the asari, or whatever our favorite almost-human race might be.

Mind over matter

It’s hard to say how plausible sentience is. We’ve only got one example of a fully intelligent species: us. Quite a few animals, however, show sophisticated behavior, including dolphins, chimps, octopuses, and so on. Some are so intelligent (relative to the “average” member of the animal kingdom) that authors will draw a line between sentience (in the sense of feeling and experiencing sensation) and sapience (the higher intelligence that humans alone possess). For aliens, where even defining intelligence might be nearly impossible at the start, we’ll keep the two concepts merged.

A sentient alien species remains a member of its home biosphere. We’ll always be evolved from our primate ancestors, no matter what the future holds. It’ll be the same for them. Their species will have its own evolutionary history, with all that entails. (Hint: I’ve spent quite a few posts rambling on about exactly that.) The outcome, however, seems the same: an intelligent, tool-using, society-forming, environment-altering race.

We don’t know much about how higher sentience comes about. We don’t even know what it means to have consciousness! Let’s ignore that minor quibble, though, and toss out some ideas. Clearly, intelligence requires a brain. Even plants have defensive mechanisms that activate when they’re damaged, but it takes true brainpower to understand what happens when inflicting pain upon another. Sentience, in this case, can be equated with the powers of reasoning, or an ability to follow logical deduction. (Although that opens the door to claiming that half of humanity is not sentient. Reading some Internet comments, I’m not sure I would disagree.)

Other factors go into making an intelligent alien race, too. Fortunately, most of them default to being slightly altered expressions of human nature. Sentient aliens usually speak, for example, except in some of the more “out there” fiction. Even in works like Solaris, however, they still communicate, though maybe not always through direct speech. Now, we know language can evolve—I’m writing in one of them, aren’t I?—but it was long thought that humans were the unique bearers of the trait. Sure, we had things like birdsong and mimicry, but we’re the only ones who actually talk, right? Attempts at teaching language to “lesser” animals have varied in their efficacy, but recent research points to dolphins having at least a rudimentary capacity for speech. That’s good news for aliens, as it’s a step towards disproving the notion that language is distinctly human.

What else do humans do? They form societies. Other animals do, too, from schools of fish to beehives and anthills, but we’ve taken it to new extremes. Sentient aliens probably would do the same. They may not follow our exact trajectory, from primitive scavengers to hunter-gatherers to agrarian city-states to empires and republics, but they would create their own societies, their own cultures. The shapes these would take depend heavily on the species’ “upbringing”. We’re naturally sociable. Our closest animal kin show highly developed social behaviors—Jane Goodall, among others, has made a living off researching exactly that. An alien race, on the other hand, might develop from something else; imagine, for instance, what a society derived from carnivorous, multiple-mating, jungle-dwelling ancestors would look like.

Likewise, the technological advancement won’t be the same for aliens as it was for us. Some of that could be due to basic science. An aquatic species is going to have an awful time crafting metal tools. Beings living on a higher-gravity world, apart from being generally shorter and stouter, might take much longer to reach space, simply because of the higher escape velocity. A species whose planet never experienced an equivalent to the Carboniferous period could be forced far sooner into developing “green” energy.

Differences in advancement can also stem from psychological factors. Humans are altruistic, but not to a fault. We’re basically in the middle of a spectrum. Another race might be more suited to self-sacrifice (and thus potentially more amenable to socialist or communist forms of organization) or far less (therefore more likely to engage in cutthroat capitalism). Racial, sexual, and other distinctions may play a larger or smaller role in their development, and they can also drive an interest in genetics and similar fields.

Even their history has an effect on their general level of technology. How much different, for instance, would our world be if a few centuries of general stagnation in Europe—the Dark Ages—never occurred? What would the effects of “early” gunpowder be? Aliens can be a great place to practice your what-ifs.

The garden of your mind

We are sentient. We are sapient. No matter how you define the terms, no other species on Earth can fit both of them at the same time. That’s what makes us unique. It’s what makes us human.

An alien species might feel the same way. Intelligence looks exceedingly rare, so it’s stretching the bounds of plausibility that a planet could hold two advanced lifeforms at the same time. On the other hand, science fiction is often about looking at just those situations that sit beyond what we know to be possible.

One or many, though, aliens will always be alien to us. They won’t think just like us, any more than they’ll look just like us. Their minds, their desires and cares and instincts and feelings, will be different. For some authors, that’s a chance to explore the human condition. By making aliens reflections of some part of ourselves, they can use them to make a point about us. Avatar, for example, puts its aliens, the Na’vi, in essentially the same role as the “noble savages” of so many old tales. Star Trek has Klingons to explore a warrior culture, Vulcans for cold, unassailable logic, and hundreds of others used for one-off morality plays.

Others use aliens to give a sense of otherworldliness, or to show how small, unimportant, or deluded we humans can be. Aliens might be a billion years older than us, these stories state. They’d be to us what we are to trilobites or coelacanths…or the dinosaurs. Or if you want to take the view of Clarke and others, a sufficiently advanced alien would seem magical, if not divine.

Whatever your sentient aliens do, whatever purpose they serve, they’ll have thoughts. What will they think about?

Borrowing and loanwords

Languages can be a bit…too willing to share. Pretty much every natural language in existence has borrowed something from its neighbors. Some (like English) have gone farther than others (like Icelandic), but you can’t find a single example out there that doesn’t have some borrowing somewhere.

For the conlang creator, this presents a problem. Conlangs, by definition, have no natural neighbors. They have no history. They’re, well, constructed. This means they can’t undergo the same processes of borrowing that a natural language does. For some (particularly auxlangs), that’s a feature, not a bug. But those of us making naturalistic conlangs often want to simulate borrowing. To do that, we have to understand what it is, why it happens, and what it can do for us.

On loan

Most commonly, borrowing is in the form of loanwords, which are exactly what they sound like. Languages can borrow words for all sorts of reasons, and they can then proceed to do terrible things to them. Witness the large number of French loans in English, and the horrified shudders of French speakers when we pronounce them in our Anglicized fashion. Look at how terms from more exotic languages come into English, from chop suey to squaw to Iraqi. Nothing is really safe.

Pronunciations change, because the “borrowing” language might not have the same sounds or allow the same syllables. Meanings can subtly shift in a new direction, as cultural forces act on the word. Grammar puts its own constraints on loanwords, too; languages with case and gender will have to fit new words into these categories, while those without might borrow without understanding those distinctions.

But let’s take a step back and ask ourselves why words get borrowed in the first place. There are a few obvious cases. One, if the borrowing language doesn’t already have a word representing a concept, but a neighbor does, then it doesn’t take a psychic to see what’s going to happen. That’s how a lot of agricultural and zoological terms came about, especially for plants and animals of the Americas and Australia. It’s also how many Arabic scientific words came into English, such as alcohol and algebra.

Another way loanwords can come about is through sheer force. The classic example is the Norman Conquest, when Anglo-Saxon fell from grace, replaced in prestigious circles by Norman French. Another “conquest” case is Quechua, in the Andes, where Spanish took the place of much of the native vocabulary. And then there’s Japanese, which borrowed a whole system of writing from China, complete with instructions on how to read it; just about every Chinese character got reinterpreted in Japanese, but their original—yet horribly mangled—Chinese pronunciations stuck around.

Third, a relative difference in status, where a foreign language is seen as more “learned” than one’s own, can drive borrowing. That’s one reason why we have so many Latin and Greek loans in English, especially “higher” English. Educated speakers of centuries past looked to those languages for guidance. When they couldn’t find the right word in their native tongue, the first place they’d look was the classics.

Taking more

Words are the most commonly borrowed item in language, but they’re not the only thing that can be taken, and they’re not always taken in isolation. English pronouns, for example, are a curious mix. Most are native terms passed down with only minor changes all the way from Proto-Germanic and beyond; I and me aren’t that much different from their equivalents in most other European languages. But in the third person (he, she, they, and it), things get weird. Specifically, the plural pronouns they, them, and their are, in fact, borrowed. Imposed, if you prefer, as they seem to be a result of the Viking invasions of England in the tenth and eleventh centuries.

Other bits of grammar can be lifted, but the more complicated they are, the less likely it’s going to happen. There aren’t a lot of examples of languages borrowing case systems. (Getting rid of one already present, however, is a plausible development for a language suddenly spoken by a large number of foreigners, but that’s a different post.) Borrowing of pronoun systems is attested. So is heavy borrowing of numeral words; this one is particularly common among indigenous languages that never needed words for “thousand” and “million” before Westerners arrived on the scene.

As I said above, Japanese went so far as to import a script. So did Korean, Vietnamese, and quite a few other languages in the region. They all took from the same source, Chinese, because of the much higher status they perceived it to have. Others around there instead borrowed from Sanskrit. On our side of the world, you have things like the Cherokee syllabary, although it’s not a “proper” borrowing, as the meanings of symbols weren’t preserved.

One other thing that can be taken isn’t so much a part of grammar as it is a way of thinking about it. As part of its mass importation of Latin and Greek, English picked up the Latin style of word formation. Instead of full compounds, which English had inherited from its Germanic forebears, Latin built its words through derivation, piling on prefixes and suffixes that added shades of meaning. It’s from that borrowing that we get con- and pro-, sub- and super-, ex-, de-, and so many more.

Word of warning

It’s easy to go too far, though—some would say English did long ago. So where do we draw the line? That’s hard to say. For some conlangs, borrowings, if they’re used at all, might need to be restricted to the upper echelons of the vocabulary. The technical, scientific terminology common to the whole world can be used without repercussion. Nobody will call out a conlang set in today’s world for borrowing meter and internet and gigabyte. Similarly, place names are fair game. Beyond that, it’s a matter of style and personal preference. If your conlang really needs a lot of loans, go for it.

There’s one more thing to think about. Borrowings get “nativized” over time, to the point where we no longer consider words like whiskey or raccoon to be loans. It’s only those that are relatively new (karaoke) or visibly foreign (rendezvous) that we take to be imports. Even those quixotic attempts to purge the language of its outside influences miss quite a lot here; you wouldn’t find even the hardiest Anglo-Saxon revivalist wanting to change Shakespeare’s Much Ado About Nothing, but ado is a pre-Norman loanword.

So this gives you an out: some words can be loans, but they were borrowed so long ago that the speakers have all but forgotten where they came from. Any conlang set in Europe, for instance, wouldn’t be wrong in having a lot of Roman-era Latin loans. Asian conlangs would almost be expected to have an ancient crust of Chinese or Sanskrit, or a newer veneer of Arabic.

Whatever you do, it’s an artistic choice. But it’s a choice that can have a profound effect on your conlang’s feel. A few well-placed borrowings give a conlang a sense of belonging to the real world. And if you’re making your own world, then you can create your own networks of linguistic borrowing, based on that world’s history. The principles are the same, even if the names are changed.

Novel Month 2016 – Day 30, late evening

And so it ends. I wanted to put up a bunch of nifty stats and charts and stuff for this, but I had some problems sleeping today, and that threw my schedule all out of whack. So you get a text recap instead.

Today, I finished Chapter 21 and started Chapter 22. We’re now in the run-up to the climax for sure. Lots of things are happening, and threads are coming together. If not for those pesky tornadoes last night, I might even be excited.

Now, to the month as a whole. Let me first state that I have enjoyed this month of writing like no other in my life. I’ve done NaNoWriMo six times now, and I’ve completed at least half of the challenge in each of the last five. (Well, 2013 was a bit…off. The story there ended at about the 49K mark. I’ll still call that a win.) Never had I thought I could write 100,000 words in a month. A few short years ago, I would have laughed at the idea of writing half that.

But now I truly know my limits. 100K is about how much I can write if I treat writing as a full-time job. I won’t call these the best 100,000+ words I’ve ever written, but the point of the challenge, as I understand it, is quantity. Editing and rewriting can come later. Of course, I’ve never been great at that “stream-of-consciousness” writing style. I can’t leave typos and grammatical errors in; I’ll stop to fix them whenever I notice them. Now I’m confident I can do that while still maintaining an absolutely incredible pace. A Sandersonian pace, if you will. (Brandon Sanderson is one of my favorite authors, and one of the most prolific I’ve ever seen. As much as I’ve written this year, I think he’s still outpaced me!)

Nocturne is not yet done. I’d call it a little over 75%. Maybe 80%, if we’re feeling generous. I think it’s not out of the realm of possibility that I could finish the draft before Christmas. Whether fate will conspire to prevent that remains to be seen, but I’m going to give it my best shot. I shouldn’t have to maintain the torrid pace of the past 30 days to complete it in that time, so I’ll also have the chance to focus on other things.

I’ve had fun this November. Maybe more fun than with “The City and the Hill” (2015), Before I Wake (2014), “Out of the Past” (2013), and Heirs of Divinity (2012) put together. Nocturne, or what I have of it so far, will definitely go on the list of my greatest writing accomplishments.

Normal PPC posts resume Friday, and you can check out my Patreon for more information on what I’m writing and releasing. Remember to subscribe over there. It helps me out a lot. And finally, thank you for taking this journey with me. It’s been a great one, and I can’t wait till next year.

Final Stats

Previous word count: 100,805
This session’s word count: 2,821
Total word count: 103,626
Daily average: 3,454
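
And because I can’t resist checking my own arithmetic, the numbers above do add up. Here’s the two-line version; rounding the daily average down to the nearest word is my own choice:

    // Sanity-checking the stats above (30 days in November).
    const previous = 100805;
    const session = 2821;
    const total = previous + session;            // 103626
    const dailyAverage = Math.round(total / 30); // 3454

    console.log(total, dailyAverage);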