Thoughts on types

Last week, I talked about an up-and-coming HTML5 game engine. One of the defining features of that engine was that it uses TypeScript, not regular JavaScript, for its coding. TypeScript has its problems (it’s made by Microsoft, for one), but it cuts to the heart of an argument that has raged for decades in programming circles: strong versus weak typing.

First off, here’s a quick refresher. In most programming languages, values have types. These can be simple (an integer, a string of text) or complex (a class with a deep inheritance hierarchy and 50 or so methods), but they’re part of the value’s identity. Variables can have types, too, but different languages handle that in different ways. Some require you to set a variable’s type when it is first defined, and they strictly enforce that type. Others are more lenient: if x holds the value 123, it’s an integer; if you set it to "foo", it becomes a string. And some languages allow you to mix types in an expression, while others will throw errors the minute you even dare add a string to a number.
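
To make the contrast concrete, here’s a minimal sketch in TypeScript (the variable names are just for illustration): a variable with a declared type is locked to that type, while one typed as any behaves like the lenient languages described above.

```typescript
// A variable with a declared type: the compiler enforces it.
let count: number = 123;
// count = "foo";        // error: Type 'string' is not assignable to type 'number'

// A variable typed as 'any' behaves like the lenient case:
// it happily changes type as you reassign it.
let x: any = 123;        // starts life as a number
x = "foo";               // now it holds a string, and the compiler doesn't mind

// Mixing types in an expression: JavaScript (and TypeScript's 'any')
// will coerce, where stricter languages would refuse outright.
const mixed = x + 1;     // "foo1" -- string concatenation, not addition
console.log(typeof count, typeof x, mixed);
```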

A position of strength

I’m of two minds on types. On the one hand, I do think that a “strong” type system, where everything knows what it is and conversions must be explicit, is good for the specific kind of programming where data corruption is an unforgivable sin. The Ada language, one of the most notorious for strict typing, was made that way for a reason: it was intended for use in situations where errors are literally life-threatening.

I also like the idea of a strongly-typed language because it can “write itself” in a sense. That’s one of the things Haskell supporters are always saying, and it’s very reminiscent of the way I solved a lot of test questions in physics class. For example, if you know your answer needs to be a force in newtons (kg m/s²), and you’re given a mass (kg), a velocity (m/s), and a time (s), then it’s pretty obvious what you need to do. The same principle can apply when you’ve got code that returns a type constructed from a number of seemingly unrelated ones: figure out the chain that takes you from A to B. You can’t really do that in, say, JavaScript, because everything can return anything.
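
As a rough illustration of that “follow the types” style, here’s the physics example translated into TypeScript. The branded unit types below are my own invention for this sketch, not anything standard; the point is that once each quantity carries its unit in its type, there’s essentially only one way to chain the functions together and end up with a force.

```typescript
// Branded aliases so the compiler treats each unit as a distinct type.
type Kilograms = number & { readonly unit: "kg" };
type MetersPerSecond = number & { readonly unit: "m/s" };
type Seconds = number & { readonly unit: "s" };
type KilogramMetersPerSecond = number & { readonly unit: "kg*m/s" };
type Newtons = number & { readonly unit: "kg*m/s^2" };

// mass * velocity gives momentum...
const momentum = (m: Kilograms, v: MetersPerSecond): KilogramMetersPerSecond =>
  (m * v) as KilogramMetersPerSecond;

// ...and momentum / time gives force.
const forceFromMomentum = (p: KilogramMetersPerSecond, t: Seconds): Newtons =>
  (p / t) as Newtons;

// Given a mass, a velocity, and a time, the types only fit together one way:
const mass = 2 as Kilograms;
const velocity = 10 as MetersPerSecond;
const time = 4 as Seconds;

const force: Newtons = forceFromMomentum(momentum(mass, velocity), time);
console.log(force); // 5
```

A language like Haskell pushes this much further, but even this approximation shows the idea: the type checker narrows the space of programs you could plausibly write.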

And strong types are an extra form of documentation, something sorely lacking in just about every bit of code out there. The types give you an idea of what you’re dealing with. If they’re used right, they can even guide you into using an API properly. Of course, that puts more work on the library developer, which means it’s less likely to actually get done, but it’s a nice thought.
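
For instance (a made-up API, purely to show the idea), a union type in the signature tells you exactly which values are acceptable, so the signature doubles as documentation and as a guide to correct use:

```typescript
// A union type spells out exactly which values the API accepts, so the
// signature itself documents (and enforces) correct use.
type LogLevel = "debug" | "info" | "warn" | "error";

function createLogger(minimum: LogLevel): (level: LogLevel, msg: string) => void {
  const order: LogLevel[] = ["debug", "info", "warn", "error"];
  return (level, msg) => {
    if (order.indexOf(level) >= order.indexOf(minimum)) {
      console.log(`[${level}] ${msg}`);
    }
  };
}

const log = createLogger("info");
log("warn", "disk almost full");   // printed
log("debug", "entering loop");     // filtered out
// log("loud", "oops");            // compile error: "loud" is not a LogLevel
```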

The weak shall inherit

In a “weak” type system, objects can still have types, but variables don’t. (Strictly speaking, that’s dynamic typing rather than weak typing, but the two tend to travel together, and “weak” is the word everyone uses in this argument.) That’s the case in JavaScript, where var x (or let x, if you’re lucky enough to get to use ES6) is all you have to go on. Is it a number? A string? A function? The answer: none of the above. It’s a variable. Isn’t that enough?

I can certainly see where it would be. For pure, unadulterated hacking, give me weak typing. Coding goes so much faster when you don’t have to constantly ask yourself what something should be. Scripting languages tend to be weakly-typed, and that’s probably why. When you know what you’re working with, and you don’t have to worry as much about error recovery, maintenance, or things like that, types only get in the way.

Of course, once I do need to think about changing things, a weakly-typed language starts to become more of a hindrance. Look at any large project in JavaScript or Ruby: it’s a tangled mess of code held together by layers of validation and test suites sometimes bigger than the project itself. It’s…ugly. Worse, it creates a kind of Stockholm Syndrome where the people developing that mess think it’s just fine.

I’m not saying that testing (or even TDD) is a bad thing, mind you. It’s not. But so much of that testing is unnecessary. Guys, we’ve got a tool that can automate a lot of those tests for you. It’s called a compiler.
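
To put that claim in concrete terms, here’s a hedged sketch: the sort of check a weakly-typed project would cover with a unit test is exactly what a type annotation hands off to the compiler. The User shape here is invented for the example.

```typescript
// The shape the function expects, written down once.
interface User {
  id: number;
  name: string;
}

function greet(user: User): string {
  return `Hello, ${user.name}`;
}

// In a weakly-typed language, you'd write tests to make sure greet() is never
// handed a bare string, a number, or a null. Here the compiler rejects those
// calls before the code ever runs:
//
// greet("Alice");   // error: string is not assignable to User
// greet(null);      // error under strictNullChecks
console.log(greet({ id: 1, name: "Alice" }));
```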

So, yeah, I like the idea of TypeScript…in theory. As programmers look to use JavaScript in “bigger” settings, they can’t miss the fact that it’s woefully inadequate for them. It was never meant to be anything more than a simple scripting language, and it shows. Modernizing efforts like ES5 and ES6 help, but they don’t—can’t—get rid of JavaScript’s nature as a weakly-typed free-for-all. (How bad is it? Implicit conversions have become accepted idioms. Want to turn n into a number? The “right” way is +n! Making a string is as easy as n+"", and booleans are just !!n.)
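
For anyone who hasn’t run into those idioms, here’s what they look like in practice (written as TypeScript, with the variable deliberately typed as any, since that’s the weakly-typed style in question):

```typescript
// The accepted JavaScript idioms for conversion, written out.
// (Typed as 'any' here precisely because this is the weakly-typed style.)
const n: any = "42";

const asNumber = +n;        // unary plus coerces to a number -> 42
const asString = 42 + "";   // concatenating "" coerces to a string -> "42"
const asBoolean = !!n;      // double negation coerces to a boolean -> true

console.log(typeof asNumber, typeof asString, typeof asBoolean);
// "number" "string" "boolean"
```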

That’s not to say strong typing is the way to go, either. Take that too far, and you risk the opposite problem: losing yourself in conversions. A good language, in my opinion, needs a way to enforce types, but it also needs a way to not enforce them. Sometimes, you really do want an “anything”. Java’s Object doesn’t quite work for that, nor does the C answer of void *. C++ is getting its own any type (std::any, most likely in C++17), or so they say; that’ll be a step up. (Note: auto in C++ is type inference. That’s a different question, but I personally think it’s an absolute must for a strongly-typed language.) But those should be used only when there’s no other option.
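
TypeScript happens to offer both halves of that wish list, so here’s a quick sketch of the difference, roughly the roles std::any and auto would play in C++. The names are invented for the example.

```typescript
// The escape hatch: an "anything" that opts out of checking entirely.
let anything: any = 42;
anything = "now a string";
const nope = anything.definitelyNotAMethod; // compiles; just undefined at runtime

// Type inference: no annotation, but the type is still pinned down.
let inferred = 42;            // inferred as number
// inferred = "nope";         // error: string is not assignable to number

// 'unknown' is the safer middle ground: an "anything" you must inspect first.
let mystery: unknown = JSON.parse('{"ok": true}');
if (typeof mystery === "object" && mystery !== null) {
  console.log("got an object:", mystery, inferred, nope);
}
```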

There’s no right answer. This is one of those debates that will last forever, and all I can do is throw in my two cents. But I like to think I have an informed opinion, and that was it. When I’m hacking up something for myself, something that probably won’t be used again once I’m done, I don’t want to be bothered with types; they take too much time. Once I start coding “for real”, I need to start thinking about how that code is going to be used. Then, strong typing saves time, because it means the compiler takes care of what would otherwise be a mound of boilerplate test cases, leaving me more time to work on the core problem.

Maybe it doesn’t work for you, but I hope I’ve at least given you a reason to think about it.

On writing and dialects

I’ve been seriously attempting to write fiction for over five years now, and I’m still learning new things about the craft all the time. One of those things concerns my own style of writing, and it’s the main reason I object to one of the fundamental maxims of creative writing.

“Writing itself isn’t the hard part,” the saying goes. To some extent, that’s true. Coming up with a believable, interesting story with believable, interesting characters is hard. Planning, plotting, characterizing, worldbuilding, all of that is supremely difficult, to the point where the mechanics of writing get lost in the noise. Especially nowadays, when everything is done on a computer, and most “writing” is actually typing on a keyboard, the physical act of writing is a small fraction of the effort that goes into creating a story.

Move one level up, to the words you’re putting on-screen, and things don’t really change all that much. You’re still in the rote mechanics of writing, but now at the level of grammar and syntax. As long as you can touch-type (and you’ll eventually learn how, if you keep at it long enough), writing—typing, if you prefer—the words is almost reflexive. As long as you speak English, putting the right words together comes naturally. Except that it doesn’t, and therein lies my problem.

Southern Man

The reason is simple: when I write a story in “standard” English (for me, that would be General American), I’m not speaking my native language. I’m American, and I’m effectively monolingual, despite a couple of years of Spanish classes in high school and fifteen more of amateur linguistic study. It’s not that I can’t speak or write English, it’s that I’m not used to speaking the standard.

As we say around here, I’m Southern-born and Southern-bred. I’m a child of the South. That’s where I was born, it’s where I live, and it’s probably where I’ll die. And even if you don’t know the first thing about American regional politics, you likely know about the Southern dialect.

It’s not different enough from the rest of the country to really be considered its own language. I can still understand just about any other American speaker, as well as most other English dialects (although those from northern England and parts of Australia sometimes baffle me), and they can likewise understand the vast majority of what I’m saying. But it is different, and it can be startling if you don’t know what to expect. Just like I sometimes struggle to figure out some of the words Jeremy Clarkson is saying, I know that plenty of people would need subtitles for Hatfields & McCoys. (Technically, that’s Appalachian, not Southern, but I’ll get to that in a minute.)

In writing, it doesn’t seem quite so bad, since the pronunciation differences, like the characteristic Southern drawl, don’t show up. But phonology isn’t the only part of a dialect. Words matter, despite what the writing self-help guys say. Y’all, for example, is the quintessential Southern word, yet I don’t think I’ve used it once in any of the stories I’ve written since the start of the decade. Why? Because that would immediately mark the whole work as “dialectal” or, worse, “substandard”. And I don’t think I want that.

Talking the talk

But sticking to the standard—whatever that is for English—means that I have to write at a level I’m not exactly comfortable with. It gets even worse because “Southern” refers to not one single dialect, but a group of them. Where I grew up, which isn’t all that far from where I’m living now, the local speech is closer to Appalachian, the talk of hillbillies living in the mountains, than the “General Southern” of the Deep South area that stretches from Charleston to Jackson. Appalachian has its own speech patterns, its own curious vocabulary, and a few peculiar grammatical constructions that make it a dialect of its own. (And that has slight regional differences, but those need not concern us here.)

So I’m not “going up a level” when I’m writing in standard American English. I’m going up two. I have to raise my standards just to get to what is widely considered the least standard of all the American dialects. Then I need to go from there up to the true literary language. It’s a kind of diglossia, if you think about it. I speak the homespun mix of Southern and Appalachian at home, among friends and family; it’s how I was raised to talk. For talking to others in the region, I use a more generic Southern, dropping the Appalachianisms while keeping the drawl and the y’all. Again, I learned that by osmosis: listening to people, watching the local news, etc.

Neither my home “idiolect” nor the Southern dialect is written, except in the written emulation of speech. They don’t need to be. That’s not what they’re for. But standard English is different. I don’t hear it spoken around me casually, only formally or in the media. I learned it in school, and I had to learn how it differs from the English I’m used to.

The crux of the problem, then, is this: where is the line between dialect and language? I’ve found that, when you’re writing, it’s a lot closer than you might think. I’m constantly slowed by the internal translation from Southern to General American, and it is not a perfect match. It’s the little things that trip me up, like the past perfect (in my spoken dialect, had went is an acceptable substitution for had gone), -ward versus -wards (Southerners that I’ve heard prefer towards, but most Americans use toward), and serial verbs in the future tense (try to or try and? go get or go and get?). At times, it really is like I’m writing in a different language.

(That’s not even including the Americanisms I find illogical. Like British writers, I consistently keep punctuation out of quotation marks, unless it’s part of the quote. I’m told that this is actually common practice among programmers. That makes sense, because programming languages won’t let you do it the “wrong” way. HTML, unfortunately, explicitly supports “Americanized” closing tags.)

Plain speech

Of course, the creative part of creative writing is always going to be the most important. There’s no denying that. I tend to write in a seat-of-the-pants style, where I don’t plan much in advance, instead letting things happen naturally. (I’ll talk about that in a future post.) But that very style means that I’m often stuck, as I have to stop typing to think of a name or a part of a character’s back-story. The dialectal difference is just one more thing to worry about.

If I were a better writer, I might be able to turn this liability into an advantage. Maybe there’s a market out there for books written in a Southern style, full of colloquialisms and colorful figures of speech. I don’t know, but I doubt I could be the one to pull it off. For now, I’ll stick with the standard, as hard as it is. It’s not art if you don’t suffer, right?

Looking forward to 2016

So, it’s a new year. The slate has been cleaned. We can put 2015 behind us, and look ahead to 2016. From a programming point of view, what does this new year hold? Let’s take a look.

Programming languages

This year should be an exciting one if you like programming languages for their own sake.

  • JavaScript: Most everybody is using a browser capable of most of ECMAScript 5 (ES5). By the end of the year, expect both parts of that to increase. More people are going to have modern capabilities, and the browsers themselves will cover more of the standard. Speaking of standards, I’d look to ES6 support becoming more widespread in browsers, even Microsoft’s.

  • C++: The next revision of the C++ standard, C++17, is still a ways away. (You’re crazy if you’re betting on it actually coming out in 2017. Remember, C++11 was codenamed C++0x for years.) However, we should start seeing parts of it becoming fixed. Ranges are a big thing right now, concepts are coming (finally!), and it looks like C++ might get some sort of compile-time reflection. Things are looking up, but we’re not there yet.

  • Perl: I’m serious. Perl 6 is out. (I think that leaves Star Citizen as the final harbinger of Armageddon.) It was in the works for a decade and a half, it has a set of operators best described by a periodic table, and it seems almost designed to be impossible to implement. How could you not love it? At the very least, it’ll be fun to write in, and the new, incompatible version will spur a new generation of Perl golfers and other code artists. But I think it might turn out a bit like Python 3. Perl 5 has history, and that’s not going away.

  • The rest: I can easily see Rust gaining a bigger cult following over the next year, especially at places like GitHub. PHP has its version 7, but the less said about PHP, the better. C# and Java are going to be simmering for another twelve months, at least, and I don’t see much news coming out of either of them. Ruby will continue its slow slide into irrelevance, probably dragging Python with it. (I wouldn’t mind them taking Haskell along, but I digress.) Newcomers will arise, and I’d say we’re in for another round of “visual coding”. And hey, maybe this will finally be the year for Scala.

Hardware and the like

The big thing on everyone’s lips right now is Vulkan, the official successor to OpenGL. It was supposed to be out in time for Christmas, but it got pushed back. (Funnily enough, the same thing happened two years ago with Kaveri, AMD’s first processor line that could support Vulkan. But I’m not bitter.) Personally, I don’t see much out of Vulkan this year. It’ll be released, and we’ll see a few early, buggy drivers and experimental alphas of games, most of which will be glorified tech demos. I’d give it till 2018 before I start worrying about replacing OpenGL.

Tiny computers are going to get bigger this year, I think. I mean that in a figurative way, of course. The Raspberry Pi 2 is the big name in this field, but you’ve also got the BeagleBone and things like that, not to mention the good old Arduino. However you look at it, it’s a mature area. We’ve moved beyond revolution; now it’s time for evolution. These computers will get more powerful, easier to use, and more ubiquitous. Next Christmas, I can easily see a stick computer being like this year’s quadcopters.

On the other hand, as much as I hate to say it, I’m not holding out a lot of hope for 3-D printing. We’ve been hearing about it for half a decade, and there has definitely been incremental progress. But 2016, in my opinion, is not going to be the year we see inexpensive 3-D printers flying off the shelves. They’ll stay in the background. (The whole “Internet of Things” will only keep growing, but it isn’t meant to be programmable, so it doesn’t help us.)

Libraries, engines, etc.

Look for Unity and Unreal to continue their competition, with a bunch of smaller guys chomping at the bit. Godot, assuming they don’t screw themselves over by switching to Vulkan prematurely, might get a boost as the indie engine of choice. And JavaScript engines have near-infinite upside, especially for mobile coding. Game development in 2016 will be like it was in 2015, but better in every way.

I do think the Node.js fad is dying down, and not a moment too soon. That doesn’t mean Node is done, only that I see people evaluating it for what it is, rather than what it’s advertised as. It’s the same thing as Ruby a few years ago, back in the early days of Rails. Or JavaScript and Angular a couple of years ago, for that matter. Still, Node is a solid platform for a lot of things. It’s not going away, but this is the year that it fades from the spotlight.

The same can be said for the current crop of JS web frameworks. There’s no chance of the whole Internet getting behind a single framework, nor two or even ten. But this is an area where the churn is so great that what’s popular next December hasn’t even been written yet. I can tell you that it’ll be slower, more bloated, and less comprehensible than what’s out there, though.

In the end

For programming, 2016 has a lot to look forward to, and I’ve barely scratched the surface here. (I haven’t even mentioned learning to code, which will get even bigger this coming year.) Whether native or browser, desktop or mobile, it’s a good time to code.

Programming paradigm primer

“Paradigm”, as a word, has a bad reputation. It’s one of those buzzwords that corporate people like to throw out to make themselves sound smart. (They usually fail.) But it has a real meaning, too. Sometimes, “paradigm” is exactly the word you want. Like when you’re talking about programming languages. The alliteration, of course, is just an added bonus.

Since somewhere around the 1960s, there’s been more than one way to write programs, more than one way to view the concepts of a complex piece of software. Some of these have revolutionized the art of programming, while others mostly languish in obscurity. Today, we have about half a dozen of these paradigms with significant followings. They each have their ups and downs, and each has a specialty where it truly shines. So let’s take a look at them.

Now, it’s entirely possible for a programming language to use or encourage only a single paradigm, but it’s far more common for languages to support multiple ways of writing programs. Thanks to one Mr. Turing, we know that essentially all languages are, from a mathematical standpoint, equivalent, so you can create, say, C libraries that use functional programming. But I’m talking about direct support. C doesn’t have native objects (struct doesn’t count), for example, so it’s hard to call it an object-oriented language.

Where it all began

Imperative programming is, at its heart, nothing more than writing out the steps a program should take. Really, that’s all there is to it. They’re executed one after the other, with occasional branching or looping thrown in for added control. Assembly language, obviously, is the original imperative language. It’s a direct translation of the computer’s instruction set and the order in which those instructions are executed. (Out-of-order execution changes the game a bit, but not too much.)

The idea of functions or subroutines doesn’t change the imperative nature of such a program, but it does create the subset of structured or procedural programming languages, which are explicitly designed for the division of code into self-contained blocks that can be reused.
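
A minimal sketch of that style (TypeScript here, though the shape would be identical in C or Pascal): a sequence of steps, a loop, and a small self-contained procedure that reuses it.

```typescript
// Imperative: state plus explicit steps, executed top to bottom.
function sumOfSquares(limit: number): number {
  let total = 0;                      // step 1: set up some state
  for (let i = 1; i <= limit; i++) {  // step 2: loop
    total += i * i;                   // step 3: mutate the state
  }
  return total;                       // step 4: hand back the result
}

// Procedural/structured: the steps are packaged into reusable, self-contained blocks.
function report(limit: number): void {
  const result = sumOfSquares(limit);
  console.log(`Sum of squares up to ${limit} is ${result}`);
}

report(5); // Sum of squares up to 5 is 55
```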

The list of imperative languages includes all the old standbys: C, Fortran, Pascal, etc. Notice how all these are really old? Well, there’s a reason for that. Structured programming dates back decades, and all the important ideas were hashed out long before most of us were born. That’s not to say that we’ve perfected imperative programming. There’s always room for improvement, but we’re far into the realm of diminishing returns.

Today, imperative programming is looked down upon by many. It’s seen as too simple, too dumb. And that’s true, but it’s far from useless. Shell scripts are mostly imperative, and they’re the glue that holds any operating system together. Plenty of server-side code gets by just fine, too. And then there’s all that “legacy” code out there, some of it still in COBOL…

The imperative style has one significant advantage: its simplicity. It’s easy to trace the execution of an imperative program, and they’re usually going to be fast, because they line up well with the computer’s internal methods. (That was C’s original selling point: portable assembly language.) On the other hand, that simplicity is also its biggest weakness. You need to do a lot more work in an imperative language, because they don’t exactly have a lot of features.

Objection!

In the mid-90s, object-oriented programming (OOP) got big. And I do mean big. It was all the rage. Books were written, new languages created, and every coding task was reimagined in terms of objects. Okay, but what does that even mean?

OOP actually dates back much further than you might think, but it only really began to get popular with C++. Then, with Java, it exploded, mainly from marketing and the dot-com bubble. The idea that got so hot was that of objects. Makes sense, huh? It’s right there in the name.

Objects, reduced to their most basic, are data structures that are deeply entwined with code. Each object is its own type, no different from integers or strings, but they can have customized behavior. And you can do things with them. Inheritance is one of them: creating a new type of object (class) that mimics an existing one, but with added functionality. Polymorphism is the other: functions that work differently depending on what type of object they’re acting on. Together, inheritance and polymorphism work to relieve a huge burden on coders, by making it easier to work with different types in the same way.
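
Here’s a compact sketch of those two ideas in TypeScript, class-based and so closer to the C++/Java flavor than to JavaScript’s prototypes. The Shape hierarchy is invented for the example.

```typescript
// Inheritance: Circle and Square mimic Shape but add their own data.
// Polymorphism: area() behaves differently depending on the actual object.
abstract class Shape {
  constructor(public name: string) {}
  abstract area(): number;
  describe(): string {
    return `${this.name} with area ${this.area().toFixed(2)}`;
  }
}

class Circle extends Shape {
  constructor(private radius: number) { super("circle"); }
  area(): number { return Math.PI * this.radius ** 2; }
}

class Square extends Shape {
  constructor(private side: number) { super("square"); }
  area(): number { return this.side ** 2; }
}

// The same loop handles every kind of Shape without caring which one it has.
const shapes: Shape[] = [new Circle(1), new Square(2)];
for (const s of shapes) {
  console.log(s.describe());
}
```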

That’s the gist of it, anyway. OOP, because of its position as the dominant style when so much new blood was entering the field, has a ton of information out there. Design patterns, best practices, you name it. And it worked its way into every programming language that existed 10-15 years ago. C++, Java, C#, and Objective-C are the most used of the “classic” OOP languages today, although every one of them offers other options (including imperative, if you need it). Most scripting-type languages have it bolted on somewhere, such as Python, Perl, and PHP. JavaScript is a bit special, in that it uses a different kind of object-oriented programming, based on prototypes rather than classes, but it’s no less OOP.
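
For comparison, the prototype flavor looks something like this; it’s written as TypeScript, but it’s really just the underlying JavaScript mechanism, with objects delegating to other objects instead of instantiating classes.

```typescript
interface Animal {
  name: string;
  speak(): string;
}

const animal: Animal = {
  name: "animal",
  speak() {
    return `${this.name} makes a sound`;
  },
};

// 'dog' doesn't instantiate a class; it delegates to 'animal' through its
// prototype chain, overriding only what it needs to.
const dog: Animal = Object.create(animal);
dog.name = "dog";

console.log(animal.speak());                        // "animal makes a sound"
console.log(dog.speak());                           // "dog makes a sound" (speak found on the prototype)
console.log(Object.getPrototypeOf(dog) === animal); // true
```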

OOP, however, has a couple of big disadvantages. One, it can be confusing, especially if you use inheritance and polymorphism to their fullest. It’s not uncommon, even in the standard libraries of Java and C#, to have a class that inherits from another class, which inherits from another, and so on, 10 or more levels deep. And each subclass can add its own functions, which are passed on down the line. There’s a reason why Java and C# are widely regarded as having some of the most complete documentation of any programming language: with hierarchies that deep, they need it.

The other disadvantage is the cause of why OOP seems to be on the decline. It’s great for code reuse and modeling certain kinds of problems, but it’s a horrible fit for some tasks. Not everything can be boiled down to objects and methods.

What’s your function?

That leads us to the current hotness: functional programming, or FP. The functional fad started as a reaction to overuse of OOP, but (again) its roots go way back.

While OOP tries to reduce everything to objects, functional programming, shockingly enough, models the world as a bunch of functions. Now, “function” in this context doesn’t necessarily mean the same thing as in other types of programming. Usually, for FP, these are mathematical functions: given the same input, they always return the same output, no matter what else is happening. The ideal, called pure functional programming, is a program free of side effects, such that it is entirely deterministic. (The problem with that? “Side effects” includes such things as user input, random number generation, and other essentials.)
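
A minimal illustration of that distinction, with invented functions:

```typescript
// Pure: same input, same output, and it touches nothing outside itself.
function addTax(price: number, rate: number): number {
  return price * (1 + rate);
}

// Impure: the result depends on (and changes) outside state, and it does I/O.
// These are exactly the side effects pure FP tries to banish.
let runningTotal = 0;
function addToTotal(price: number): number {
  runningTotal += price;                       // mutates shared state
  console.log("running total:", runningTotal); // printing is a side effect too
  return runningTotal;
}

console.log(addTax(100, 0.25)); // always 125, no matter when or how often you call it
addToTotal(50);                 // the answer depends on every call that came before
addToTotal(50);
```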

FP has had its biggest success with languages like Haskell, Scala, and—amazingly enough—JavaScript. But functional, er, functions have spread to C++ and C#, among others. (Python, interestingly, has rejected, or at least deprecated, some functional aspects.)

It’s easy to see why. FP’s biggest strength comes from its mathematical roots. Logically, it’s dead simple. You have functions, functions that act on other functions, functions that work with lists, and so on. All of the basic concepts come straight from math, and mistakes are easily found, because they stick out like a sore thumb.
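
A small sketch of what “functions that act on other functions” and “functions that work with lists” means in practice (the compose helper is hand-rolled for the example; real libraries provide their own):

```typescript
// A function that builds a new function out of two others.
const compose = <A, B, C>(f: (b: B) => C, g: (a: A) => B) =>
  (x: A): C => f(g(x));

const double = (n: number) => n * 2;
const increment = (n: number) => n + 1;
const doubleThenIncrement = compose(increment, double);

// Functions that work with lists: map, filter, and reduce take other
// functions as arguments and never mutate the original array.
const numbers = [1, 2, 3, 4, 5];
const result = numbers
  .filter((n) => n % 2 === 1)      // keep the odd ones: [1, 3, 5]
  .map(doubleThenIncrement)        // transform each: [3, 7, 11]
  .reduce((sum, n) => sum + n, 0); // fold the list into one value: 21

console.log(result); // 21
```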

So why hasn’t it caught on? Why isn’t everybody using functional programming? Well, most people are, just in languages that weren’t entirely designed for it. The core of FP is fairly language-agnostic. You can write functions without side effects in C, for example; it’s just that a lot of people don’t.

But FP isn’t everywhere, and that’s because it’s not really as simple as its proponents like to believe. Like OOP, not everything can be reduced to a network of functions. Anything that requires side effects means we have to break out of the functional world, and that tends to be messy. (Haskell’s method of doing this, the monad, has become legendary for the confusion it causes.) Also, FP code really, really needs a smart compiler or runtime, because its mode of execution is so different from how a computer actually runs, and because it tends to work at a higher level of abstraction. That abstraction almost always costs performance compared to straight native code, relegating most FP code to those higher levels, like the browser.

Your language here

Another programming paradigm that deserves special mention is generic programming. This one’s harder to explain, but it goes something like this: you write functions that accept a set of possible types, then let the compiler figure out what “real” type to use. Unlike OOP, the types don’t have to be related; anything that fits the bill will work.

Generic programming is the idea behind C++ templates and Java or C# generics. It’s also really only used in languages like that, though many languages have “duck-typing”, which works in a similar fashion. It’s certainly powerful; most of the C++ standard library uses templates in some fashion, and that percentage is only going up. But it’s complicated, and you can tie your brain in knots trying to figure out what’s going on. Plus, templates are well-known time sinks for compilers, and they can increase code size by some pretty big factors. Duck-typing, the “lite” form of generic programming, doesn’t have either problem, but it can be awfully slow, and it usually shows up in languages that are already slow, only compounding the problem.
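
To ground both ideas, here’s a hedged sketch in TypeScript: the generic function stands in for the templates/generics idea, and structural typing is roughly duck typing checked at compile time. All the names are invented.

```typescript
// Generic programming: one function written against a whole set of possible
// types; the compiler fills in the "real" type at each call site.
function firstOrDefault<T>(items: T[], fallback: T): T {
  return items.length > 0 ? items[0] : fallback;
}

console.log(firstOrDefault([10, 20, 30], 0));    // T = number
console.log(firstOrDefault(["a", "b"], "none")); // T = string

// Duck typing, compile-time flavor: the types don't have to be related;
// anything with a quack() method fits the bill.
interface Quacks { quack(): string; }

class Duck  { quack() { return "quack"; } }
class Robot { quack() { return "beep (pretending to quack)"; } }

function makeItQuack(q: Quacks): void {
  console.log(q.quack());
}

makeItQuack(new Duck());  // two unrelated classes,
makeItQuack(new Robot()); // one shared shape
```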

What do I learn?

There’s no one right way to code. If we’ve learned anything in the 50+ years the human race has been doing it, it’s that. From a computer science point of view, functional is the way to go right now. From a business standpoint, it’s OOP all the way, unless you’re looking at older code. Then you’ll be going procedural.

And then there are all those I didn’t mention: reactive, event-driven, actor model, and dozens more. Each has its own merits, its own supporters, and languages built around it.

My best advice is to learn whatever your preferred language offers first. Then, once you’re comfortable, move on, and never stop learning. Even if you’ll never use something like Eiffel in a serious context, it has explored an idea that could be useful in the language you do use. (In this case, contract programming.) The same could be said for Erlang, or F#, or Clojure, or whatever tickles your fancy. Just resist the temptation to become a zealot. Nobody likes them.

Now, some paradigms are harder than others, in my opinion. For someone who started with imperative programming, the functional mindset is hard to adjust to. Similarly, OOP isn’t easy if you’re used to Commodore BASIC, and even experienced JavaScript programmers are tripped up by prototypes. (I know this one first-hand.)

That’s why I think it’s good that so many languages are adopting a “multi-paradigm” approach. C++ really led the way in this, but now it’s popping up everywhere among the “lower” languages. If all paradigms (for some suitable value of “all”) are equal, then you can use whatever you want, whenever you want. Use FP for the internals, wrapped by an event-driven layer for I/O, calling OOP or imperative libraries when you need them. Some call it a kitchen-sink approach, but I see programmers as like chefs, and every chef needs a kitchen sink.

On learning to code

Coding is becoming a big thing right now, particularly as an educational tool. Some schools are promoting programming and computer science classes, even a full curriculum that lasts through the entirety of education. And then there are the commercial and political movements such as Code.org and the Hour of Code. It seems that everyone wants children to learn something about computers, beyond just how to use them.

On the other side of the debate are the detractors of the “learn to code” push, who argue that it’s a boondoggle at best. Not everybody can learn how to code, they argue, nor should they. We’re past the point where anyone who wants to use a computer must learn to program it, too.

Both camps have a point, and I can see some merit in either side of the debate. I was one of a lucky few that did have the chance to learn about programming early in school, so I can speak from experience in a way that most others cannot. So here are my thoughts on the matter.

The beauty of the machine

Programming, in my opinion, is an exercise that brings together a number of disparate elements. You need math, obviously, because computer science—the basis for programming—is all math. You also need logic and reason, talents that are in increasingly short supply among our youth. But computer programming is more than these. It’s math, it’s reasoning, it’s problem solving. But it’s also art. Some problems have more than one solution, and some of those are more elegant than others.

At first glance, it seems unreasonable to try to teach coding to children before its prerequisites. True, there are kid-friendly programming environments, like MIT’s Scratch. But these can only take you so far. I started learning BASIC in 3rd grade, at the age of 8, but that was little more than copying snippets of code out of a book and running them, maybe changing a few variables here and there for different effects. And I won’t pretend that that was anywhere near the norm, or that I was. (Incidentally, I was the only one that complained when the teacher—this was a gifted class, so we had the same teacher each year—took programming out of the curriculum.)

My point is, kids need a firm grasp of at least some math before they can hope to understand the intricacies of code. Arithmetic and some concept of algebra are the bare minimum. General computer skills (typing, “computer literacy”, that sort of thing) are also a must. And I’d want some sort of introduction to critical thinking, too, but that should be a mandatory part of schooling, anyway.

I don’t think that very young students (kindergarten through 2nd grade) should be fooling around with anything more than a simple interface to code like Scratch. (Unless they show promise or actively seek the challenge, that is. I’m firmly in favor of more educational freedom.) Actually writing code requires, well, writing. And any sort of abstraction—assembly on a fictitious processor or something like that—probably should wait until middle school.

Nor do I think that coding should be a fixed part of the curriculum. Again, I must agree somewhat with the learn-to-code detractors. Not everyone is going to take to programming, and we shouldn’t force them to. It certainly doesn’t need to be a required course for advancement. The prerequisites of math, critical thinking, writing, etc., however, do need to be taught to—and understood by—every student. Learning to code isn’t the ultimate goal, in my mind. It’s a nice destination, but we need to focus on the journey. We should be striving to make kids smarter, more well-rounded, more rational.

Broad strokes

So, if I had my way, what would I do? That’s hard to say. These posts don’t exactly have a lot of thought put in them. But I’ll give it a shot. This will just be a few ideas, nothing like an integrated, coherent plan. Also, for those outside the US, this is geared towards the American educational system. I’ll leave it to you to convert it to something more familiar.

  • Early years (K-2): The first years of school don’t need coding, per se. Here, we should be teaching the fundamentals of math, writing, science, computer use, typing, and so on. Add in a bit of an introduction to electronics (nothing too detailed, but enough to plant the seed of interest). Near the end, we can introduce the idea of programming, the notion that computers and other digital devices are not black boxes, but machines that we can control.

  • Late elementary (3-5): Starting in 3rd grade (about age 8-9), we can begin actual coding, probably starting with Scratch or something similar. But don’t neglect the other subjects. Use simple games as the main programming projects—kids like games—but also teach how programs can solve problems. And don’t punish students that figure out how to get the computer to do their math homework.

  • Middle school (6-8): Here, as students begin to learn algebra and geometry (in my imaginary educational system, this starts earlier, too), programming can move from the graphical, point-and-click environments to something involving actual code. Python, JavaScript, and C# are some of the better bets, in my opinion. Games should still be an important hook, but more real-world applications can creep in. You can even throw in an introduction to robotics. This is the point where we can introduce programming as a discipline. Computer science then naturally follows, but at a slower pace. Also, design needs to be incorporated sometime around here.

  • High school (9-12): High school should be the culmination of the coding curriculum. The graphical environments are gone, but the games remain. With the higher math taught in these grades, 3D can become an important part of the subject. Computer science also needs to be a major focus, with programming paradigms (object-oriented, functional, and so on) and patterns (Visitor, Factory, etc.) coming into their own. Also, we can begin to teach students more about hardware, robotics, program design, and other aspects beyond just code.

We can’t do it alone

Besides educators, the private sector needs to do its part if ubiquitous programming knowledge is going to be the future. There’s simply no point to teaching everyone how to code if they’ll never be able to use such a skill. Open source code, open hardware, free or low-cost tools, all these are vital to this effort. But the computing world is moving away from all of them. Apple’s iOS costs hundreds of dollars just to start developing. Android is cheaper, but the wide variety of devices means either expensive testing or compromises. Even desktop platforms are moving towards the walled garden.

This platform lockdown is incompatible with the idea of coding as a school subject. After all, what’s the point? Why would I want to learn to code, if the only way I could use that knowledge is by getting a job for a corporation that can afford it? Every other part of education has some reflection in the real world. If we want programming to join that small, elite group, then we must make sure it has a place.

Dragons in fantasy

If there is one thing, one creature, one being that we can point to as the symbol of the fantasy genre, it has to be the dragon. They’re everywhere in fantasy literature. The Hobbit, of course, is an old fantasy story that has come back into vogue in the last few years. More recent books involve dragons as major characters (Steven Erikson’s Malazan series) or as plot points (Daniel Abraham’s appropriately-titled The Dragon’s Path). Movies go through cycles, and dragons are sometimes the “in” subject (the movies based on The Hobbit, but also less recent films like Reign of Fire). Television likes dragons, too, when it has the budget to do them (Game of Thrones, of course). And we can also find these magnificent creatures represented in video games (Drakengard, Skyrim), tabletop RPGs (Dungeons & Dragons—it’s even in the name!), and music (DragonForce).

So what makes dragons so…interesting? It’s not a recent phenomenon; dragon legends go back centuries. They feature in Arthurian legend, Chinese mythology, and Greek epics. They’re everywhere, all throughout history. Something about them fires the imagination, so what is it?

The birth of the dragon

Every ancient culture, it seems, has a mythology involving giant beasts of a kind unknown to modern science. We think of the Greek myths of the Hydra, of course, but it’s only one of many. Even in the Bible, monsters appear: the leviathan and behemoth of the book of Job, for example. But something like a dragon seems to be found in almost every mythos.

How did this happen? For things like this, there are usually a few possible explanations. One, it could be a borrowing, something that arose in one culture, then spread to its neighbors. That seems plausible, except that New World peoples also have dragon-like supernatural beings, and they had them before Columbus. Another possibility is that the first idea of the dragon was invented in the deep past, before humanity spread to every corner of the globe. But that’s a bit far-fetched. You’d then have to explain how something like that stuck around for 30,000 or so years with so little change, using only art and oral transmission for most of that time.

The third option is, in my opinion, the most reasonable: the idea of dragons arose in a few different places independently, in something like convergent evolution. Each “region” would have its own dragon mythology, where the concept of “dragon” is about the same, while different regions might have wildly different ideas of what they should be.

I would also say that the same should be true for other fantastical creatures—giants, for instance—that pop up around the world. And, in my mind, there’s a perfectly good reason why these same tropes appear everywhere: fossils. We know that there used to be huge animals roaming the earth. Dinosaurs could be enormous, and you could imagine a Bronze Age hunter stumbling upon the fossilized bones of one of them and jumping to conclusions.

Even in recent geological time, it was only the Ice Age that wiped out the mammoths and so many other “megafauna”. (Today’s environmental movement tends to want to blame humans for everything bad, including this, but the evidence can be twisted just about any way you like.) In these cases, we can see the possibility that early human bands did meet these true giants, and they would have told stories about them. In time, those stories, as such stories tend to do, could have become legendary. For dragons, this one doesn’t matter too much, but it’s a point in favor of the idea that ancient peoples saw giant creatures—or their remains—and mythologized them into dragons and giants and everything else.

The nature of the beast

Moving far forward in time, we can see that the modern era’s literature has taken the time-honored myth of the dragon and given it new direction. At some point in the last few decades, authors seem to have decided that dragons must make sense. Sure, that’s completely silly from a mythological point of view, but that’s how it is.

Even in older stories, though, dragons had a purpose. That purpose was different for different stories, as it is today. For many of them, the dragon is a nemesis, an enemy. Sometimes, it’s essentially a force of nature, if not a god in its own right. In a few, dragons are good guys, protectors. Christian cultures in medieval times liked to use the slaying of a dragon as a symbol for the defeat of paganism. But it’s only relatively recently that the idea of dragons as “people” has become popular. Nowadays, we can find fiction where dragons are represented as magicians, sages, and oracles. A few settings even turn them into another sapient race, with their own civilization, culture, religion, and so on.

The form of dragons also depends a lot on which mythos we’re talking about. The modern perception of a dragon as a winged, bipedal serpent who breathes fire and hoards gold (in other words, more like the wyvern) is just one possibility. Plenty of cultures have wingless dragons, and most of the “true” dragons have no legs; they’re more like giant snakes. Still, there’s an awful lot of variation, and there’s no single, definitive version of a dragon.

Your own dragon

Dragons in a work of fiction, whether novel or film or game, need to be there for a reason, if you want a coherent story. You don’t have to work out a whole ecological treatise on them, showing their diets, sleep patterns, and reproductive habits—Tolkien’s dragons, for example, were supernatural creations, so they didn’t have to make scientific sense—but you should know why a dragon appears.

If there’s only one of them, there’s probably a reason why. Maybe it’s a demon, or a creation of the gods, or an avatar of chaos. Maybe it’s the sole survivor of its kind, frozen in time for millennia (that’s a big spoiler, but I’m not going to tell you for what). Whatever you come up with, you should be able to justify it with something more than “because it’s there”. The more dragons you have, the more this problem can grow. In the extreme, if they’re everywhere, why aren’t they running things?

More than their reason for existing in the first place, you need to think about their story role. Are they enemies? Are they good or evil? Can they talk? What are they like? Smaug was greedy and haughty, for instance, and it’s a conceit of D&D that dragons are complex beings that are completely misunderstood by us lesser mortals simply because we can’t understand their true motives.

Are there different kinds of dragons? Again we can look at D&D, which has a bewildering assortment even before we include wyverns, lesser drakes, and the like. Of course, a game will need a different notion of role than a novel, and gamers like variation in their enemies, but only the most jaded player would think of a dragon as anything less than a major boss character.

Another thing that’s popular is the idea that dragons can change their form to look human. This might be derived from RPGs, or they might have taken it from an earlier source. However it worked out, a lot of people like the idea of a shapeshifting dragon. (Half the characters in the aforementioned Malazan series seem to be like this, and that’s not the only example in fantasy.) Shapechanging, of course, is an important part of a lot of fantasy, and I might do a post on it later on. It is another interesting possibility, though, if you can get it right.

In a very big way, dragons-as-people poses much the same problem as other fantasy races, as well as sci-fi aliens. The challenge here is to make something that feels different, something that isn’t quite human, while still making it believable for the story at hand. If dragons live for 500 years, for example, they will have a different outlook on life and history than we would. If they lay eggs—and who doesn’t like dragon eggs?—they won’t understand the pain and danger of live childbirth, among other things. The ways in which a dragon isn’t like a human are breeding grounds for conflict, both internal and external. All you have to do is follow the notion towards its logical conclusion. You know, just like everything else.

In conclusion, I’d like to say that I do like dragons, when they’re done right. They can be these imposing, alien presences beyond reason or understanding, and that is something I find interesting. But in the wrong hands, they turn into little more than pets or mounts, giant versions of dogs and horses that happen to have scales. Dragons don’t need to be noble or evil, but they should have an impact when you meet one. I mean, you’d feel amazed if you met one in real life, wouldn’t you?

Character alignment

If you’ve ever played or even read about Dungeons & Dragons or similar role-playing games (including derivative RPGs like Pathfinder or even computer games like Nethack), you might have heard of the concept of alignment. It’s a component of a character that, in some cases, can play an important role in defining that character. Depending on the Game Master (GM), alignment can be one more thing to note on a character sheet before forgetting it altogether, or it can be a role-playing straitjacket, a constant presence that urges you towards a particular outcome. Good games, of course, place it somewhere between these two extremes.

The concept also has its uses outside of the particulars of RPGs. Specifically, in the realm of fiction, the notion of alignment can be made to work as an extra “label” for a character. Rather than totally defining the character, pigeonholing him into one of a few boxes, I find that it works better as a starting point. In a couple of words, we can neatly capture a bit of a character’s essence. It doesn’t always work, and it’s far too coarse for much more than a rough draft, but it can convey the core of a character, giving us a foundation.

First, though, we need to know what alignment actually is. In the “traditional” system, it’s a measure of a character’s nature on two different scales. These each have three possible values; elementary multiplication should tell you that we have nine possibilities. Clearly, this isn’t an exact science, but we don’t need it to be. It’s the first step.

One of the two axes in our alignment graph is the time-honored spectrum of good and evil. A character can be Good, Evil, or Neutral. In a game, these would be quite important, as some magic spells detect Evil or only affect Good characters. Also, some GMs refuse to allow players to play Evil characters. For writing, this distinction by itself matters only in certain kinds of fiction, where “good versus evil” morality is a major theme. Mythic fantasy, for example, is one of these.

The second axis is a little harder to define, even among gamers. The possibilities, again, are threefold: Lawful, Chaotic, or Neutral. Broadly, this is a reflection of a character’s willingness to follow laws, customs, and traditions. In RPGs, it tends to have more severe implications than morality (e.g., D&D barbarians can’t be Lawful), but less severe consequences (few spells, for example, only affect Chaotic characters). In non-gaming fiction, I find the Lawful–Chaotic continuum to be more interesting than the Good–Evil one, but that’s just me.

As I said before, there are nine different alignments. Really, all you do is pick one value from each axis: Lawful Good, Neutral Evil, etc. Each of these affects gameplay and character development, at least if the GM wants it to. And, as it happens, each one covers a nice segment of possible characters in fiction. So, let’s take a look at them.

Lawful Good

We’ll start with Lawful Good (LG). In D&D, paladins must be of this alignment, and “paladin” is a pretty good descriptor of it. Lawful Good is the paragon, the chivalrous knight, the holy saint. It’s Superman. LG characters will be Good with a capital G. They’ll fight evil, then turn the Bad Guys over to the authorities, safe in the knowledge that truth and justice will prevail.

The nicey-niceness of Lawful Good can make for some interesting character dynamics, but they’re almost all centered on situations that force the LG character to make a choice between what is legal and what is morally right. A cop or a knight isn’t supposed to kill innocents, but what happens when inaction causes him to? Is war just, even that waged against evil? Is a mass murderer worth saving? LG, at first, seems one-dimensional; in a way, it is. But there’s definitely a story in there. Something like Isaac Asimov’s “Three Laws of Robotics” works here, as does anything with a strict code of morality and honor.

Some LG characters include Superman, obviously, and Eddard Stark of A Song of Ice and Fire (and look where that got him). Real-world examples are harder to come by; a lot of people think they’re Lawful Good (or they aspire to it), but few can actually uphold the ideal.

Neutral Good

You can be good without being Good, and that’s what this alignment is. Neutral Good (NG) is for those that try their best to do the right thing legally, but who aren’t afraid to take matters into their own hands if necessary (but only then). You’re still a Good Guy, but you don’t keep to the same high standards as Lawful Good, nor do you hold others to those standards.

Neutral Good fits any general “good guys” situation, but it can also be more specific. It’s not the perfect paragon that Lawful Good is. NG characters have flaws. They have suspicions. That makes them feel more “real” than LG white knights. The stories for an NG protagonist are easier to write than those for LG, because there are more possibilities. Any good-and-evil story works, for starters. The old “cop gets fired/taken off the case” also fits Neutral Good.

Truly NG characters are hard to find, but good guys that aren’t obviously Lawful or Chaotic fit right in. Obi-Wan Kenobi is a nice example, as Star Wars places a heavy emphasis on morality. The “everyday heroes” we see on the news are usually NG, too, and that’s a whole class that can work in short stories or a serial drama.

Chaotic Good

I’ll admit, I’m biased. I like Chaotic Good (CG) characters, so I can say the most about them, but I’ll try to restrain myself. CG characters are still good guys. They still fight evil. But they do it alone, following their own moral compass that often—but not always—points towards freedom. If laws get in the way of doing good, then a CG hero ignores them, and he worries about the consequences later.

Chaotic Good is the (supposed) alignment of the vigilante, the friendly rogue, the honorable thief, the freedom fighter working against a tyrannical, oppressive government. It’s the guys that want to do what they believe is right, not what they’re told is right. In fiction, especially modern fantasy and sci-fi, when there are characters that can be described as good, they’re usually Chaotic Good. They’re popular for quite a few reasons: everybody likes the underdog, everyone has an inner rebel, and so on. You have a good guy fighting evil, but also fighting the corruption of The System. The stories practically write themselves.

CG characters are everywhere, especially in movies and TV: Batman is one of the most prominent examples from popular culture of the last decade. But Robin Hood is CG, too. In the real world, CG fairly accurately fits most of the heroes of history, those who chose to do the right thing even knowing what it would cost. (If you’re of a religious bent, you could even make the claim that Jesus was CG. I wouldn’t argue.)

Lawful Neutral

Moving away from the good guys, we come to Lawful Neutral (LN). The best way to describe this alignment, I think, is “order above all”. Following the law (or your code of honor, promises, contracts, etc.) is the most important thing. If others come to harm because of it, that’s not your concern. It’s kind of a cold, calculating style, if you ask me, but there’s good to be had in it, and “the needs of the many outweigh the needs of the few” is completely Lawful Neutral in its sentiment.

LN, in my opinion, is hard to write as a protagonist. Maybe that’s my own Chaotic inclination talking. Still, there are plenty of possibilities. A judge is a perfect example of Lawful Neutral, as are beat cops. (More…experienced cops, as well as most lawyers, probably fall under Lawful Evil.) Political and religious leaders both fall under Lawful Neutral, and offer lots of potential. But I think LN works best as the secondary characters. Not the direct protagonist, but not the antagonists, either.

Lawful Neutral, as I said above, best describes anybody whose purpose is upholding the law without judging it. Those people aren’t likely to be called heroes, but they won’t be villains, either, except in the eyes of anarchists.

True Neutral

The intersection of the two alignment axes is the “Neutral Neutral” point, which is most commonly called True Neutral or simply Neutral (N). Most people, by default, go here. Every child is born Neutral. Every animal incapable of comprehending morality or legality is also True Neutral. But some people are there by choice. Whether they’re amoral, or they strive for total balance, or they’re simply too wishy-washy to take a stand, they stay Neutral.

Neutrality, in and of itself, isn’t that exciting. A double dose can be downright boring. But it works great as a starting point. For an origin story, we can have the protagonist begin as True Neutral, only coming to his final alignment as the story progresses. Characters that choose to be Neutral, on the other hand, are harder to justify. They need a reason, although that itself can be cause for a tale. They can make good “third parties”, too, the alternative to the extremes of Good and Evil. In a particularly dark story, even the best characters might never be more “good” than N.

True Neutral people are everywhere, as the people that have no clear leanings in either direction on either axis. Chosen Neutrals, on the other hand, are a little rarer. It tends to be more common as a quality of a group rather than an individual: Zen Buddhism, Switzerland.

Chaotic Neutral

Seasoned gamers are often wary of Chaotic Neutral (CN), if only because it’s often used as the ultimate “get out of jail free” card of alignment. Some people take CN as saying, “I can do whatever I want.” But that’s not it at all. It’s individualism, freedom above all. Egalitarianism, even anarchy. For Chaotic Neutral, the self rules all. That doesn’t mean you have a license to ignore consequences; on the contrary, CN characters will often run right into them. But they’ll chalk that up as another case of The Man holding them back.

If you don’t consider Chaotic Neutral to be synonymous with Chaotic Stupid, then you have a world of character possibilities. Rebels of all kinds fall under CN. Survivalists fit here, too. Stories with a CN protagonist might be full of reflection, or of fights for freedom. Chaotic Neutral antagonists, by contrast, might stray more into the “do what I want” category. In fiction, the alignment tends to show up more in stories where there isn’t a strong sense of morality, where there are no definite good or bad guys. A dystopic sci-fi novel could easily star a CN protagonist, but a socialist utopia would see them as the villains.

Most of the less…savory sorts of rogues are CN, at least those that aren’t outright evil. Stoners and hippies, anarchists and doomsday preppers, all of these also fit into Chaotic Neutral. As for fictional characters, just about any “anti-hero” works here. The Punisher might be one example.

Lawful Evil

Evil, it might be said, is relative. Lawful Evil (LE) might even be described as contentious. I would personally describe it as tyranny, oppression. The police state in fiction is Lawful Evil, as are the police who uphold it and the politicians who created it. For the LE character, the law is the perfect way to exploit people.

All evil works best for the bad guys, and it takes an amazing writer to pull off an Evil protagonist. LE villains, however, are perfect, especially when the hero is Chaotic Good. Greedy corporations, rogue states, and the Machiavellian schemer are all Lawful Evil, and they all make great bad guys. Like CG, Lawful Evil baddies are downright easy to write, although they’re certainly susceptible to overuse.

LE characters abound, nearly always as antagonists. Almost any “evil empire” of fiction is Lawful Evil. The corrupted churches popular in medieval fantasy fall under this alignment, as well. In reality, too, we can find plenty of LE examples: Hitler, the Inquisition, Dick Cheney, the list goes on.

Neutral Evil

Like Neutral Good, Neutral Evil (NE) fits best into stories where morality is key. But it’s also the best alignment to describe the kind of self-serving evil that marks the sociopath. A character who is NE is probably selfish, certainly not above manipulating others for personal gain, but definitely not insane or destructive. Vindictive, maybe.

Neutral Evil characters tend to fall into a couple of major roles. One is the counterpart to NG: the Bad Guy. This is the type you’ll see in stories of pure good and evil. The second is the true villain, the kind of person who sees everyone around him as a tool to be used and—when no longer required—discarded. It’s an amoral sort of evil, more nuanced than either Lawful or Chaotic, and thus more real. It’s easy to truly hate a Neutral Evil character.

Some of the best antagonists in fiction are NE, but so are some of the most clichéd. The superhero’s nemesis tends to be Neutral Evil, unless he’s a madman or a tyrant; the same is true of the bad guys of action movies. Real-life examples also include many corporate executives (studies claim that as many as 90% of the highest-paid CEOs are sociopaths), quite a few hacking groups (those that are doing it for the money, especially), and likely many of the current Republican presidential candidates (the Democrats tend to be Lawful Evil).

Chaotic Evil

The last of our nine alignments, Chaotic Evil (CE) embraces chaos and madness. It’s the alignment of D&D demons, true, but also psychopaths and terrorists. Pathfinder’s “Strategy Guide” describes CE as “Just wants to watch the world burn”, and that’s a pretty good way of putting it.

For a writer, though, Chaotic Evil is almost a trap: it's too easy. CE characters don't need motivations, or organization, or even coherent plans. They can act on impulse, which is certainly interesting, but maybe not the best for characterization. It's absolutely possible to write a Chaotic Evil villain (though probably impossible to write a believably CE anti-hero), but you have to be careful not to give in to him. You can't let him take over, because he could do anything. Chaos is inherently unpredictable.

Chaotic Evil is easy to find in fiction. Just look at the Joker, or Jason Voorhees, or every summoned demon and Mad King in fantasy literature. And, unfortunately, it’s far too easy to find CE people in our world’s history: Osama bin Laden, Charles Manson, the Unabomber, and a thousand others along the same lines.

In closing

As I stated above, alignment isn’t the whole of a character. It’s not even a part, really. It’s a guideline, a template to quickly find where a character stands. Saying that a protagonist is Chaotic Good, for instance, is a shorthand way of specifying a number of his qualities. It tells a little about him, his goals, his motivations. It even gives us a hint as to his enemies: Lawful and/or Evil characters and groups, those most distant on either alignment axis.

In some RPGs, acting “out of alignment” is a cardinal sin. It certainly is for player characters like D&D paladins, who have to adhere to a strict moral code. (How strict that code is depends on the GM.) For a fictional character in a story, it’s not so bad, but it can be jarring if it happens suddenly. Given time to develop, on the other hand, it’s a way to show the growth of a character’s morality. Good guys turn bad, lawmen go rogue, but not on a whim.

Again, alignment is not a straitjacket to constrain you, but it can be a writing aid. Sure, one size doesn't fit all. As a lot of gamers will tell you, it's not even necessary for an RPG. But it's one more tool at our disposal. This simple three-by-three system lets us visualize, at a glance, a complex web of relationships, and that can be invaluable.
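If you want to see just how little machinery sits behind that grid, it helps (for me, anyway) to put it in code. Here's a minimal TypeScript sketch; the type names and the crude "distance" metric are entirely my own invention, not anything official from D&D or Pathfinder. It's just two three-valued axes, their cross product, and a quick way to guess who a character's natural enemies are.

```typescript
// Two independent, three-valued axes...
type LawAxis = "Lawful" | "Neutral" | "Chaotic";
type MoralAxis = "Good" | "Neutral" | "Evil";

// ...and an alignment is nothing more than a point on the 3x3 grid.
interface Alignment {
  law: LawAxis;
  moral: MoralAxis;
}

// How far apart two values sit on a single axis (0, 1, or 2 steps).
function axisDistance(a: string, b: string, order: readonly string[]): number {
  return Math.abs(order.indexOf(a) - order.indexOf(b));
}

// A rough measure of opposition: add up the distance on both axes.
// The bigger the number, the more natural the enmity.
function alignmentDistance(a: Alignment, b: Alignment): number {
  return (
    axisDistance(a.law, b.law, ["Lawful", "Neutral", "Chaotic"]) +
    axisDistance(a.moral, b.moral, ["Good", "Neutral", "Evil"])
  );
}

const hero: Alignment = { law: "Chaotic", moral: "Good" };
const empire: Alignment = { law: "Lawful", moral: "Evil" };
console.log(alignmentDistance(hero, empire)); // 4: about as opposed as it gets
```

That really is the whole system: nine combinations and a shorthand for who stands where. Everything interesting about a character still has to come from the writing.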

Writing inertia

It’s a well-known maxim that an object at rest tends to stay at rest, while an object in motion tends to stay in motion. This is such an important concept that it has its own name: inertia. But we usually think of it as a scientific idea. Objects have inertia, and they require outside forces to act on them if they are to start or stop moving.

Inertia, though, in a metaphorical sense, isn't restricted to physical science. People have a kind of inertia, too. It takes an effort to get out of bed in the morning; for some people, that's a lot more effort than for others. Athletic types have a hard time relaxing, especially after they've passed the apex of their athleticism, while those of us who are more…sedentary have a hard time improving ourselves, simply because it's so much work.

Writers also have inertia. I know this from personal experience. It takes a big impetus to get me to start writing, whether a post like this, a short story, a novel, or some bit of software. But once I get going, I don’t want to stop. In a sense, it’s like writer’s block, but there’s a bit more to it.

Especially when writing a new piece of fiction (as opposed to a continuation of something I’ve already written), I’ve found it really hard to begin. Once I have the first few paragraphs, the first lines of dialogue, and the barest of setting and plot written down (or typed up), it feels like a dam bursting. The floodgates open, and I can just keep going until I get tired. It’s the same for posts like this. (“Let’s make a language” and the programming-related posts are a lot harder.)

At the start of a new story, I don’t think too much. The hardest part is the opening line, because that requires the most motivation. After that, it’s names. But the text itself, once I get over the first hurdles, seems to flow naturally. Sometimes it’s a trickle, others it’s a torrent, but it’s always there.

In a couple of months, I’ll once again take on the NaNoWriMo (National Novel Writing Month) challenge. Admittedly, I don’t keep to the letter of the rules, but I do keep the original spirit: write a novel of 50,000 words in the month of November. For me, that’s the important aspect. It doesn’t matter that it might be an idea I already had but never started because, as I said, writing inertia means it’s difficult for me to get over that hump and start the story. The timed challenge of NaNoWriMo is the impetus, the force that motivates me.

And I like that outside motivation. It’s why I’ve been “successful”, by my own definition, three out of the four times I’ve tried. In 2010, my first try, I gave up after 10 days and about 8,000 words. Real life interfered in 2011; my grandfather had a stroke on the 3rd of November, and nobody in my extended family got much done that month. Since then, though, I’m essentially 3-for-3: 50,000 words in 2012 (although that was only about a fifth of the whole novel); a complete story at 49,000 words in 2013 (I didn’t feel the need to pad it out); and 50,000 last year (that one’s actually getting released soon, if I have my way). Hopefully, I can make it four in a row.

So that’s really the idea of this post. Inertia is real, writing inertia doubly so. If you’re feeling it, and November seems too far away, find another way. There are a few sites out there with writing prompts, and you can always find a challenge to help focus you on your task. Whatever you do, it’s worth it to start writing. And once you start, you’ll keep going until you have to stop.

Irregularity in language

No natural language in the world is completely and totally regular. We think of English as an extreme of irregularity, and it really is, but all languages have at least some part of their grammar where things don’t always go as planned. And there’s nothing wrong with that. That’s a natural part of a language’s evolution.

Conlangs, on the other hand, are often far too regular. For an auxlang, intended for clear communication, that’s actually a good thing. There, you want regularity, predictability. You want the “clockwork morphology” of Esperanto or Lojban. The problem comes with the artistic conlangs. These, especially those made by novices, can be too predictable. It’s not exactly a big deal—every plural ending in -i isn’t going to break the immersion of a story for the vast majority of people—but it’s a little wart that you might want to do away with.

Count the ways

Irregularity comes in a few different varieties. Mostly, though, they’re all the same: a place where the normal rules of grammar don’t quite work. English is full of these, as everyone knows. Plurals are marked by -s, except when they’re not: geese, oxen, deer, people. Past tense is -ed, except that it sometimes isn’t: go and went. (“Strong” verbs like “get” that change vowels don’t really count, because they are regular, but in their own way.) And let’s not even get started on English orthography.

Some other languages aren’t much better. French has a spelling system that matches its pronunciation in theory only, and Irish looks like a keyboard malfunction. Inflectional grammars are full of oddities, ask any Latin student. Arabic’s broken plurals are just that: broken. Chinese tone patterns change in complex and unpredictable ways, despite tone supposedly being an integral part of a morpheme.

On the other hand, there are a few languages out there that seem to strive for regularity. Turkish is always cited as an example here, the joke being that there’s one irregular verb, and it’s only there so that students will know what to expect when they study other languages.

Conlangs are a sharp contrast. Esperanto’s plurals are always -j. There’s no small class of words marked by -m or anything like that. Again, for the purposes of clarity, that’s a good thing. But it’s not natural.
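If it helps to think about it mechanically (with a nod to this blog's programming half), a morphology with irregular forms is just a regular rule plus an exception table, and the size of that table is the knob you're turning. Here's a throwaway TypeScript sketch using well-worn English examples; the function and the word list are mine, purely for illustration.

```typescript
// A toy pluralizer: one regular rule plus a small table of exceptions.
// The exceptions are where a natural language's "character" lives;
// a clockwork conlang like Esperanto would leave the table empty.
const irregularPlurals: Record<string, string> = {
  goose: "geese",
  ox: "oxen",
  deer: "deer",
  person: "people",
};

function pluralize(noun: string): string {
  // Check the exception table first, then fall back to the regular -s rule.
  return irregularPlurals[noun] ?? noun + "s";
}

console.log(pluralize("goose")); // "geese" -- irregular
console.log(pluralize("cat"));   // "cats"  -- regular fallback
```

An auxlang is that function with an empty table; an artistic conlang that wants a little natural-language flavor only needs a handful of entries sprinkled into it.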

Phonological irregularity

Irregularity in a language’s phonology happens for a few different reasons. However, because phonology is so central to the character of a language, it can be hard to spot. Here are a few places where it can show up:

  • Borrowing: Especially as English (American English in particular) suffuses every corner of the planet, languages can pick up new words that bring new sounds with them. It happened in English's own history, when the /ʒ/ sound ("pleasure", etc.) came in from French, but a more extreme example is the number of Bantu languages that borrowed click sounds from their Khoisan neighbors.

  • Onomatopoeia: The sounds of nature can be emulated by speech, but there’s not always a perfect correspondence between the two. The “meow” of a cat, for instance, contains a sequence of sounds rare in the rest of English.

  • Register: Slang and colloquialism can create phonological irregularities, although this isn't all that common. English has "yeah" and "nah", both ending in /æ/, a vowel that hardly appears word-finally anywhere else in the language.

Grammatical irregularity

This is what most people think of when they consider irregularity in a language. Examples include:

  • Irregular marking: We’ve already seen examples of English plurals and past tense. Pretty much every other natural language has something else to throw in here.

  • Gender differences: I’m not just talking about the weirdness of having the word for “girl” in the neuter gender. The Romance languages also have a curious oddity where some masculine-looking words take a feminine article, as in Spanish la mano.

  • Number differences: This includes all those English words where the plural is the same as the singular, like deer and fish, as well as plural-only nouns like scissors.

  • Borrowing: Loanwords can bring their own grammar with them. What’s the plural of manga or even rendezvous?

Lexical irregularity

Sometimes words just don’t fit. Look at the English verb to be. Present, it’s is or are, past is was or were, and so on. Totally unpredictable. This can happen in any language, and one way is a drift in a word’s meaning.

  • Substitution: One word form can be swapped out for another. This is the case with to be and its varied forms.

  • Meaning changes: Most common in slang, like using “bad” to mean “good”.

  • Useless affixes: "Inflammable means flammable?" The same thing is happening now as "irregardless" becomes more widespread.

  • Archaisms: Old forms can be kept around in fixed phrases. In English, this is most commonly the case with the Bible and Shakespeare, but “to and fro” is still around, too.

Orthographic irregularity

There are spelling bees for English. How many other languages can say that? How many would want to? As a language evolves, its orthography doesn’t necessarily follow, especially in languages where the standard spelling was fixed long ago. Here are a few ways that spelling can drift from pronunciation:

  • Silent letters: English is full of these, French even more so. And then there are all those extra silent letters added to make words look more like Latin. Case in point: debt didn't always have the b; it was added to remind people of Latin debitum. (Silent letters can even be dialectal in nature. I pronounce wh and w differently, but few other Americans do.)

  • Missing letters: Nowhere in English can you have dg followed by a consonant except in the American spelling of words like judgment, where the e that would soften the g is implied. (I lost a spelling bee on this very word, in fact, but that was a long time ago.)

  • Sound changes: These can come from evolution or what seems like sheer perversity. (English gh is a case of the latter, I think.)

  • Borrowing: As phonological understanding has grown, we’ve adopted a kind of “standard” orthography for loanwords, roughly equivalent to Latin, Spanish, or Italian. Problem is, this is nothing at all like the standard orthography already present in English. And don’t even get me started on the attempts at rendering Arabic words into English letters.

In closing

All this is not to say that you should run off and add hundreds of irregular forms to your conlang. Again, if it’s an auxlang, you don’t want that. Even conlangs made for a story should use irregular words only sparingly. But artistic conlangs can gain a lot of flavor and “realism” from having a weird word here and there. It makes things harder to learn, obviously, but it’s the natural thing to do.

Death and remembrance

Early in the morning of August 16 (the day I’m writing this), my stepdad’s mother passed away after a lengthy and increasingly tiresome battle with Alzheimer’s. This post isn’t a eulogy; for various reasons, I don’t feel like I’m the right person for such a job. Instead, I’m using it as a learning experience, as I have the past few years during her slow decline. So this post is about death, a morbid topic in any event. It’s not about the simple fact of death, however, but how a culture perceives that fact.

Weight of history

Burial ceremonies are some of the oldest evidence of true culture and civilization that we have. The idea of burying the dead with mementos even extends across species boundaries: Neanderthal remains have been found with tools. And the dead, our dead, are numerous, as the rising terrain levels in parts of Europe (caused by increasing numbers of burials throughout the ages) can attest. Death’s traditions are evident from the mummies of Egypt and Peru, the mausoleums of medieval Europe or the classical world, and the Terracotta Army of China. All societies have death, and they all must confront it, so let’s see how they do it.

The role of religion

Religion, in a very real sense, is ultimately an attempt to make sense of death’s finality. The most ancient religious practices we know deal with two main topics: the creation of the world, and the existence and form of an afterlife. Every faith has its own way of answering those two core mysteries. Once you wade through all the commandments and prohibitions and stories and revelations, that’s really all you’re left with.

One of the oldest and most enduring ideas is the return to the earth. This one is common in “pagan” beliefs, but it’s also a central concept in the Abrahamic religions of the modern West. “Ashes to ashes, dust to dust,” is one popular variation of the statement. And it fits the biological “circle of life”, too. The body of the deceased does return to the earth (whether in whole or as ashes), and that provides sustenance, allowing new life to bloom.

More organized religion, though, needs more, and that is where we get into the murky waters of the soul. What that is, nobody truly knows, and that’s not even a metaphor: the notion of “soul” is different for different peoples. Is it the essence of humanity that separates us from lower animals? Is it intelligence and self-awareness? A spark of the divine?

In truth, it doesn’t really matter. Once religion offers the idea of a soul that is separate from the body, it must then explain what happens to that soul once the body can no longer support it. Thousands of years worth of theologians have argued that point, up to—and including—starting wars in the name of their own interpretation. The reason they can do that is simple: all the ideas are variations on the same basic theme.

That basic theme is this: people die. That much can't be argued. What happens next is the realm of God or gods, but it usually follows a general pattern. Souls are judged based on some subset of their actions in life, such as good deeds versus bad, adherence to custom or precept, or general faithfulness. Their form of afterlife then depends on the outcome. "Good" souls (whatever that is decided to mean) are rewarded in some way, while "bad" souls are condemned. The harsher faiths make this condemnation last forever, but it's most often (and more justly, in my opinion) for a period of time proportional to the evils committed in life.

The reward, in general, is a second, usually eternal life spent in a utopia, however that would be defined by the religion in question. Christianity, for example, really only specifies that souls in heaven are in the presence of God, but popular thought has transformed that into the life of delights among the clouds that we see portrayed in media; early Church thought favored an earthly heaven instead. Islam, popularly, has the "72 eternal virgins" presented to the faithful in heaven. In Norse mythology, valiant souls are allowed to dine with the gods and heroes in Valhalla, but they must then fight the final battle, Ragnarök (which they are destined to lose, strangely enough). Even in these three disparate cases, you can see the similarities: the good receive an idyllic life, something they could only dream of in the confines of their body.

Ceremonies of death

Religion, then, tells us what happens to the soul, but there is still the matter of the body. It must be disposed of, and even early cultures understood this. But how do we dispose of something that was once human while retaining the dignity of the person who once inhabited it?

Ceremonial burial is the oldest trick in the book, so to speak. It’s one of the markers of intelligence and organization in the archaeological record, and it dates back to long before our idea of civilization. And it’s still practiced on a wide scale today; my stepdad’s mother, the ultimate cause of this post, will be buried in the coming days.

Burial takes different forms for different peoples, but it’s always a ceremony. The dead are often buried with some of their possessions, and this may be the result of some primal belief that they’ll need them in the hereafter. We don’t know for sure about the rites and rituals of ancient cultures, but we can easily imagine that they were not much different from our own. We in the modern world say a few words, remember the deeds of the deceased, lower the body into the ground, leave a marker, and promise to come back soon. Some people have more elaborate shrines, others have only a bare stone inscribed with their name. Some families plant flowers or leave baubles (my cousin, who passed away at the beginning of last year, has a large and frankly gaudy array of such things adorning his grave, including solar-powered lights, wind chimes, and pictures).

Anywhere the dead are buried, it’s pretty much the same. They’re placed in the ground in a special, reserved place (a cemetery). The graves are marked, both for ease of remembrance and as a helpful reminder of where not to bury another. The body is left in some enclosure to protect it from prying eyes, and keepsakes are typically beside it.

Burial isn’t the only option, though, not even in the modern world. Cremation, where the body is burned and rendered into ash, is still popular. (A local scandal some years ago involved a crematorium whose owner was, in fact, dumping the bodies in a pond behind the place and filling the urns with things like cement or ground bones.) Today, cremation is seen as an alternative to burial, but some cultures did (and do) see it or something similar as the primary method of disposing of a person’s earthly remains. The Viking pyre is fixed in our imagination, and television sitcoms almost always have a dead relative’s ashes sitting somewhere vulnerable.

I’ll admit that I don’t see the purpose of cremation. If you believe in the resurrection of souls into their reformed earthly bodies, as in some varieties of Christianity and Judaism, then you’d have to view the idea of burning the body to ash as something akin to blasphemy. On the other hand, I can see the allure. The key component of a cremation is fire, and fire is the ultimate in human tools. The story of human civilization, in a very real sense, is the story of how we have tamed fire. So it’s easy to see how powerful a statement cremation or a funeral pyre can make.

Burying and burning were the two main ways of disposing of remains for the vast majority of humanity's history. Nowadays, we have a few other options: donating the body to science, organ donation, cryogenic freezing, etc. Notice, though, that these all have a "technological" connotation. Cryogenics is the realm of sci-fi; organ donation is modern medicine. There's still a ceremony, but the final result is much different.

Closing thoughts

Death in a culture brings together a lot of things: religion, ritual, the idea of family. Even the legal system gets involved these days, because of things like life insurance, death certificates, and the like. It's more than just the end of life, and there's a reason why the most powerful, most immersive stories are often those that deal with death in a realistic way. People mourn, they weep, they celebrate the life and times of the deceased.

We have funerals and wakes and obituaries because no man is an island. Everyone is connected, everyone has family and friends. The living are affected by death, and far more than the deceased. We’re the ones who feel it, who have to carry on, and the elaborate ceremonies of death are our oldest, most human way of coping.

We honor the fallen because we knew them in life, and we hope to know them again in an afterlife, whatever form that may take. But, curiously, death has a dichotomy. Religion clashes with ancient tradition, and the two have become nearly inseparable. A couple of days from now, my stepdad might be sitting in the local funeral home’s chapel, listening to a service for his mother that invokes Christ and resurrection and other theology, but he’ll be looking at a casket that is filled with tiny treasures, a way of honoring the dead that has continued, unbroken, for tens of thousands of years. And that is the truth of culture.