On learning to code

Coding is becoming a big thing right now, particularly as an educational tool. Some schools are promoting programming and computer science classes, even a full curriculum that lasts through the entirety of education. And then there are the commercial and political movements such as Code.org and the Hour of Code. It seems that everyone wants children to learn something about computers, beyond just how to use them.

On the other side of the debate are the detractors of the “learn to code” push, who argue that it’s a boondoggle at best. Not everybody can learn how to code, they argue, nor should they. We’re past the point where anyone who wants to use a computer must learn to program it, too.

Both camps have a point. I was one of the lucky few who had the chance to learn about programming early in school, so I can speak from experience in a way that most others cannot. Here are my thoughts on the matter.

The beauty of the machine

Programming, in my opinion, is an exercise that brings together a number of disparate elements. You need math, obviously, because computer science—the basis for programming—is all math. You also need logic and reason, talents that are in increasingly short supply among our youth. But computer programming is more than these. It’s math, it’s reasoning, it’s problem solving. But it’s also art. Some problems have more than one solution, and some of those are more elegant than others.

At first glance, it seems unreasonable to try to teach coding to children before its prerequisites. True, there are kid-friendly programming environments, like MIT’s Scratch. But these can only take you so far. I started learning BASIC in 3rd grade, at the age of 8, but that was little more than copying snippets of code out of a book and running them, maybe changing a few variables here and there for different effects. And I won’t pretend that that was anywhere near the norm, or that I was. (Incidentally, I was the only one that complained when the teacher—this was a gifted class, so we had the same teacher each year—took programming out of the curriculum.)

My point is, kids need a firm grasp of at least some math before they can hope to understand the intricacies of code. Arithmetic and some concept of algebra are the bare minimum. General computer skills (typing, “computer literacy”, that sort of thing) are also a must. And I’d want some sort of introduction to critical thinking, too, but that should be a mandatory part of schooling, anyway.

I don’t think that very young students (kindergarten through 2nd grade) should be fooling around with anything more than a simple interface to code like Scratch. (Unless they show promise or actively seek the challenge, that is. I’m firmly in favor of more educational freedom.) Actually writing code requires, well, writing. And any sort of abstraction—assembly on a fictitious processor or something like that—probably should wait until middle school.

Nor do I think that coding should be a fixed part of the curriculum. Again, I must agree somewhat with the learn-to-code detractors. Not everyone is going to take to programming, and we shouldn’t force them to. It certainly doesn’t need to be a required course for advancement. The prerequisites of math, critical thinking, writing, etc., however, do need to be taught to—and understood by—every student. Learning to code isn’t the ultimate goal, in my mind. It’s a nice destination, but we need to focus on the journey. We should be striving to make kids smarter, more well-rounded, more rational.

Broad strokes

So, if I had my way, what would I do? That’s hard to say; these posts don’t exactly have a lot of thought put into them. But I’ll give it a shot. This will just be a few ideas, nothing like an integrated, coherent plan. Also, for those outside the US: this is geared toward the American educational system, so I’ll leave it to you to convert it to something more familiar.

  • Early years (K-2): The first years of school don’t need coding, per se. Here, we should be teaching the fundamentals of math, writing, science, computer use, typing, and so on. Add in a bit of an introduction to electronics (nothing too detailed, but enough to plant the seed of interest). Near the end, we can introduce the idea of programming, the notion that computers and other digital devices are not black boxes, but machines that we can control.

  • Late elementary (3-5): Starting in 3rd grade (about age 8-9), we can begin actual coding, probably starting with Scratch or something similar. But don’t neglect the other subjects. Use simple games as the main programming projects—kids like games—but also teach how programs can solve problems. And don’t punish students that figure out how to get the computer to do their math homework.

  • Middle school (6-8): Here, as students begin to learn algebra and geometry (in my imaginary educational system, this starts earlier, too), programming can move from the graphical, point-and-click environments to something involving actual code. Python, JavaScript, and C# are some of the better bets, in my opinion. Games should still be an important hook, but more real-world applications can creep in. You can even throw in an introduction to robotics. This is the point where we can introduce programming as a discipline. Computer science then naturally follows, but at a slower pace. Also, design needs to be incorporated sometime around here.

  • High school (9-12): High school should be the culmination of the coding curriculum. The graphical environments are gone, but the games remain. With the higher math taught in these grades, 3D can become an important part of the subject. Computer science also needs to be a major focus, with programming paradigms (object-oriented, functional, and so on) and patterns (Visitor, Factory, etc.) coming into their own. Also, we can begin to teach students more about hardware, robotics, program design, and other aspects beyond just code.

We can’t do it alone

Besides educators, the private sector needs to do its part if ubiquitous programming knowledge is going to be the future. There’s simply no point in teaching everyone how to code if they’ll never be able to use such a skill. Open source code, open hardware, free or low-cost tools: all of these are vital to the effort. But the computing world is moving away from them. Developing for Apple’s iOS costs hundreds of dollars before you can even start. Android is cheaper, but the wide variety of devices means either expensive testing or compromises. Even desktop platforms are moving toward the walled garden.

This platform lockdown is incompatible with the idea of coding as a school subject. After all, what’s the point? Why would I want to learn to code, if the only way I could use that knowledge is by getting a job for a corporation that can afford it? Every other part of education has some reflection in the real world. If we want programming to join that small, elite group, then we must make sure it has a place.

Release: Before I Wake

So I’ve written my first book. Actually, I finished writing it months ago. Editing, cover design, and all that other stuff that the pros get done for them took up the rest of the time. However you want to put it, it’s done. Here’s the page I’ve set up for it on this site. In there, you can find some info about the book, as well as links to anywhere I’ve put it up for sale. As of right now, that’s only Amazon, but I hope to expand the list eventually.

With this release, I’ve also taken the time to do some minor redecorating. Namely, the sidebar. I’ve added two sections over there. One of them has a list of my published works, something that will (hopefully!) grow to be much longer in the coming months and years. Below that is another list for ebooks that aren’t mine. I’m not the only writer in my family, and family sticks together, so I don’t mind giving a little bit of publicity. The first entry is my brother’s debut novella, Angel’s Sin. It’s firmly in the genre of fantasy erotica, and it’s a bit…odd, so be warned. Anyway, that’s another list that will grow in the future.

I won’t claim that Before I Wake is any great story. I like to think of it as the greatest novel I’ve ever written, but there’s only one other competitor for that title, and it’s…not that good. Maybe I’m too hard on myself. Who knows? However it turns out, I’ve discovered that I like to write. So I’m going to keep on doing that. Surely I can’t be the worst author ever.

How I made a book with Markdown and Pandoc

So I’m getting ready to self-publish my first book. I’ll have more detail about that as soon as it’s done; for now, I’m going to talk a little about the behind-the-scenes work. This post really straddles the line between writing and computers, and there will be some technical bits, so be warned.

The tech

I’ll admit it. I don’t like word processors that much. Microsoft Word, LibreOffice Writer, or whatever else is out there (even the old standby: WordPerfect), I don’t really care for them. They have their upsides, true, but they just don’t “fit” me. I suspect two reasons for this. First, I’m a programmer. I’m not afraid of text, and I don’t need shiny buttons and WYSIWYG styling. Second, I can be a bit obsessive. Presented with all the options of a modern word processor, like fonts and colors and borders and a table of contents, I’d spend more time fiddling with options than I would writing! So, when I want to write, I don’t bother with the fancy office apps. I just fire up a text editor (Vim is my personal choice, but I wouldn’t recommend it for you) and get to it.

“But what about formatting?” you may ask. Well, that’s an interesting story. At first, I didn’t even bother with inline formatting. I used the old-school, ad hoc styling familiar to anybody who remembers USENET, IRC, or email conversations. Sure, I could use HTML, just like a web page would, but the tags get in the way, and they’re pretty ugly. So I simply followed a few conventions, namely:

  • Chapter headers are marked by a following line of = or -.
  • A blank line means a paragraph break.
  • Emphasis (italics or text in a foreign language, for example) is indicated by surrounding _.
  • Bold text (when I need it, which is rare) uses *.
  • Scene breaks are made with a line containing multiple * and nothing else. (e.g., * * *)

Anything else—paragraph indentation, true dashes, block quotes, etc.—I’d take care of when it was time to publish. (“I’ll fix it in post.”) Simple, quick, and to the point. As a bonus, the text file is completely readable.
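To make that concrete, a page of manuscript in this style might look something like the following (the text itself is invented for illustration):

```text
The Storm
=========

The rain had been falling for _three days_ now, and nobody in
town could remember a worse one.

* * *

By morning, the sky was *finally* clear.
```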

Mark it up

I based this system on email conventions and the style used by Project Gutenberg for their text ebooks. And it worked. I’ve written about 400,000 words this way, and it’s certainly good for getting down to business. But it takes a lot of post-processing, and that’s work. As a programmer, work is something I like to avoid.

Enter Markdown. It’s not much more than a codified set of conventions for representing HTML-like styling in plain text, and it’s little different from what I was already using. Sounds great. Even better, it has tool support! (There’s even a WordPress plugin, which means I can write these posts in Markdown, using Vim, and they come out as HTML for you.)

Markdown is great for its intended purpose, as an HTML replacement. Books need more than that, though; they aren’t just text and formatting. And that’s where the crown jewel comes in: Pandoc. It takes in Markdown text and spits out HTML or EPUB. And EPUB is what I want, because that’s the standard for ebooks (except Kindle, which uses MOBI, but that’s beside the point).

Putting the pieces together

All this together means that I have a complete set of book-making tools without ever touching a word processor, typesetting program, or anything of the sort. It’s not perfect, it’s not fancy, and it certainly isn’t anywhere near professional. But I’m not a professional, am I?

For those wondering, here are the steps:

  1. Write book text in Pandoc-flavored Markdown. (Pandoc has its own Markdown extensions which are absolutely vital, like header identifiers and smart punctuation.)

  2. Write all the other text—copyright, dedication, “About the Author”, and whatever else you need. (“Front matter” and “back matter” are the technical terms.) I put these in separate Markdown files.

  3. Create EPUB metadata file. This contains the author, title, date, and other attributes that ebook readers can use. (Pandoc uses a format called YAML for this, but it also takes XML.)

  4. Make a cover. This one’s the hard part for me, since I have approximately zero artistic talent.

  5. Create stylesheet and add styling. EPUB uses the same CSS styling as HTML web pages, and Pandoc helps you a lot with this. Also, this is where I fix things like chapter headings, drop caps/raised initials, and so on.

  6. Run Pandoc to generate the EPUB. (The command would probably look something like this: pandoc --smart --normalize --toc-depth=1 --epub-stylesheet=<stylesheet file> --epub-cover-image=<cover image> -o <output file> <front matter .md file> <main book text file(s)> <back matter .md file> <metadata .yml or .xml file>)

  7. Open the output file in an ebook reader (Calibre, for me) and take a look.

  8. Repeat steps 5 and 6 until the formatting looks right.

  9. Run KindleGen to make a MOBI file. You only need this if you intend to publish on Amazon’s store. (I do, so I had to do this step.)

  10. Bask in the glory of creating a book! Oh, and upload your book to wherever. That’s probably a good idea, too.
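To give a concrete idea of step 3, a minimal metadata file in Pandoc’s YAML form might look like this (the values are placeholders, not my book’s actual details):

```yaml
---
title: My Book Title
author: A. N. Author
date: 2015-07-25
language: en-US
rights: © 2015 A. N. Author
...
```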

Yeah, there are easier methods. A lot of people seem allergic to the command line; if you’re one of them, this isn’t the way for you. But I’m comfortable in the terminal. As I said, I’m a programmer, so I have to be. The hardest part for me (except the cover) was figuring out the options I needed to make something that looked like a proper ebook.

Even if you don’t use my cobbled-together method of creating an ebook, you still owe it to yourself to check out Pandoc. It’s so much easier, in my opinion, than a word processor or ebook editor. There are even graphical front-ends out there, if that’s what you prefer. But I like working with plain text. It’s easy, it’s readable, and it just works.

Let’s make a language – Part 7b: Adjectives (Isian)

Adjectives in Isian, like in English, aren’t that much of a problem. They don’t have a specific form that marks them out as adjectives, and they don’t change for number like nouns do. They’re really just…there. A few examples of Isian adjectives include wa “big”, hul “cold”, yali “happy”, and almerat “wise”.

As we saw in the last Isian post, the normal word order puts adjectives before nouns, and articles before adjectives. So we can make fuller noun phrases like ta wa talar “a big house” or e yali eshe “the happy girl”. In each case, the order is mostly the same as in English: article, then adjective, then noun.

We can even string adjectives together: es almerat afed sami “the wise old men”. (If you prefer adding commas between adjectives, that’s fine, too. It’s okay to write es almerat, afed sami, but it’s not required.)

Like in English, we can’t use an adjective like this without a noun. It’s not grammatical in Isian to say es almerat on its own. Instead, we have to add an extra word, a, seen here in its plural form: es almerat at “the wise ones”. (At least it has a regular plural form.) After a vowel, it becomes na: ta wa na “a big one”.

We can also use an adjective as a predicate. Here, it follows the copula (tet or one of its conjugations). An example might be en yali “I am happy”.

Isian adjectives also have equivalents to the English comparative and superlative (“-er” and “-est”) forms. As with many suffixes in the language, these vary based on the stem’s ending. For consonant-stems, the comparative is -in and the superlative is -ai. Vowel-stems simply insert a d at the beginning of the suffix to make -din and -dai, respectively. So yali “happy” becomes yalidin “happier” and yalidai “happiest”, while hul “cold” turns into hulin “colder” and hulai “coldest”.

There are a couple of differences, though. First, these suffixes can be used on any adjective; Isian has no counterparts to those English adjectives that require “more” and “most” instead of “-er” and “-est”. (On the plus side, we don’t have to worry about three forms for bil “good”. It’s fully regular: bil, bilin “better”, bilai “best”.)

Second, adjectives that are derived from nouns, like “manly” from “man”, usually can’t take the superlative. We haven’t yet seen any of those (or even how to make them). For these, the comparative serves both purposes.
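Since the rule is that regular, here’s a toy Python sketch of it (my own illustration, not anything official; it ignores the noun-derived caveat above, and the vowel-stem test is just a guess based on the examples we’ve seen):

```python
def grade(stem, superlative=False):
    """Toy sketch of Isian comparative/superlative formation."""
    # Vowel-stems insert a d before the suffix; consonant-stems take it bare.
    d = "d" if stem[-1] in "aeiou" else ""
    return stem + d + ("ai" if superlative else "in")

print(grade("hul"))        # hulin   "colder"
print(grade("hul", True))  # hulai   "coldest"
print(grade("yali"))       # yalidin "happier"
print(grade("bil", True))  # bilai   "best"
```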

That’s pretty much all there is to adjectives in Isian, as far as the basics are concerned. Now we can make quite a few more complex phrases and even some nice sentences. There’s still a lot more to come, though.

Isian word list

Not every word that we’ve seen is in this list, but it covers almost all of the “content” words in their base forms, along with a whole bunch of new ones you can try out. Words with irregular plurals have their plural suffixes shown in parentheses; e.g., tay(s) means that the plural of tay is tays.

English Isian
air rey
all sota
angry hayka
animal embin
any ese
arm ton
back bes
bad num
beautiful ichi
bed dom
big wa
bird firin
bitter guron
black ocom
blood miroc
blue sush
boat sholas
body har
bone colos
book peran
bottom dolis
boy jed
bread pinda(r)
bright lukha
brother doyan
car choran
cat her
chest sinal
child tay(s)
city eblon
closed noche
cloth waf
cloud albon
cold hul
color echil
correct ochedan
cup deta(s)
daughter sali(r)
day ja
daytime jamet
dim rum
dog hu
door opar
dress lash
drink adwar
dry khen
ear po(s)
earth tirat
egg gi(r)
every alich
eye bis
face fayan
false nanay
father pado(s)
few uni
field bander
finger ilca(s)
fire cay
flower atul
food tema
foot pusca
forest tawetar
friend chaley
front hamat
fruit chil
girl eshe(r)
glass arcol
gold shayad
good bil
grass tisen
green tich
hair pardel
hand fesh
happy yali
hard dosem
hat hop
head gol
heart sir
hill modas
hot hes
house talar
ice yet
island omis
king lakh
knife hasha
lake fow
leaf eta
left kintes
leg dul
light say
long lum
loud otar
man sam
many mime
meat shek
milk mel
moon nosul
mother mati(r)
mountain abrad
mouth ula
name ni
narrow ilcot
net rec
new ekho
nice nim
night chok
nose nun
not an
old afed
open bered
paper palil
peace histil
pen etes
person has
plant dires
poor umar
pot fan
queen lasha(r)
rain cabil
red ray
rich irdes
right estes
river ficha(s)
rock tag
rough okhor
sad nulsa
scent inos
sea jadal
sharp checor
shirt jeda(s)
shoe taf
short (tall) wis
short (long) wis
silent anchen
sister malin
skin kirot
sky halac
small ish
smooth fu
snow saf
soft ashel
son sor
sound polon
sour garit
star key
sun sida
sweet lishe
sword seca
table mico
tail hame
tall wad
thick gus
thin tin
to allow likha
to ask oca
to be tet
to begin nawe
to blow furu
to build oste
to burn becre
to buy tochi
to catch sokhe
to come cosa
to cook piri
to cry acho
to cut sipe
to dance danteri
to die nayda
to do te
to drink jesa
to eat hama
to end tarki
to enter yoweni
to feel ilsi
to give jimba
to go wasa
to guard holte
to have fana
to hear mawa
to hit icra
to hold otasi
to hunt ostani
to kiss fusa
to know altema
to laugh eya
to learn nate
to like mire
to live liga
to live in dalega
to look at dachere
to look for ediche
to love hame
to make tinte
to plant destera
to play bela
to pray barda
to read lenira
to receive rano
to run hota
to say ki
to see chere
to sell dule
to sing seri
to sit uba
to sleep inama
to smell nore
to speak go
to stand ayba
to taste cheche
to teach reshone
to think tico
to throw bosa
to touch shira
to walk coto
to want doche
to wash hishi
to wear disine
to write roco
tongue dogan
tooth ten
top poy
tree taw
true ferin
ugly agosh
war acros
warm him
water shos
wet shured
white bid
wide pusan
wind naf
wise almerat
woman shes
wood totac
word ur
world sata(r)
wrong noni
year egal
yellow majil
young manir

Off week

I’m not doing a programming post this week. Sorry about that. Normally, I have things scheduled at least a week in advance, and that’s true now. But I’ve still decided to take the week off. Why? Upgrades. Specifically, I’ve been upgrading my main desktop.

I have two PCs, a desktop and a laptop, each running a different flavor of Linux. The laptop runs Ubuntu (currently 12.04, because it’s old and I’m not particularly keen on running through the hassle of an Ubuntu upgrade, even an LTS one), and it’s not really a problem. The desktop, though, is where I’ve been bolder. It’s currently running Debian Testing, and that is where the problem lies.

If you don’t know, Debian has a three-way division of versions. There’s Stable, which is just what you’d expect; once it comes out, it’s pretty much fixed, only receiving the security fixes and the occasional backport. Testing—the one I’m using—is more up to date, at the risk of causing possible breakage. And then Unstable is the “raw” form, the absolute newest versions of almost everything, bugs or no bugs.

Packages (applications like LibreOffice or libraries like libpng) start off in Unstable with each new version. If nobody finds any critical bugs after a few days, and there’s no other reason to hold things up, then it moves into Testing. Every couple of years (give or take), Testing is “frozen”, and the new Stable is made from that. It’s a slick process…most of the time.

A few weeks ago, fate conspired to throw a wrench into this well-oiled machine. KDE, the desktop environment that I use on the Debian computer, started a major upgrade, from version 4 to version 5. (There’s a whole big change in branding, but I don’t care about the details. In my mind, it’s KDE 5, and that’s that.) This broke a lot of things, because KDE 5 uses new libraries, new modules, and even a few new applications. So I held off on updating that for a while.

But that’s not all. KDE, like many other parts of a working Linux system, is written in C++. And C++ has had some recent major changes itself, namely the C++11 update to the standard. With C++11 comes a new ABI. This goes all the way down the stack to the compiler, GCC, which implemented the new ABI as part of its own version 5 upgrade. That was a major change that would break more than a few things, so I held off on that update, too.

Earlier this week, though, I decided to take the plunge. What prompted it was a seemingly unrelated library update that broke the integration between KDE and GTK+, making certain applications (like Iceweasel, Debian’s “de-branded” Firefox) look horribly out of place.

So I did it. Things broke, but I’ve been able to put most of them back together. KDE 5 is…not too bad, compared to 4. It’s early yet, so I’ll give it a little time before I come to a final decision. But my initial impression is that it’s what Windows 8 was trying to be. Like Windows 8, it changes a lot of things for no good reason, leaving users to find a way to put them back the way they were. But it’s nowhere near as bad as the transition from KDE 3 to 4, from what I’ve heard. It’s the combination of the KDE upgrade and the C++ ABI transition that made things so bad. Now, though, the worst is (hopefully) behind me, and I’ll be able to get back to regular posting next week.

Mars: fantasy and reality

Mars is in the public consciousness right now. The day I’m writing this, in fact, NASA has just announced new findings that indicate flowing water on the Red Planet. Of course, that’s not what most people are thinking about; the average person is thinking of Mars because of the new movie The Martian, a film based on a realistic account of a hypothetical Mars mission from the novel of the same name.

We go through this kind of thing every few years. A while back, it was John Carter. A few years before that, we had Mission to Mars and Red Planet. Go back even further, and you get to Total Recall. It’s not really that Mars is just now appearing on the public’s radar. No, this goes in cycles. The last crop of Martian movies really came about from the runaway success of the Spirit and Opportunity rovers. Those at the turn of the century were inspired by earlier missions like Mars Pathfinder. And The Martian owes at least some of its present hype to Curiosity and Phoenix, the latest generation of planetary landers.

Move outside the world of mainstream film and into written fiction, though, and that’s where you’ll see red. Mars is a fixture of science fiction, especially the “harder” sci-fi that strives for realism and physical accuracy. The reasons for this should be obvious. Mars is relatively close, far nearer to Earth than any other body that could be called a planet. Of the bodies in the solar system besides our own world, it’s probably the least inhospitable, too.

Not necessarily hospitable, mind you, but Mars is the least bad of all our options. I mean, the other candidates look about as habitable as the current Republican hopefuls are electable. Mercury is too hot (mostly) and much too difficult to actually get to. Venus is a greenhouse pressure cooker. Titan is way too cold, and it’s about a billion miles away, to boot. Most everything else is an airless rock or a gas giant, neither of which screams “habitable” to me. No, if you want to send people somewhere useful in the next couple of decades, you’ve got two options: the moon and Mars. And we’ve been to the moon. (Personally, I think we should go back there before heading to Mars, but that seems to be a minority opinion.)

But say you want to write a story about people leaving Earth and venturing out into the solar system. Well, for the same reasons, Mars is an obvious destination. But the role it plays in a fictional story depends on a few factors. The main one of these is the timeframe. When is your story set? In 2050? A hundred years from now? A thousand? In this post, we’ll look at how Mars changes as we move our starting point ahead in time.

The near future

Thanks to political posturing and the general anti-intellectual tendencies of Americans in the last generation, manned spaceflight has taken a backseat to essentially everything else. As of right now, the US doesn’t even have a manned craft, and the only one on the drawing board—the Orion capsule—is intentionally doomed to failure through budget cuts and appropriations adjustments. The rest of the world isn’t much better. Russia has the Soyuz, but it’s only really useful for low-Earth orbit. China doesn’t have much, and they aren’t sharing, anyway. Private companies like SpaceX are trying, but it’s a long, hard road.

So, barring a reason for a Mars rush, the nearest future (say, the next 15-20 years) has our planetary neighbor as a goal rather than a place. It’s up there, and it’s a target, but not one we can hit anytime soon. The problem is, that doesn’t make for a very interesting story.

Move up to the middle of this century, starting around 2040, and even conservative estimates give us the first manned mission to Mars. Now, Mars becomes like the moon in the 1960s, a destination, a place to be conquered. We can have stories about the first astronauts to make the long trip, the first to blaze the trail through interplanetary space.

With current technology, it’ll take a few months to get from Earth to Mars. The best times happen once every couple of years; any other time would increase the travel duration dramatically. The best analogy for this is the early transoceanic voyages. You have people stuck in a confined space together for a very long time, going to a place that few (or none) have ever visited, with a low probability of survival. Returning early isn’t an option, and returning at all might be nearly impossible. They will run low on food, they will get sick, they will fight. Psychology, not science, can take center stage for a lot of this kind of story. A trip to Mars can become a character study.
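For the curious, the “few months” and “once every couple of years” figures fall out of a back-of-the-envelope Hohmann-transfer calculation. Here’s a quick sketch (textbook orbital mechanics, nothing specific to any real mission plan):

```python
import math

AU = 1.496e11          # meters per astronomical unit
MU_SUN = 1.327e20      # sun's gravitational parameter, m^3/s^2

r_earth = 1.0 * AU     # orbital radii, treated as circular
r_mars = 1.524 * AU

# A Hohmann transfer is half an ellipse whose semi-major axis
# spans both orbits; the trip takes half that ellipse's period.
a = (r_earth + r_mars) / 2
t_transfer = math.pi * math.sqrt(a**3 / MU_SUN)
print(f"One-way trip: about {t_transfer / 86400:.0f} days")  # ~259 days

# Launch windows recur every synodic period of Earth and Mars.
synodic = 1 / (1 / 1.0 - 1 / 1.881)  # orbital periods in years
print(f"A good launch window every {synodic:.1f} years")     # ~2.1 years
```

Roughly eight and a half months out, with a window opening about every 26 months: right in line with “a few months” and “once every couple of years.”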

The landing—assuming they survive—moves science and exploration back to the fore. It won’t be the same as the Apollo program. The vagaries of orbital mechanics mean that the first Mars missions won’t be able to pack up and leave after mere hours, as Apollo 11 did. Instead, they’ll be stuck for weeks, even months. That’s plenty of time to get the lay of the land, to do proper scientific experiments, to explore from ground level, and maybe even to find evidence of Martian life.

The middle

In the second half of this century, assuming the first trips are successful, we can envision the second stage of Mars exploration. This is what we should have had for the moon around 1980; the most optimistic projections from days gone by (Zubrin’s Mars Direct, for example) put it on Mars around the present day. Here, we’ve moved into a semi-permanent or permanent presence on Mars for scientific purposes, a bit like Antarctica today. Shortly after that, it’s not hard to envision the first true colonists.

Both of these groups will face the same troubles. Stories set in this time would be of building, expanding, and learning to live together. Mars is actively hostile to humans, and this stage sees it becoming a source of environmental conflict, an outside pressure acting against the protagonists. Antarctica, again, is a good analogy, but so are the stories of the first Europeans to settle in America.

The trip to Mars won’t get any shorter (barring leaps in propulsion technology), so it’s still like crossing the Atlantic a few centuries ago. The transportation will likely be a bit roomier, although it might also carry more people, offsetting the additional capacity. The psychological implications exist as before, but it’s reasonable to gloss over them in a story that doesn’t want to focus on them.

On the Red Planet itself, interpersonal conflicts can develop. Disasters—the Martian dust storm is a popular one—can strike. If there is native life in your version of Mars, then studying it becomes a priority. (Protecting it or even destroying it can also be a theme.) And, in a space opera setting, this can be the perfect time to inject an alien artifact into the mix.

Generally speaking, the second stage of Mars exploration, as a human outpost with a continued presence, is the first step in a kind of literary terraforming. By making Mars a setting, rather than a destination, the journey is made less important, and the world becomes the focus.

A century of settlement

Assuming our somewhat optimistic timeline, the 22nd century would be the time of the land grab. Propulsion or other advances at home make the interplanetary trip cheaper, safer, and more accessible. As a result, more people have the ability to venture forth. Our analogy is now America, whether the early days of colonization in the 17th century or the westward push of manifest destiny in the 19th.

In this time, as Mars becomes a more permanent human settlement, a new crop of plot hooks emerges. Social sciences become important once again. Religion and government, including self-government, would be on everyone’s minds. Offshoot towns might spring up.

And then we get to the harder sciences, particularly biology. Once people are living significant portions of their lives on a different planet, they’ll be growing their own food. They’ll be dying, their bodies the first to be buried in Martian soil. And they’ll be reproducing.

Evolution will affect every living thing born on Mars, and we simply don’t know how. The lower gravity, the higher radiation, the protective enclosure necessary for survival: how will these factors affect a child? It won’t be an immediate change, for sure, but the second or third generation born on Mars might not be able to visit the birthplace of humanity. Human beings would truly split into two races (a distinction that would go far beyond mere black and white), and the word Martian would take on a new meaning.

Mars remains just as hostile as before, but it’s a known danger now. It’s the wilderness. It’s a whole world awaiting human eyes and boots.

Deeper and deeper

As time goes by, and as Mars becomes more and more inhabited, the natural conclusion is that we would try to make it more habitable. In other words, terraforming. That’s been a presence in science fiction for decades; one of the classics is Kim Stanley Robinson’s Mars trilogy, starting with Red Mars.

In the far future, call it about 200 years from now, Mars can truly begin to become a second planet for humanity. At this point, people would live their whole lives there, never once leaving. Towns and cities could expand, and an ultimate goal might arise: planetary independence.

But the terraforming is the big deal in this late time. Even the best guesses make this a millennia-long process, but the first steps can begin once enough people want them to. Thickening the atmosphere, raising the worldwide temperature, getting water to flow in more than the salty tears NASA announced on September 28, these will all take longer than a human lifetime, even granting extensive life-lengthening processes that might be available to future medicine.

For stories set in this time, Mars can again become a backdrop, the set upon which your story will take place. The later the setting, the more Earth-like the world becomes, and the less important it is that you’re on Mars.

The problems these people would face are the same as always. Racial tensions between Earthlings and Martians. The perils of travel in a still-hostile land. The scientific implications of changing an entire world. Everything to do with building a new society. And the list goes on, limited only by your imagination.

Look up

Through the failings of our leaders, the dream of Mars has been delayed. But all is not lost. We can go there in our minds, in the visuals of film, the words of fiction. What we might find when we arrive, no one can say. The future is what we make of it, and that is never more true than when you’re writing a story set in it.

Let’s make a language – Part 7a: Adjectives (Intro)

We’ve talked about nouns. We’ve talked about verbs. That’s two of the three main parts of speech found in most languages, which leaves only one, and that one is the subject of this post.

Adjectives are describing words. They tell us something about a noun, such as its color (“red”), its shape (“round”), or its mood (“angry”). In theory, that’s pretty much all there is to the adjective, but we can’t stop there.

A brief introduction

Just about every language has adjectives. (Most of those that claim they don’t are merely cleverly disguising them.) And most languages have a few different sorts of adjectives. The main kind—probably the most interesting—is the attributive adjective. That’s the one that modifies a noun or noun phrase to add detail: “the red door”, “a big deal”. We’ll be seeing a lot of these.

Predicate adjectives don’t directly modify a noun phrase. Instead, they function as a “predicate”, basically like the object of a verb, as in English “the child is happy”, “that man is tall”. We’ll talk more about them a little later, because they can be quite special.

Most of the other types besides these two aren’t quite as important, but they serve to show that adjectives are flighty. Some languages let them act like nouns (the canonical English example is the biblical quote “the meek shall inherit the earth”). Some treat them like verbs, a more extreme variant of the predicate adjective where it’s the adjective itself that is marked for tense and concord and all the other verb stuff. Adjectives can even have their own phrases, just like nouns and verbs. In this case, other adjectives (or adverbs) modify the main one, further specifying meaning.

So there’s actually a lot more to the humble adjective than meets the eye.


First, we’ll look at the attributive adjectives. Except for the head noun, these will probably be the “meatiest” portions of noun phrases, in terms of how much meaning they provide. Depending on the language, they can go either before or after a noun, as we saw when we looked at word order. English, for example, puts them before, while Spanish likes them to go after the head noun.

In languages with lots of nominal distinction (case, number, gender, etc.), there’s a decision to be made. Do adjectives follow their head nouns in taking markers for these categories? They do in Spanish (la casa grande, las casas grandes), but not in English (“the big house”, “the big houses”). Also, if gender is assigned haphazardly, as it is in so many languages, do adjectives have a “natural” gender, or are there, say, separate masculine and feminine forms? What about articles? Arabic, for example, requires an adjective to take the definite article al- if it modifies a noun with one. Basically, the question can be summed up as, “How much are attributive adjectives like nouns?”

English is near one end of the spectrum. An English adjective has no special plural form; indeed, it doesn’t change much at all. At the other end, we can imagine adjectives that are allowed to completely take the place of nouns, where they are inflected for case and number and everything else, and they function as the heads of noun phrases, perhaps with a suffix or something to remind people of their true nature. Languages like this, in fact, are the norm, and English is more like the exception.


Predicate adjectives (the technical term is actually “predicative”, but I find it a bit clumsy), by contrast, seem more like verbs. In English, as in many languages, they are typically the objects of the copula verb, the equivalent of “to be”. They’re still used to modify a noun, but in a different way.

Again, as with attributives, we can ask, “How verb-like are they?” There’s not too much difference between “the man is eating” and “the man is hungry”, at least as far as word order is concerned, but that’s where the similarities end in English. We can’t have a predicate adjective in the past tense (although we can have a copula in it), but other languages do allow this. For some, predicates are verbs, in essentially every aspect, including agreement markers and other bits of verbal morphology; others allow either option, leaning one way or the other. Strangely enough, the familiar European languages are strict in their avoidance of verbal adjectives, instead preferring copulas.

If a language does permit adjectives to take on the semblance of verbs, then what parts of it? Are they conjugated for tense? Do they have agreement markers? Is every adjective a potential verb, or are only some of them? This last is an interesting notion, as the “split” between verbal predicates and nonverbal ones can be based on any number of factors, a bit like noun gender. A common theme is to allow some adjectives to function as verbs when they represent a temporary state, but require a nonverbal construction when they describe inherent qualities.


Since adjectives describe qualities of a noun, it’s natural to want to compare them. Of course, not all of them can be compared; which ones can is different for different languages. In English, it’s largely a matter of semantics: “most optimum”, among others, is considered incorrect or redundant. But most adjectives are comparable. This isn’t the case with every language, however. Some have only a special set of comparable adjectives, and a few have none at all.

Some languages offer degrees of comparison, like English’s “big/bigger/biggest” or “complex/more complex/most complex”. In these cases, the second of the trio is called the comparative, while the third is the superlative. (I don’t know of any languages that have more than three degrees of comparison, but nothing says it’s impossible. Alien conlangers, take note.)

Looking ahead

Determiners are a special class of word that includes articles (like “a” and “the”), demonstratives (“this” and “that”), possessives (“my”, “his”), and a few other odds and ends. They work a bit like adjectives, and older grammars often considered them a subset. But that has fallen out of fashion, and now they’re their own thing. I mention them here partly as a taste of things to come, and as a good lead-in for next time. I’ll talk much more about them in the next theory post, which covers pronouns, since that’s what they seem most like to me.

At this point, we’re done with the “grind” of conlanging. So far, we’ve covered everything from the sounds of a language to the formation of words to the three big grammatical categories of noun, verb, and adjective. Sure, we could delve deeper into any of these, and entire textbooks have been written on each, but we don’t have to worry about that. We can deal with the details as they arise. There’s plenty more to come—we haven’t even begun to look at pronouns or prepositions or even adverbs—but the hardest part, I feel, is behind us. We’re well on our way. Next, we’ll take a look at adjectives in Isian and Ardari, and you’ll get to see the first true sentences in both conlangs, along with a large selection of vocabulary.

Assembly: the first steps


As I’ve been saying throughout this little series, assembly is the closest we programmers can get to bare metal. On older systems, it was all but necessary to forgo the benefits of a higher-level language, because the speed gains from using assembly outweighed the extra developer time needed to write it. Nowadays, of course, the pendulum has swung quite far in the opposite direction, and assembly is usually only used in those few places where it can produce massive speedups.

But we’re looking at the 6502, a processor that is ancient compared to those of today. And it didn’t have the luxury of high-level languages, except for BASIC, which wasn’t much better than a prettified assembly language. The 6502, before you add in the code stored in a particular system’s ROM, couldn’t even multiply two numbers, much less perform complex string manipulation or operate on data structures.

This post has two code samples I wrote that demonstrate two things. First, they show you what assembly looks like, in something more than the little excerpts from last time. Second, they illustrate just how far we’ve come. They’re not all that great, I’ll admit, and they’re probably not the fastest or smallest subroutines around. But they work for our purposes.

A debug printer

Debugging is a very important part of coding, as any programmer can (or should) agree. Assembly doesn’t give us too much in the way of debugging tools, however. Some assemblers do, and you might get something on your particular machine, but the lowest level doesn’t even have that. So this first snippet prints a byte to the screen in text form.

; Prints the byte in A to the address ($10),Y
; as 2 characters, then a space
printb:
    tax          ; save a copy for the low nibble
    ; Some assemblers prefer these as "lsr a" instead
    lsr          ; shift A right 4 bits
    lsr          ; this moves the high nibble
    lsr          ; down to the bottom of the byte
    lsr
    jsr outb     ; we use a subroutine for each character
    txa          ; reload A
    and #$0F     ; mask out the top 4 bits
    jsr outb     ; now print the bottom 4 bits
    lda #$20     ; $20 = ASCII space
    sta ($10),Y
    iny          ; move past the space
    rts

; Prints the nibble in A as a single hex character
outb:
    clc
    adc #$30     ; ASCII codes for digits are $30-$39
    cmp #$3A     ; if A > 9, we print a letter, not a digit
    bmi digit
; Comment out this next line if you're using 6502asm.com
    adc #$06     ; plus the carry from CMP: A-F are $41-$46
digit:           ; either way, we end up here
    sta ($10),Y
    iny          ; move the "cursor" forward
    rts

You can call this with JSR printb, and it will do just what the comments say: print the byte in the accumulator. You’d probably want to set $10 and $11 to point to video memory. (On many 6502-based systems, that starts at $0400.)

Now, how does it work? The comments should help you—assembly programming requires good commenting—but here’s the gist. Hexadecimal is the preferred way of writing numbers when using assembly, and each hex digit corresponds to four bits. Thus, our subroutine takes the higher four bits (sometimes called a nibble, and occasionally spelled as nybble) and converts them to their ASCII text representation. Then it does the same thing with the lower four bits.

How does it do that part, though? Well, that’s the mini-subroutine at the end, starting at the label outb. I use the fact that ASCII represents the digits 0-9 as hexadecimal $30-$39. In other words, all you have to do is add $30. For hex A-F, this doesn’t work, because the next ASCII characters are punctuation. That’s what the compare-and-branch check is for. The code checks to see if it should print a letter; if so, it adds a further correction factor to reach the letters’ ASCII codes, $41-$46. (Since the online assembler doesn’t support true text output, we should comment out this adjustment; we’re only printing pixels, and those don’t need to be changed.)
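For readers more comfortable with high-level code, here is the same nibble-to-ASCII logic as a short Python sketch. (The names hex_byte and nibble_to_ascii are mine, purely for illustration; they aren’t part of the assembly.)

```python
def hex_byte(value):
    """Convert a byte (0-255) to its two-character hex string,
    following the same logic as the assembly subroutine."""
    def nibble_to_ascii(n):
        # Digits 0-9 map to ASCII $30-$39, so just add $30.
        code = n + 0x30
        if code > 0x39:
            # A-F are $41-$46; skip the 7 punctuation
            # characters sitting between '9' and 'A'.
            code += 0x07
        return chr(code)

    high = (value >> 4) & 0x0F   # like the four LSRs
    low = value & 0x0F           # like AND #$0F
    return nibble_to_ascii(high) + nibble_to_ascii(low)

print(hex_byte(0x4A))  # prints "4A"
```

The structure mirrors the assembly one-for-one: shift to isolate the high nibble, mask to isolate the low one, and run each through the same conversion helper.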

This isn’t much, granted. It’s certainly not going to replace printf anytime soon. Then again, printf takes a lot more than 34 bytes. Yes, that’s all the space this whole subroutine needs, although it’s still about 1/2000 of the total memory of a 6502-based computer.

If you’re using the online assembler, you’ll probably want to hold on to this subroutine. Coders using a real machine (or emulation thereof) can use the available ROM routines. On a Commodore 64, for example, you might be able to use JSR $FFD2 instead.

Filling a gap

As I stated above, the 6502 processor can’t multiply. All it can do, as far as arithmetic is concerned, is add and subtract. Let’s fix that.

; Multiplies two 8-bit numbers at $20 and $21
; Result is a 16-bit number stored at $22-$23
; Uses $F0-$F2 as scratch memory
mult:
    ldx #$08    ; X holds our counter: 8 bits to check
    lda #$00    ; clear our result and scratch memory
    sta $22     ; these start at 0
    sta $23
    sta $F1

    lda $20     ; these can be copied
    sta $F0
    lda $21
    sta $F2

nxbit:
    lsr $F2     ; shift the low bit of the multiplier into carry
    bcc next    ; if no carry, skip the addition
    clc
    lda $22     ; 16-bit addition
    adc $F0
    sta $22
    lda $23
    adc $F1
    sta $23

next:
    asl $F0     ; 2-byte shift
    rol $F1
    dex         ; if our counter is > 0, repeat
    bne nxbit
    rts

This one will be harder to adapt to a true machine, since we use a few bytes of the zero page for “scratch” space. When you only have a single arithmetic register, sacrifices have to be made. On more modern machines, we’d be able to use extra registers to hold our temporary results. (We’d also be more likely to have a built-in multiply instruction, but that’s beside the point.)

The subroutine uses a well-known algorithm, sometimes called peasant multiplication, that actually dates back thousands of years. I’ll let Wikipedia explain the details of the method itself, while I focus on the assembly-specific bits.

Basically, our routine is only useful for multiplying a byte by another byte. The result of this is a 16-bit number, which shouldn’t be too surprising. Of course, we only have an 8-bit register to use, so we need to do some contortions to get things to work, one of the problems of using the 6502. (This is almost like a manual version of what compilers call register spilling.)
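If the assembly is hard to follow, here is the same shift-and-add method written out in Python, under the same 8-bit assumptions. (The function name is mine; this is a sketch of the method, not a line-by-line translation.)

```python
def peasant_multiply(a, b):
    """Multiply two 8-bit values (0-255) by shift-and-add:
    halve one operand, double the other, and accumulate
    whenever the halved operand has its low bit set."""
    result = 0
    for _ in range(8):       # one pass per bit, like the X counter
        if b & 1:            # low bit set? (the LSR/BCC test)
            result += a      # the 16-bit addition
        a <<= 1              # double the addend (ASL/ROL)
        b >>= 1              # move on to the next bit
    return result & 0xFFFF   # the result fits in two bytes

print(peasant_multiply(12, 34))  # prints 408
```

Eight iterations suffice because the multiplier is only eight bits wide, and the largest possible product, 255 × 255, still fits in the two result bytes.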

What’s most important for illustrative purposes isn’t the algorithm itself, though, but the way we call it. We have to set things up in just the right way, with our values at the precise memory locations; we must adhere to a calling convention. When you use a higher-level language, the compiler takes care of this for you. And when you use assembly to interface with higher-level code (the most common use for it today), it’s something you need to watch.

As an example, take a modern x86 system using the GCC compiler. When you call a C function, the compiler emits a series of instructions to account for the function’s arguments and return value. Arguments are pushed to the stack in a call frame, then the function is called. It accesses those arguments by something like the 6502’s indexed addressing mode, then it does whatever it’s supposed to do, and returns a result either in a register (or two) or at a caller-specified memory location. Then, the caller manipulates the stack pointer—much faster than repeatedly popping from the stack—to remove the call frame, and continues execution.

No matter how it’s done, assembly code that’s intended to connect to higher-level libraries—whether in C or some other language—has to respect that language’s calling conventions. That’s what extern "C" is for in C++, and it’s also why many other languages have a foreign function interface, or FFI. In our case, however, we’re writing those libraries, and the 6502 is such a small and simple system that we can make our own calling conventions. And that’s another reason we need good documentation when coding assembly.
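To see an FFI in action, here is a minimal sketch using Python’s ctypes module to call the C library’s abs() function. Declaring argtypes and restype is exactly the “respect the calling convention” step described above. (This assumes a POSIX-like system where the C library can be located.)

```python
import ctypes
import ctypes.util

# Load the C library; fall back to the current process's own
# symbols if find_library can't locate it by name.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Declare the calling convention: abs() takes a C int
# and returns a C int.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # prints 42
```

Without the type declarations, ctypes would guess at the argument and return types, which works for simple integers but fails badly for pointers and floats; the declarations play the same role as the documented register and memory conventions of our 6502 routines.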

Coming up

We’ll keep going through this wonderful, primitive world a while longer. I’ll touch on data structures, because they have a few interesting implications when working at this low level, but we won’t spend too much time on them. After that, who knows?

Alternate histories

For a lot of people, especially writers and other dreamers, one of the great questions, one that provokes endless thought, debate, and even argument, is “What if?” What if one single part of history were changed? What would be the result? These alternate histories are somewhat popular, as fictional sub-genres go, and they aren’t just limited to the written word. It’s a staple of Star Trek series, for example, to travel into the past or visit the “mirror universe”, either of which involves a specific change that can completely alter the present (their present, mind you, which would be our future).

What-if scenarios are also found in nonfiction works. Look at the history section of your favorite bookstore, digital or physical. You’ll find numerous examples asking things like “What if the D-Day invasion failed?” or (much earlier in the timeline) “What if Alexander had gone west to conquer, instead of east?” Some books focus on a single one of these questions, concocting an elaborate alternative to our known history. Others stuff a number of possibilities in a single work, necessarily giving each of them a less-detailed look.

And altering the course of history is a fun diversion, too. Not only that, but it can make a great story seed. You don’t have to write a novel of historical fiction to use “real” history and change things around a little bit. Plenty of fantasy is little more than a retelling of one part of the Middle Ages, with only the names changed to protect the innocent. Sci-fi also benefits, simply because history, in the broadest strokes, does repeat itself. The actors are different, but the play remains the same.


So, let’s say you do want to construct an alternate timeline. That could easily fill an entire book—there’s an idea—but we’ll stick to the basics in this post. First and foremost, believability is key. Sure, it’s easy to say that the Nazis and Japanese turned the tide in World War II, eventually invading the US and splitting it between them. (World War II, by the way, is a favorite for speculators. I don’t know why.) But there’s more to it than that.

The Butterfly Effect is a well-known idea that can help us think about how changing history can work. As in the case of the butterfly flapping its wings and causing a hurricane, small differences in the initial conditions can grow into much larger repercussions. And the longer the time since the breakaway point, the bigger the changes will be.

I’m writing this on September 21, and some of the recent headlines include the Emmy Awards, the Greek elections, and the Federal Reserve’s decision to hold interest rates, rather than raising them. Change any bit of any of these, and the world today isn’t going to be much different. Go back a few years, however, and divergences grow more numerous, and they have more impact. Obviously, one of the biggest events of the current generation is the World Trade Center attacks in 2001. Get rid of those (as Family Guy did in one of their time-travel episodes), and most of the people alive today would still be here, but the whole world would change around them.

It’s not hard to see how this gets worse as you move the breakaway back in time. Plenty of people—including some that might be reading this—have ancestors that fought in World War II. And plenty of those would be wiped out if a single battle went differently, if a single unit’s fortunes were changed. World War I, the American Civil War (or your local equivalent), and so on, each turning point causes more and more difference in the final outcome. Go back in time to assassinate Genghis Khan before he began his conquests, for instance, and millions of people in the present never would have been born.

Building a history

It’s not just the ways that things would change, or the people that wouldn’t have lived. Those are important parts of an alternate history, but they aren’t the only parts. History is fractal. The deeper you go, the more detail you find. You could spend a lifetime working out the ramifications of a single change, or you could shrug it off and focus on only the highest levels. Either way is acceptable, but they fit different styles.

The rest of this post is going to look at a few different examples of altering history, of changing a single event and watching the ripples in time that it creates. They go in reverse chronological order, and they’re nothing more than the briefest glances. Deeper delving will have to wait for later posts, unless you want to take up the mantle.

Worked example 1: The Nazi nuke

Both ways of looking at alternate timelines, however, require us to follow logical pathways. Let’s look at the tired, old scenario of Germany getting The Bomb in WWII. However it happens, it happens. It’s plausible—the Axis powers lost a great deal of scientific talent to emigration and defection around that time, including Albert Einstein, Wernher von Braun, and Enrico Fermi. It’s not that great a leap to say that the atomic bomb could be pushed up a couple of years.

But what does that do to the world? Well, it obviously gives the Axis an edge in the war; given their leaders’ tendencies, it’s not too much of a stretch to say that such a weapon would have been used, possibly on a large city like London. (In the direst scenario, it’s used on Berlin, to stop the Red Army.) Nuclear weapons would still have the same production problems they had in our 1940s, so we wouldn’t have a Cold War-era “hundreds of nukes ready to launch” situation. At most, we’d have a handful of blasts, most likely on big cities. That would certainly be horrible, but it wouldn’t really affect the outcome of the war that much, only the scale of destruction. The Allies would likely end up with The Bomb, too, whether through parallel development, defections, or espionage. In this case, the Soviets might get it earlier, as well, which might lead to a longer, darker Cold War.

There’s not really a logical path from an earlier, more widespread nuclear weapon to a Nazi invasion of America, though. Russia, yes, although their army would have something to say about that. But invading the US would require a severe increase in manpower and a series of major victories in Europe. (The Japanese, on the other hand, wouldn’t have nearly as much trouble, especially if they could wrap up their problems with China.) The Man in the High Castle is a good story, but we need more than one change to make it happen.

Worked example 2: The South shall rise

Another what-if that’s popular with American authors involves the Civil War. Specifically, what if the South, the Confederacy, had fought the Union to a stalemate, or even won? On the surface, this one doesn’t have as much military impact, although we’d need to tweak the manpower and supply numbers in favor of our new victors. (Maybe France offered their help or something.) Economically and socially, however, there’s a lot of fertile ground for change.

The first and most obvious difference would be that, in 1865 Dixie, slavery would still exist. That was, after all, the main reason for the war in the first place. So we can accept that as a given, but that doesn’t necessarily mean it would be the case 150 years later. Slavery started out as an economic measure as much as a racial one. Plantations, especially those growing cotton, needed a vast amount of labor. Slaves were seen as the cheapest and simplest way of filling that need. The racial aspects only came later.

Even by the end of the Civil War, however, the Industrial Revolution was coming into full force. Steam engines were already there, and railroads were growing all around. It’s not too far-fetched to see the South investing into machinery, especially if it turns out to be a better, more efficient, less rebellious method of harvesting. It’s natural—for a Yankee, anyway—to think of Southerners as backwards rednecks, but an independent Confederacy could conceivably be quite advanced in this specific area. (There are problems with this line of reasoning, I’ll admit. One of those is that the kind of cotton grown in the South isn’t as amenable to machine harvesting as others. Still, any automation would cut down on the number of slaves needed.)

The states of the Confederacy depended on agriculture, and that wouldn’t change much. Landowners would be reluctant to give up their slaves—Southerners, as I know from personal experience, tend to be conservative—but it’s possible that they could be wooed by the economic factors. The more farming can be automated, the less sense it makes for servile labor. Remember, even though slaves didn’t have to be paid, they did have costs: housing, for example. (Conversely, slavery can still exist if the economic factors don’t add up in favor of automation. We can see the same thing today, with low-wage, illegal immigrant labor, a common “problem” in the South.)

Socially, of course, the ramifications of a Confederate victory would be much more important. It’s very easy to imagine the racism of slavery coming to the fore, even if automation ends the practice itself. That part might not change much from our own history, except in the timing. Persecuted, separated, or disfavored minorities are easy to find in the modern world, and their experiences can be a good guide here. Not just the obvious examples—the Palestinians, the Kurds, and the natives of America and Australia—but those less noteworthy, like the Chechens or even the Ainu. Revolt and rebellion might become common, even to the point of developing autonomous regions.

This might even be more likely, given the way the Confederacy was made. It was intended to be a weak national government with strong member states, more like the EU than the US. That setup, as anyone familiar with modern Europe will attest, almost nurtures the idea of secession. It’s definitely within the realm of possibility that the Confederate states would break up even further, maybe even to the point of individual nations, and a “black” state might splinter off from this. If you look closely, you can see that the US became much more centralized after the Civil War, giving more and more power to the federal government. The Confederates might have to do that, too, which would smack of betrayal.

Worked example 3: Gibbon’s nightmare

One of the other big “change the course of history” events is the fall of the Roman Empire, and that will be our last example today. How we prevent such a collapse isn’t obvious. Stopping the barbarian hordes from sacking Rome really only buys time; the whole system was hopelessly corrupt already. For the sake of argument, let’s say that we found the single turning-point that will stop the whole house of cards from falling. What does this do to history?

Well, put simply, it wrecks it. The Western world of the last fifteen hundred years is a direct result of the Romans and their fall. Now, we can salvage a lot by deciding that the ultimate event merely shifted power away from Rome, into the Eastern (Byzantine) Empire centered on Constantinople. That helps a lot, since the Goths and Vandals and Franks and whatnot mostly respected the authority of the Byzantines, at least in the beginning. Doing it like this might delay the inevitable, but it’s not the fun choice. Instead, let’s see what happens if the Roman Empire as a whole remains intact. Decadent, perhaps, and corrupt at every level, but whole. What happens next?

If we can presume some way of keeping it together over centuries, down to the present day, then we have a years-long project for a team of writers, because almost every aspect of life would be different. The Romans had a slave economy (see above for how that plays out), a republican government, and some pretty advanced technology, especially compared to their immediate successors. We can’t assume that all of this would carry down through the centuries, though. Even the Empire went through its regressive times. The modern world might be 400 years more advanced, but it’s no less likely that development would be set back by a hundred or more years. The Romans liked war, and war is a great driver of technology, but you eventually run out of people to fight, and a successful empire requires empire-building. And a Pax Romana can lead to stagnation.

But the Dark Ages wouldn’t have happened, not like they really did. The spread of Islam might have been stopped early on, or simply contained in Arabia, but that would have also prevented their own advances in mathematics and other sciences. The Mongol invasions could have been stopped by imperial armies, or they could have been the ruin of Rome on a millennium-long delay. Exploration might not have happened at the same pace, although expeditions to the Orient would be an eventual necessity. (It gets really fun if you posit that China becomes a superpower in the same timeline. You could even have a medieval-era Cold War.)

Today’s world, in this scenario, would be different in every way, especially in the West. Medieval Europe was held together by the Christian Church. Our hypothetical Romans would have that, sure, but also the threat of empire to go with it. Instead of the patchwork of nation-states that marked the Middle Ages, you would have a hegemony. There might be no need for the Crusades, but also no need for the great spiritual works iconic of the Renaissance. And how would political theory grow in an eternal empire? It likely wouldn’t; it’s only when people can see different states with different systems of government that such things come about. If everybody is part of The One Empire, what use is there in imagining another way of doing things?

I could go on, but I won’t. This is a well without a bottom, and it only gets deeper as you fall further. It’s the Abyss, and it can and will stare back at you. One of my current writing projects involves something like an alternate timeline—basically, it’s a planet where Native Americans were allowed to develop without European influence—and it has taken me down roads I’ve never dreamed of traveling. Even after spending hundreds of hours thinking about it, I still don’t feel like I’ve done more than scratch the surface. But that’s worldbuilding for you.

Let’s make a language – Part 6b: Word order (Conlangs)

After the rather long post last time, you’ll be happy to know that describing the word order for our two conlangs is actually quite simple. Of course, a real grammar for a language would need to go into excruciating detail, but we’re just sketching things out at this stage. We can fill in exceptions to the rules as they come. And, if you’re making a natural-looking conlang, then they will come.

Sentences
The sentence level is where Isian and Ardari diverge the most. Isian is an SVO language, like English; subjects go before the verb, while objects go after. So we might have e sam cheres ta hu “the man saw a dog”. (By the way, this is a complete sentence, but we’ll ignore punctuation and capitalization for the time being.) For intransitive sentences, the order is simply SV: es tays ade eya “the children are laughing”. Oblique arguments, when we eventually see them, will immediately follow the verb.

Ardari is a little different. Instead of SVO, this language is SOV, although it’s not quite as attached to its ordering as Isian. Most sentences, however, will end with a verb; those that don’t will generally have a good reason not to. Using the same example above, we have konatö rhasan ivitad “the man saw a dog”. Intransitives are usually the same SV as Isian: sèdar jejses “the children are laughing”. We can change things around a little, though. An Ardari speaker would understand you if you said rhasan konatö ivitad, although he might wonder what was so important about the dog.
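The SVO/SOV contrast between the two languages can be boiled down to a constituent-ordering template. Here’s a quick sketch (the helper function is my own invention, not part of the series) using the glosses from the examples above:

```python
# Sketch: ordering subject (S), verb (V), and object (O) constituents
# according to a basic word-order template. Words are the glosses
# from the example sentences above.

def order_clause(subject, verb, obj=None, word_order="SVO"):
    """Arrange constituents by a basic word-order template.

    Intransitive clauses simply leave the object slot empty,
    and empty slots are skipped.
    """
    slots = {"S": subject, "V": verb, "O": obj}
    return " ".join(slots[c] for c in word_order if slots[c] is not None)

# Isian is SVO, like English:
print(order_clause("e sam", "cheres", "ta hu", "SVO"))    # e sam cheres ta hu

# Ardari is SOV, verb-final:
print(order_clause("konatö", "ivitad", "rhasan", "SOV"))  # konatö rhasan ivitad
```

The same function handles intransitives for free: with no object, both languages reduce to SV, as in Isian `es tays ade eya`.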

Verb phrases

There’s not too much to verb phrases in either of our conlangs, mostly because we haven’t talked much about them. Still, I’ll assume you know enough about English grammar to follow along.

For Isian, calling it “order” might be too much. Adverbs and auxiliary verbs will come before the head verb, but oblique clauses will follow it. This is pretty familiar to English speakers, and—with a few exceptions that will pop up later—Isian verb phrases are going to look a lot like their English counterparts.

Ardari might seem a little bit more complicated, but it’s really just unusual compared to what you know. The general rule for Ardari verb phrases (and the other types of phrases, for the most part) is simple: the head goes last. This is basically an extension to the SOV sentence order, carried throughout the language, and it’s common in SOV languages. (Look at Japanese for a good example.) So adverbs and oblique clauses and all the rest will all come before the main verb.

Noun phrases

Because of all the different possibilities, there’s no easy way of describing noun phrase order. For Isian, it’s actually quite complex, and almost entirely fixed, again like English. The basic order is this:

  • Determiners come first. These can be articles, numerals, or demonstratives. (We’ll meet these last two in a later post.)
  • Next are adjectives, which can also be phrases in their own right.
  • Complement clauses come next. These are hard to explain, so it’s best to wait until later.
  • Attributive words are next. This type of noun is what creates English compounds like “boat house”.
  • After these comes the head noun, buried in the middle of things.
  • After the head, some nouns can take an infinitive or subjunctive phrase.
  • Prepositional phrases are next.
  • Lastly, we have the relative clauses.

That’s a lot, but few noun phrases are going to have all of these. Most will get by with a noun, maybe an adjective or two, and possibly a relative or prepositional phrase.

Ardari isn’t nearly as bad. Once again, the head is final, and this means the noun. Everything else comes before it, in this order:

  • Demonstratives and numerals come first. (Ardari doesn’t have articles, remember.)
  • Attributive adjectives and nouns are next, along with a few types of oblique phrases that we’ll mention as they come up.
  • Relative, complement, postpositional, adjectival, and other complex clauses go after these.
  • The head noun goes here, and this is technically the end of the noun phrase.
  • Some adverb clauses that modify nouns can appear after the head, but these are rare.

For the most part, the order doesn’t matter so much in Ardari, as long as each phrase is self-contained. Since it’s easy to tell when a phrase ends (when it gets to the head noun/verb/adjective/whatever), we can mix things up without worry. The above is the most “natural” order, the one that our fictitious Ardari speakers will use by default.
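To make the two noun-phrase templates concrete, here’s a small sketch. The slot names are invented labels for illustration, not terms from the series, and the helper is hypothetical:

```python
# Slot templates for the noun-phrase orders described above.
# Slot names are invented labels; empty slots are simply skipped.

ISIAN_NP_SLOTS = ["determiner", "adjective", "complement", "attributive",
                  "head", "infinitive", "prepositional", "relative"]

# Ardari is head-final: everything precedes the head noun except
# the rare adverb clauses that can follow it.
ARDARI_NP_SLOTS = ["demonstrative", "attributive", "complex_clause",
                   "head", "adverb_clause"]

def build_np(slots, **filled):
    """Assemble a noun phrase by emitting filled slots in template order."""
    return " ".join(filled[s] for s in slots if s in filled)

# Isian "a dog", with the article in the determiner slot:
print(build_np(ISIAN_NP_SLOTS, determiner="ta", head="hu"))  # ta hu
```

Most phrases fill only a slot or two, so walking the full template costs nothing; the point is that Isian’s order is fixed, while for Ardari this is merely the default.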

Prepositions
Isian has prepositions, and they work just like those in English. Ardari, on the other hand, uses postpositions, which follow their noun phrases, yet another example of its head-final nature. (The “head” of an adpositional phrase is the preposition or postposition itself, not the noun.) We’ll definitely see a lot of both of these in the coming weeks.

Everything else

All the other possible types of phrase will be dealt with in time. For Ardari, the general rule of “head goes at the end” carries through most of them. Isian is more varied, but it will usually stick to something approximating English norms.

Looking ahead

Next up are adjectives, which will give us a way to make much more interesting sentences in both our fledgling conlangs. We’ll also get quite a bit more vocabulary, and possibly our first full translations. (We’ll see about that one. They may be left as exercises for the reader.)

Beyond that, things will start to become less structured. With the linguistic trinity of noun-verb-adjective out of the way, the whole world of language opens up. Think of everything so far as the tutorial mission. Soon, we’ll enter the open sandbox.