Holidays: reality and fantasy

Today, for me, marks the winter solstice. (Officially, it happens just before 5AM tomorrow morning, going by UTC time. I’m in the US Eastern Time Zone, which is 5 hours behind that, so it’s a few minutes before midnight locally.) As the days grow shorter and the year runs out, thoughts naturally turn towards the holidays, of which there are so many right now. Christmas, of course, is only a few days away. Hanukkah isn’t too far behind us. New Year’s is on the horizon, ready to bring 2015 to a close. And that’s not counting the not-so-holy holidays this time of year, like Pearl Harbor Day (and the birthday of one of my uncles) back on the 7th or Boxing Day (and the birthday of a different uncle) on the 26th.

Indeed, in our modern, Western calendar, every month is chock full of holidays. (Except August, much to my brother’s delight; it’s totally bare, so his birthday is all by itself.) But that’s one culture, in one time, and nothing says that everybody has the same holidays. It’s common knowledge that Jews and Muslims don’t celebrate Christmas, for example, while Thanksgiving is an American tradition with no counterpart across the Atlantic. Many countries celebrate Independence Day, but only the USA has it on the Fourth of July.

And what about fictional cultures? What holidays do they have? Tolkien’s hobbits were good English folk, and they essentially used our calendar and our holidays, just with the Christianity filed off. That’s good enough for a lot of stories, but we might want to go deeper. To do that, we need to understand the origins of holidays.

For every season

For a “traditional” pre-industrial society, whether agrarian or hunter-gatherer, life is sustained directly by the earth itself. Food comes from nature, and it is the single most important facet of life. And food follows the seasons, whether the growing seasons of plants or the mating or hibernating or migrating seasons of animals. Life, living, is governed by the calendar. That’s where most of our traditional holidays come from. As it turns out, they might have different names, but almost every culture has a similar set.

Imagine an analog clock face. Now, imagine that this represents the year. Summer, the season with the highest temperatures, can go at the top, with the solstice at the 12 o’clock position. Winter, conversely, will be the low point: 6 o’clock. The spring and autumn equinoxes then fit in at 9 and 3, respectively. And time passes like this in its eternal cycles. Simple, right? Each of those four points I identified is an important marker in the year, one recognized by most cultures. (Tropical cultures are a bit of an exception, since they don’t have the most obvious distinction of the seasons, the changing length of the night. But they can still tell the seasons by patterns in rainfall, winds, and the natural behavior of plants and animals.)

For a lot of places in the temperate zones, the spring (vernal) equinox marks the point in the year when temperatures are warm enough to make planting viable. In the same way, the autumnal equinox is a good sign that cold weather is moving in soon, and it’s time to start thinking about harvests and preparing for winter. Since temperate locales tend to show a big difference between hot and cold seasons, this is a very important part of the calendar. Freezing weather kills many plants, including most of those a pre-industrial society depends on for food. Planting too early and harvesting too late are both very real dangers that can, at the worst, lead to widespread famine. (Look up the Year Without a Summer for a fairly recent example of this.)

In a similar vein, the solstices are milestones in the calendar. Among older cultures, the winter solstice has been historically more important, whether as a time to look forward to the spring ahead or to celebrate the passing year. Summer, in temperate regions, is a relative time of plenty already, so it gets less attention. Besides, no one who lives a pastoral life looks forward to the lean times of winter.

So, for many cultures that haven’t reached the Industrial Age (where advances in technology allow food yields to increase faster than the population), these four times are some of the most likely suspects for holidays. And we can add to them four more: the midpoints between each pair. On our imaginary clock, those are at 1:30, 4:30, 7:30, and 10:30; on the calendar, they’re around the beginning of February, May, August, and November. Indeed, some calendars—the Celtic calendar is one example—use those to determine the seasons, while our familiar equinoxes and solstices become their midpoints.
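
If you want to lay these out for an invented calendar, the math is trivial. Here’s a little Python sketch (mine, not anything official) that splits an arbitrary year into eight evenly spaced markers, anchored on the winter solstice. The real equinox and cross-quarter dates drift by a few days because of the elliptical orbit, but it’s close enough for worldbuilding.

    # Split a year of any length into the eight seasonal markers discussed here.
    # Day numbers count from the start of the year; day 355 is roughly Dec 21.
    def seasonal_markers(year_length=365.25, winter_solstice_day=355):
        names = ["winter solstice", "Imbolc", "spring equinox", "Beltane",
                 "summer solstice", "Lughnasa", "autumn equinox", "Samhain"]
        step = year_length / 8
        return {name: round((winter_solstice_day + i * step) % year_length)
                for i, name in enumerate(names)}

    print(seasonal_markers())          # an Earth-like year
    print(seasonal_markers(687, 650))  # a Mars-length year, solstice late in the year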

Altogether, then, we have eight days that make obvious sense for agrarian holidays. On our calendar, roughly, they are: February 1, March 20, May 1, June 21, August 1, September 23, November 1, and December 21. And true enough, the Western world has celebrations for just about all of them:

  • Early February: Groundhog Day is a modern spectacle that hearkens back to actual folk wisdom regarding the coming of spring. The Christian feast day of Candlemas probably replaced many of those “pagan” traditions. And America’s bloodsport of choice has its biggest day around this time, too: the Super Bowl.

  • Late March: Essentially everybody celebrates the first of spring. (If you’re a Celt, then that was in the previous entry, as Imbolc. Otherwise, it’s probably right here.) Most of the European rituals were subsumed into Easter, but the pagan origins are still evident. Look elsewhere in the world, though, and you’ll find planting holidays and end-of-winter feasts aplenty.

  • Early May: By the middle of spring, lots of flowers are blooming, and that’s the basic idea around these holidays. Nowadays, May Day celebrates workers in industrialized countries, but the floral connection still exists. The US has never really been a big May Day place, so Mother’s Day pops up here. It’s not a traditional festival-type holiday, though, so we’ll get to it later. The Celts, by the way, started counting summer here, calling it Beltane.

  • Late June: Again, we don’t really have a lot going on this time of year, but that wasn’t always the case. Midsummer was celebrated by plenty of cultures, and it’s a very big thing in northern Europe to this day. Christianity appropriated it as St. John’s Day, but find somebody in America who knows that. Of course, we have the nearby Fourth of July, so it’s understandable. Anyway, midsummer holidays tend to celebrate the long days, maybe even with bonfires that try to further drive back the night.

  • Early August: By August, summer is starting to run out, and fall is approaching. The earliest harvests start around this time, and the traditional Anglo-Saxon calendar marks August 1 as a “first harvest” festival for wheat crops, called Lammas (Lughnasa by the Celts). The timing doesn’t work everywhere, nor does it work for every crop, so not everybody has a harvest holiday around here, although they’ll have one somewhere.

  • Late September: Traditional harvest festivals tend to fall around the first of autumn. In other words, right here. The Harvest Moon is the full moon closest to the equinox, and its light can be seen as a blessing to those working the fields, giving them a little extra to see by. Harvest, of course, is a time of hard work, but also of feasting. Before modern food storage techniques, people had to eat what they could, lest it go to waste.

  • Early November: Celts have Samhain, Christians have All Saints’ Day, and children have Halloween. These are all connected, as the Church took over the pagan festival, then the people took over the holy feast. Some other cultures have something here, but this one isn’t that big a time to celebrate, as it means that winter is coming. Maybe if you’re a Stark…

  • Late December: In modern times, we’d see it as ending the year with a bang. For a lot of people (not just Christians, for that matter), Christmas is the holiday. But it has its pagan origins, too: traditional Yule and Roman Saturnalia. All of them have the same general idea, though. A feast to get through the long winter nights, a time to look forward to spring, a day to reflect on the year that was and the year that soon will be, all of that fits this time of year. So does gift-giving, that most popular of Christmas traditions. What better time to give to those in need, if not the shortest day of the year?

Getting religion

So that’s it for the agrarian calendar. Add religion to the mix, and things get hairy. For Christianity, it’s mostly simple, as the Church subsumed the pagan holidays into its own, sometimes only by changing their names. They did add some of their own, like Ash Wednesday or the feast of the Assumption, that don’t match up to the seasons. Judaism and Islam, which keep their own calendars, have their own holidays, like Hanukkah and Ramadan, and the same would be true even for fictional religions.

Here, it’s hard to give guidelines. Religious observances that aren’t anniversaries of known events can fall anywhere in the year. They can even be movable, and not in obvious ways: calculations of the date of Easter drove centuries of Christian astronomy. And those that are annual commemorations don’t necessarily need any connection to the actual date the event happened. After all, there isn’t even Biblical evidence that Jesus was born in December. (That he was crucified in spring is pretty solidly confirmed, however.)

My best advice is to think about the religion. What days are most important? Those will likely be the ones most celebrated. Then look at the rest of the calendar. People like feasts, but they don’t want too many too soon. That gets expensive. So the next most celebrated holidays will likely be those far from other holidays. It’s not an exact science—it doesn’t explain the American August drought—but it’s a good start.

Also, if your story involves a polytheistic religion, think about the different gods and their functions. Gods of agriculture and nature are going to be more tied to the seasons. Death and winter are often linked, for obvious reasons, so a death god might have a holiday in or near winter. Spring is seen as a time of love, fire goes with summer, and I’m sure you can find other relations.

Inventions

As states become more centralized, especially once industrialization comes about, the nature of holidays begins to change. Sure, the usual suspects are still there: harvest feasts, planting festivals, summer bonfires and winter gifts. But these are increasingly accompanied by a new set of holidays, and we should spend some time on them.

Many of our “secondary” holidays originally had a religious significance, largely stemming from the Catholic saints’ days. Valentine’s Day is one of these, though it also falls on the day of a Roman feast (Lupercalia) that had many of the same romantic connotations. Saint Patrick’s Day is another, but it’s also a “nationalist” holiday, with its strong Irish connection. For these, as for Christmas and Halloween, it’s a case of the secular overtaking the religious. Likewise, Thanksgiving originally had some religious overtones, but these are all but forgotten.

Other holidays are directly nationalist, and these obviously depend on the country. But they all have in common the idea of commemorating a person or group. In the US, for example, we have holidays to honor Christopher Columbus, Martin Luther King Jr., veterans (originally of World War I, but later expanded to all of them), mothers, fathers, workers, and presidents. The specifics will differ, but a fictitious country would likely have its own set of honored people. This would depend on history, societal norms, technological advancement, and the circumstances around the formation of that country, all of which are good topics for future posts.

Elsewhere

On other planets, the seasons still work the same way. A terrestrial planet with a year like Earth’s will have a natural calendar like Earth’s. The names and dates will be changed, but the broad outline will remain the same.

We don’t even know what kind of life can arise on less-familiar worlds, but it stands to reason that they’d have similar ideas about the calendar. Of course, around a red M star, a habitable world’s year only lasts a few weeks, so things will likely break down at this extreme. At the other end of the spectrum, habitable planets around F stars might have years 3 or more times that of ours, meaning longer, more extreme seasons. More holidays would appear in a longer calendar like this, if only to break up the monotony.

Now, a society spanning multiple worlds has a conundrum. Most of the holidays, at first, would be those of the homeworld. But colonies would soon become like nations on Earth, each developing their own set of observances (for the same reasons, no less). Almost all of these would be purely local, but some would rise in prominence, as St. Patrick’s Day has done here.

Conclusion

However you do it, holidays add flavor to a world. They’re an important part of life. They have been for thousands of years, and they will be as long as we continue to observe them.

Most of a culture’s holidays are going to come from its roots, and each will have a story. Some are religious, others entirely dependent on the whims of the seasons. A few started out as movements for political or social change, or to honor the leaders of such. And today, every day of the year has been claimed in the name of some organization. (My own birthday of October 16, for instance, is Boss’s Day, which would be great if I had employees. It’s also World Food Day and World Anesthesia Day, because of historical anniversaries.)

As I said before, most stories won’t need this level of detail. But it can find a place in worldbuilding, and it’s always good to have the answers to the kinds of questions you never thought to ask. So, consider this a gift. And whichever holiday you happen to be celebrating over the next week or so, I hope you enjoy it.

Let’s make a language – Part 10a: Relative clauses (Intro)

This time around, we’re going to look at what I think is one of the more confusing bits of a language: the relative clause. That is, it’s confusing in theory, as in understanding how it works. Implementing it in a conlang turns out to be a bit easier, but it marks a kind of turning point, in that we’re moving out of the simple grammatical concepts (like plurals or the past tense) and into the more complex world of phrase-level grammar. I guess you could even say this starts “Part 2” of the series.

It’s all relative

Relative clauses. What they are is right there in the name. First of all, they’re clauses, meaning that they are essentially self-contained phrases that have all the necessary parts to state a fact about something. (This is different from the prepositional phrases from last time, which can only really add information to an existing clause.) And second, relative clauses are relative. In other words, the new facts they provide somehow relate to something else.

That’s a grammatical definition, anyway. In English, what relative clauses are is fairly obvious: they are the phrases that we use to put more meaning in a sentence. (Conveniently enough, that last sentence ended with one.) Unlike an adjective phrase, a relative clause works more like a whole new sentence embedded in an existing one, except that one part of it refers directly to something in that existing sentence.

For example, let’s take a simple sentence with a relative clause: The nice man who lives next door has three big dogs. Okay, I’m not the best on example sentences, but it’ll work, and it doesn’t use any other grammar we haven’t already seen in the series. So what have we got? Well, let’s break it down. Starting at the end, we have a predicate phrase, has three big dogs, which contains a verb (has) and a noun phrase (three big dogs).

Those are nothing new, so we’ll ignore them completely and focus on the first half of the sentence, the subject phrase: the nice man who lives next door. Clearly, man is the head of this phrase, and the and nice are an article and adjective, respectively. And that leaves who lives next door, which is our relative clause. It’s formed almost like it could be a sentence, with its own verb (lives) and everything, but the subject is all wrong for that.

In a sense, we’ve combined two sentences. We have the main statement, the nice man has three big dogs, but we also want to clarify some things about this nice man, so we have another sentence, he lives next door. In both cases, we’re talking about the same man, and that’s the “hook” that lets us put the relative clause into the original statement.

Subjects and objects

In our example, the subject of the “outer” clause was the same as that of the relative clause. That doesn’t always have to be the case. In English, as in many languages, it’s possible to switch things around. You can also have, for instance, objects with attached relative clauses: I talked to the man who lives next door. Inside the relative clause, the man is still the subject, but on the outside, he’s the direct object. Similarly, the “relativized” part can be the object of its own clause (the man that I saw yesterday), or part of a possessive or other construction (the man whose dogs are always barking).

English, admittedly, makes all this a little unclear. In cased languages, it’s a lot easier to keep track of everything, and that’s one of the grammatical pedant’s arguments for still using whom for relativized objects. (Their opinion on sentences ending in a preposition comes into play here, too, because that comes from the different ways relative clauses are made in English and Latin. In Latin, you can’t “split” the preposition from the relative pronoun, like you can in English.)

Not every language allows all types of nouns to be relativized, though. There are a few different roles available, and it seems to be a linguistic universal that they fall into a natural order, called the accessibility hierarchy:

  1. Subject
  2. Direct object
  3. Indirect object
  4. Oblique argument (English prepositional phrases)
  5. Genitive (where English would use whose)
  6. Comparative object (such as the people I am older than)

As the theory goes, if any one of these can’t be relativized, then nothing lower on the list can, either. In other words, any language that allows relative clauses at all is going to allow them for subjects, while one that doesn’t let you use them for objects won’t let you for genitives, either. English offers the full range, as do most of the “big” world languages, but that’s not always the case.

Those languages that don’t let you use the full hierarchy often have some way of accomplishing the same thing. There might be a special verbal voice (like the applicative or the antipassive), for example.

Other ways

Now, just because our language creates relative clauses a certain way by default (using a relative pronoun like “who”, “whose”, or “which”), doesn’t mean that’s the only way to do it. And that’s where things can get interesting. Indeed, English itself gives you a few options:

First off, you don’t actually need a relative pronoun; you can get by without one: the girls I saw outside has a relative clause with no pronoun. This one pops up in a lot of languages, and linguists refer to it as a gap strategy. The “gap” refers to where the relativized part of the sentence would go.

Second, you can use the gap strategy with a kind of linking particle, sometimes called a complementizer. This one is also an option in English, using that: the girls that I saw outside. This is common throughout the world, and it has become a kind of catch-all in English, much to the despair of some.

Both of the above alternatives carry less overt information than the relative pronoun typical of European languages. But we can go in the other direction, as well. We can add a resumptive pronoun, which is a regular personal pronoun that appears where the relativized argument would, as if we could say the girls that I saw them outside. (This one can appear in spoken English, when a speaker gets too bogged down in relative clauses. It’s happened to me plenty of times.) The resumptive pronoun can also be moved to the front of the relative clause in some languages, but that doesn’t change the core “method”.

A relative clause can also show up as some other kind of clause. Turkish has a nominalizing construction that would render our example as something like the girls of my seeing outside. Some languages of East Asia, conversely, use a genitive construction that would come out closer to the girls of I saw outside. Constructs like these tend to work best when independent words denote such meanings, rather than case affixes. If a language has a passive voice, that’s another possibility for relative clauses. Our running example in this section makes this one more cumbersome, but you might get something like the girls seen outside.

Finally, a few languages dispense with relativized clauses altogether. These have internally-headed clauses, where the clause that would be relative is simply inserted into the main sentence as-is. Using our original example sentence, this might come out in a literal translation as the nice man lives next door has three big dogs. (This kind shows up a little bit more often among the native languages of the Americas, but also in a few scattered locations elsewhere in the world.) A similar option is found in Hindi, which uses a reduced correlative word or phrase in the main clause, almost the opposite of the “relative pronoun” process of English. Our example sentence might be literally translated in this case as which nice man lives next door, that man has three big dogs.

Seeing all your relatives

That just about covers the largest category of relative clause, but it’s not the only one. We can also have something called a free relative clause, which isn’t really relative to much of anything. Wikipedia’s English example is I like what I see, which nicely illustrates this. Languages don’t have to allow this one at all, but many of them do; a different wording might be I like that which I see, which sounds very stilted in English, but more explicitly shows the grammar involved.

Another clause that “sounds” relative is the adverbial clause, something like when I came home. I mean, it looks like it’s a relative clause, right? It’s got when, and that’s in the same class as who and where, isn’t it? But it isn’t, not really. It is, however, the topic for the next part of the series, so we’ll leave that discussion for later.

Wireless woes

I’m really not trying to get out of posting Wednesday stuff. It’s just that fate seems to be conspiring against me. Last week, I was so sick I couldn’t get out of bed. This time around, the illness has moved to my wireless router.

It started a couple of weeks ago, when my trusty old RT-N16 started acting up. Wireless speeds became unbearably slow, and latency was horrible. (We’re talking a 200 ms ping to the next room.) So, after eliminating all other potential sources of trouble, I bought a new router: an RT-AC66, also by Asus.

That, apparently, was a mistake. Don’t get me wrong, I’m sure the AC66 is a great router. But there’s a big problem, and it turns out that it’s the same problem that the N16 had.

After some research, I think I’ve pinpointed the problem: the wireless chipset. Both routers use essentially the same one for the 2.4 GHz band. And both of them use essentially the same firmware, thus the same driver for that chipset. That driver, however, doesn’t exactly work.

On various forums, Asus support people have said that a fix is in the works. They’ve said this in posts dating back to 2013, in fact. Yet no fix has solved all the problems.

So I’ve got a few options. I could try alternate firmware (DD-WRT, for example). That’s a road fraught with peril, as you may know, and there’s no guarantee it would really help. So, I’m going with option 2: get a third router. This one is a TP-LINK Archer C7. It’s a different manufacturer and a different chipset. That, of course, will mean a different set of problems, but (hopefully) some that are fixable.

The changing of the seasons

Winter is coming. It’s not just a catchy motto from Game of Thrones, you know. No, winter really is on its way, as the seasons move on their eternal cycle. And this change from fall to winter can make you wonder. We know why the seasons change: our planet’s tilt, combined with its movement around the sun. But what does that truly mean? And, from a worldbuilding perspective, does it have to be that way? Well, let’s take a look.

Reason for the season

The Earth is tilted on its axis. Anybody past about the third grade knows that, and it’s patently obvious just by looking at the sky at different points in the year. Right now, our world has somewhere in the vicinity of 23° of axial tilt, and that’s a fairly stable number. It hasn’t changed much at all in written history, and only within about a degree either way throughout all of human existence. In the distant past (millions of years ago), there were periods where it was much higher or lower, but things are much more settled in this modern era.

Now, the axis doesn’t move, at least on scales of a single year. (We’ll ignore precession and other effects for the moment, as they tend to work on much larger periods of time.) What does that mean for us? Only that different parts of the world will get more sunlight at different times of the year. And that’s what causes the seasons to change.

Summer, of course, is when your part of the world gets the most direct sunlight, and that happens when your half of the world points more towards the sun. Winter is the exact opposite, and it’s on the other side of the year. Spring and fall (autumn, if you prefer) are in the middle, when the planet’s tilt is roughly perpendicular to the sun’s rays. But the Earth has two hemispheres: northern and southern. They can’t both be pointed at the sun, thus the complementary seasons that make Christmas a summertime holiday in Australia.

Tropical highs and lows

There’s a lot more to it than that, though. Because of the Earth’s tilt of about 23°, we can divide the world into a few sections. First, we have the tropics, the area around the equator, from the Tropic of Cancer in the north, to the Tropic of Capricorn in the south. Coincidentally enough, these lines are at exactly the latitude equal to the axial tilt. (It’s not a coincidence at all; it’s the whole reason why they exist.) Every point in the tropics will have the sun directly overhead at some time in the year.

The polar regions are also defined by the tilt. The Arctic and Antarctic Circles are at a latitude of about 67°, or as far from the pole (90°) as the axial tilt, or in math terms: $90° – a$. Everywhere in a polar region will have at least one day when the sun never rises at all. But it will also have days where the sun doesn’t set, giving us the “midnight sun” of Alaska and Scandinavia.

In between the polar and tropical regions lie the temperate zones. In these, the sun will never be directly overhead or directly below, and it will rise and set every day. And it’s here that seasonal variation has the most visible effects.

Day and night

If the Earth wasn’t tilted, there wouldn’t be any seasons. Every night would be 12 hours long, no matter where you were. But we don’t live in that world, we live in one that is tilted. Thus, our nights change in length. At the equinoxes, the lengths of day and night are equal, hence the name. At the solstices, they’re as far apart as can be. In between, there’s a gradual shifting that gives us the feeling that days are growing longer or shorter.

As you get farther from the equator, the variation grows. Thus, at my latitude of around 35° north, I only get about 10 hours of daylight on the winter solstice, but summer nights will be just as short. Up in New York, the split might be closer to 15/9, while London stretches to about 17/7. Helsinki, up near 60°, is going to have some long winter nights, but there will always be a sunrise. Barrow, Alaska and McMurdo Station in Antarctica are both inside the polar regions, so they’ll have days without nights, and vice versa.
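
If you want to check figures like these yourself, the standard sunrise equation is enough for a rough cut. Here’s a quick sketch (my own, not anything rigorous; it ignores atmospheric refraction, which adds ten minutes or so to the real day) that turns a latitude and the sun’s declination into hours of daylight. At the solstices, the declination is just the axial tilt, positive or negative.

    import math

    # Day length from latitude and solar declination (both in degrees),
    # using the textbook sunrise equation. No refraction, so it runs a bit short.
    def day_length_hours(latitude, declination):
        x = -math.tan(math.radians(latitude)) * math.tan(math.radians(declination))
        if x <= -1:
            return 24.0   # midnight sun
        if x >= 1:
            return 0.0    # polar night
        return 2 * math.degrees(math.acos(x)) / 15   # the sky turns 15 degrees per hour

    for place, lat in [("35 N", 35.0), ("New York", 40.7), ("London", 51.5),
                       ("Helsinki", 60.2), ("Barrow", 71.3)]:
        print(place, round(day_length_hours(lat, -23.4), 1))   # winter solstice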

An added complication

The whole thing would be perfectly symmetrical but for one little detail. Earth’s orbit around the sun isn’t a perfect circle. It’s an ellipse. That ellipse doesn’t move any more than the axis does. (Again, we’re ignoring precession.) As of right now, the perihelion, the point closest to the sun, comes around in January, during the northern winter. Orbital mechanics dictates that the aphelion, then, is six months later.

As anyone who has played Kerbal Space Program knows, things move more slowly at apoapsis. (“Aphelion” is just the apoapsis of something orbiting the sun.) Therefore, since our apoapsis occurs in July, northern summer is a little bit longer than winter, while the southern hemisphere is the other way around. It’s not much of a difference, only a few days, so it doesn’t affect the climate that much. But it’s something you may have to keep in mind.
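
If you’re curious where those numbers come from, Kepler’s laws will get you there in a dozen lines of code. The sketch below is mine, not anything from an almanac; the 283° figure is Earth’s approximate longitude of perihelion (perihelion falls in early January), and you’d swap in different values for a fictional planet.

    import math

    # Rough astronomical season lengths from orbital eccentricity.
    # Boundaries sit at ecliptic longitudes 0, 90, 180, 270 (March equinox,
    # June solstice, September equinox, December solstice).
    def season_lengths(ecc, period_days, perihelion_lon):
        def days_since_perihelion(true_anomaly):
            # true anomaly -> eccentric anomaly -> mean anomaly -> time
            E = 2 * math.atan2(math.sqrt(1 - ecc) * math.sin(true_anomaly / 2),
                               math.sqrt(1 + ecc) * math.cos(true_anomaly / 2))
            M = E - ecc * math.sin(E)
            return (M % (2 * math.pi)) / (2 * math.pi) * period_days

        t = [days_since_perihelion(math.radians((lon - perihelion_lon) % 360))
             for lon in (0, 90, 180, 270)]
        names = ["N spring", "N summer", "N autumn", "N winter"]
        return {n: round((t[(i + 1) % 4] - t[i]) % period_days, 1)
                for i, n in enumerate(names)}

    print(season_lengths(0.0167, 365.25, 283))
    # roughly: N spring 92.8, N summer 93.6, N autumn 89.7, N winter 89.1 days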

Another world

So all that works for Earth. How about a different planet? How would the seasons work? The answer: about the same. Earth is simply the most convenient example, since we’re already living here. Mars has seasons, too; the Phoenix lander was killed by the rigors of a Martian polar winter. For the rest of the solar system, things get dicey. Jupiter doesn’t have much tilt, for example, while Uranus is practically lying on its side. Mercury has its resonance-lock thing going on, which screws everything up. And moons don’t really work the same way.

But for your ordinary, habitable, terrestrial world, seasons are going to be like Earth’s. Summer and winter, spring and fall, they’re all going to be there. They may be different lengths, based on the planet’s orbital period and eccentricity. The tropical and polar zones may be larger or smaller, if the tilt isn’t our 23°. The division of day and night might scale differently, due to these same factors. But from a scientific point of view, that’s all you have to worry about. The years-long summers and winters of Westeros are scientifically implausible; you need magic to account for them.

Summer is always going to be the hottest part of the year, with the most sunlight and shortest nights. Winter will be the coldest; the sun will hang low in the sky, and its rays will strike more glancing blows on the world. Spring and autumn will both be marked by equinoxes, days when the periods of daylight and darkness are the same length. Spring tends to get warmer as you go through it, while autumn cools down.

In the tropics of your fictional world, there won’t be as much seasonal variation, especially close to the equator. The poles, by contrast, will be marked by long summer days, cold winter nights, and periods of total darkness or everlasting sunshine. In between will be the temperate zones, where civilization tends to flourish. And the southern hemisphere will always be backwards when it comes to the calendar.

But this is all speaking from the view of orbital mechanics. On the ground, there is a lot of room for change. Latitude only determines the kinds of seasons you have, whether tropical, temperate, or polar. A location’s climate is certainly affected by this, but many more factors come into play, so many that I’ll dedicate a future post to them.

Let’s make a language – Part 9b: Prepositional phrases (Conlangs)

Prepositional phrases, despite how important they are to expressing oneself in a language, don’t have all that much grammar. So we can combine both Isian and Ardari into one post, and we’ll even have time to add in a bit about adverbs while we’re at it.

Isian

Isian uses postpositions instead of prepositions, which is a change that might be hard to get used to. When they’re used to modify a noun, they usually follow it. If they’re supposed to modify a verb, then they’ll usually come at the end of a sentence, but not always. Sometimes, they’ll go right after the verb, and this signifies a greater emphasis on the phrase. It’s all in how you want to say it.

Simple nouns or noun phrases are easy to use with a postposition. Just put it after the phrase: e talar i “in the house”; sir mi fo “from my heart”. (I’ll show a whole bunch more at the end of the post.)

If we want to add in a bit of action to our phrase, then we have a special verbal marker, cu, that indicates something like an infinitive (“to go”) or a gerund (“going”): cu oca anos “without asking”. It’s not only used with postpositions, and we’ll see it pop up a few times later on.

Adjectives, as we saw a few posts ago, usually can’t occur without a noun in Isian. Well, here’s one of the cases where they can. Using an adjective with the special postposition hi (and only this one; it doesn’t work with others) creates a kind of adverb: ichi “beautiful”, ichi hi “beautifully”.

The postposition hi works with nouns, too: sam hi “manly, like a man”. The English translation shows an article, but Isian doesn’t need (and can’t use) one in this situation.

Ardari

As a head-final language, you’d expect Ardari to have postpositions, too, and you’d be right: tyèketö wi “in the house”.

The grammar here isn’t that much different from Isian. Noun phrases in postpositionals work in largely the same way, with one major difference. Remember that Ardari has case for its nouns. What case do we use for a postpositional phrase?

Usually, the accusative is the right answer. But a few postpositions require their nouns to appear in the dative. Some even change meaning based on the case of the noun. For example, wi used with the accusative means “in”, as we saw in tyèketö wi above. But use it in the dative (tyèkètö wi, note the vowel change), and the meaning becomes “into the house”. It’s a subtle difference, both in form and meaning, but it is indeed a difference.

Using a verb in a postpositional phrase isn’t that hard. The particle ky goes after the (uninflected) verb, and then the postposition goes after that: brin ky vi “while walking”; chin ky nètya “after going”.

Making an adverb out of a noun or phrase uses this same little word, but with the copula verb èll-: kone èll ky “like a man”. (You could say that èll ky is the Ardari adverb marker, but it’s not that simple.) Simple adjectives, on the other hand, can be used directly, so ojet can mean “sweet” or “sweetly”, depending on whether it modifies a noun or a verb: ojeta obla “sweet water”; ojet ajang ky “singing sweetly”.

The list

As promised, here’s a brief list of some of the most common English prepositions and their closest equivalents in Isian and Ardari.

English        Isian    Ardari
above          apay     aj
across         sos      ori
after          eb       nètya
against        ansir    eka
around         oto      òs
at             ni       äl
before         pane     jo
behind         biso     ab
below          didal    ku
by             hoy      sy
for            ir       da
from           fo       tov
in             i        wi
in front of    ihamo    kulyi
into           si       wi +DAT
of             o        me
on             od       oj
onto           ores     oj +DAT
out of         way      zho +DAT
through        aju      tutwi
to/toward      es       lim
until          nobes    nyon
with           was      chès
without        anos     achèsu

Where “+DAT” appears after a word in the Ardari column, it means that postposition requires a dative noun. Other than that, there’s not much else to say about the table.

Next up

To close out the year, we’ll be looking at relative clauses. Once that’s done, we should have enough of the blanks filled in that 2016 can begin with a bang. Since I write these beforehand, I won’t be taking off for Christmas or New Year’s, because those posts will already be done and waiting.

Sick Day

Since I’ve had a pretty bad cold these last few days (I’m writing this on Monday, but I have no real reason to think I’ll be over it in the next 48 hours), and since I don’t already have posts queued for Wednesday, I’ve decided to take the day off. Sorry ’bout that, but that’s the way it goes sometimes.

I’ve had this Friday’s post written for over a month, so no worries there, and I did have two worldbuilding posts already, so I’m alright for next Monday. It’s just today that got lost in the shuffle. Oh, well. ‘Tis the season, and all that.

Colonization and the New World

It’s common knowledge that the Old World of Europe, Asia, and Africa truly met the New World of the Americas in 1492, when Columbus landed in the Caribbean. Of course, we now know that there was contact before that, such as the Vikings in Newfoundland, about a thousand years ago. But Columbus and those who followed him—Cortés, Pizarro, de Soto, Cabot, and all those other explorers and conquerors Americans learn about in history class—those were ones who truly made lasting contact between the two shores of the Atlantic.

Entire volumes have been written over the last five centuries about the exploration, the conquest, the invasion of the Americas. There’s no need to repeat any of it here. But the subject of the New World is one that doesn’t seem to get a lot of exposure in the world of fiction, with the notable exception of science fiction. And I think that’s a shame, because it’s an awfully interesting topic for a story. It’s full of adventure, of gaining knowledge, of conflict and warfare. Especially for American writers (not limited to the United States, but all of North and South America), it’s writing about the legacy we inherited, and it’s odd that we would rather tell stories about the history of the other side of the ocean.

Written by the victors

Of course, one of the main reasons why we don’t write many stories about exploration and colonization is political. We know a lot about the Spaniards and Englishmen and Frenchmen that discovered (using that term loosely) the lands of America. We have written histories of those first conquistadors, of those that came after, and of the later generations that settled in the new lands. We don’t, however, have much of anything from the other side.

A lot of that is due to the way first contact played out. We all know the story. Columbus discovered his Indians (to use his own term), Cortés played them against each other to conquer them, and smallpox decimated them. Those that survived were in no position to tell their tale. Most of them didn’t have a familiar system of writing; most of those written works that did exist were destroyed. And then came centuries of subjugation. Put that all together, and it’s no wonder why we only have one side of the tale of the New World.

But this already suggests story possibilities. We could write from one point of view or the other (or both, for that matter), setting our tale in the time of first contact or shortly after, in the upheaval that followed. This is quite popular in science fiction, where the “New World” is really a whole new world, a planet that was inhabited when we arrived. That’s the premise of Avatar, for example.

Life of a colony

Colonization has existed for millennia, but it’s only since 1492 that it has become such a central part of world history. The Europeans who moved into the Americas found them filled with wonders and dangers. For the Spanish, the chief problem—aside from the natives—was the climate, as Mexico, Central America, and the Caribbean mostly fall into the tropical belt, far removed from mid-latitude Spain.

The English had it a little better; the east coast of the United States isn’t all that different from England, except that the winters can be harsher. (This was even more the case a few hundred years ago, in the depths of the Little Ice Age.) It’s certainly easier to go from York to New York than Madrid to Managua.

No matter the climate, though, colonists had to adapt. Especially in those times, when a resupply voyage was a long and perilous journey, they had to learn to live off the land. And they did. They learned about the new plants (corn, potatoes, tomatoes, and many more) and animals (bison and llamas, to name the biggest examples), they mapped out river systems and mountain chains. And we have reaped the benefits ever since.

Building a colony can be fun in an interactive setting; Colonization wouldn’t exist otherwise. For a novel or visual work, it’s a little harder to make work, because the idea is that a colony starts out exciting and new, but it needs to become routine. Obviously, if it doesn’t, then that’s a place where we can find a story. Paul Kearney’s Monarchies of God is a great series that has a “settling new lands” sequence. In the science fiction realm of colonizing outer space, you also have such works as Kim Stanley Robinson’s Red Mars (and its colorful sequels).

Terra nullius

Whenever people moved into new land, there was always the possibility that they were the first ones there. It happened about 20,000 years ago in Alaska, about 50,000 in Australia, and less than 1,000 in Hawaii. Even in the Old World, there were firsts, sometimes even in recorded history. Iceland, for example, was uninhabited all the way through Roman times. And in space, everywhere is a first, at least until we find evidence of alien life.

Settling “no man’s land” is different from settling in land that’s already inhabited, and that would show in a story with that setting. There are no outsiders to worry about. All conflict is either internal to the colonists’ population or environmental. That makes for a harder story to write, I think, but one more suited to character drama and the extended nature of books and TV series. It doesn’t have to be entirely without action, though; it’s just that something like a natural disaster would be more likely than war.

This is one place where we can—must—draw the distinction between space-based sci-fi and earthly fiction or fantasy. On earth (or a similar fictitious world), we’re not alone. There are animals, plants, pests everywhere we go. We have sources of food and water, but also of disease. In deep space, such as a story about colonizing the asteroid belt, there’s nothing out there. Nothing living, at least. Settlers would have to bring their own food, their own water, their own shelter. They would need to create a closed, controlled ecosystem. But that doesn’t leave much room for the “outside” work of exploration, except as a secondary plot.

Go forth

I’m not ashamed to admit that I could read an entire book about nothing but the early days of a fictional colony, whether in the Americas or on an alien planet. I’ll also admit that I’m not your average reader. Most people want some sort of action, some drama, some reason for being there in the first place. And there’s nothing wrong with that.

But let’s look at that question. Why does the colony exist at all? The Europeans were looking for wealth at first, with things like religious freedom and manifest destiny coming later on. The exploration of space appears to be headed down the same path, with commercial concerns taking center stage, though pure science is another competitor. Even simple living space can be a reason to venture forth. That seems to be the case for the Vikings, and plenty of futuristic stories posit a horribly overcrowded Earth and the need to claim the stars.

Once you have a reason for having a colonial settlement, then you can turn to its nature. The English made villages and towns, the French trading posts. Antarctica isn’t actually settled—by international agreement, it can’t be—but the scientific outposts there point to another possibility. If there are preexisting settlements, like native cities, then there’s the chance that the colonists might move in to one of them instead of setting up their own place. That’s basically what happened to Tenochtitlan, now known as Mexico City.

Colonies are interesting, both in real history and in fiction. They can work as settings in many different genres, including space opera, fantasy, steampunk (especially the settling of the Wild West), and even mystery (we still don’t know what really happened at Roanoke Island). Even just a colonial backdrop can add flavor to a story, giving it an outside pressure, whether by restless natives or the cold emptiness of space. A colony is an island, in a sense, an island in a sea of hostility, fertile ground for one’s imagination.

Let’s make a language – Part 9a: Prepositional phrases (Intro)

We’ve made quite a bit of progress with our languages since the beginning. They’ve got nouns, pronouns, verbs, and adjectives. We’ve given them plenty of grammar to connect these bits and turn them into sentences. But those sentences are pretty bland. And that’s because there’s no real way for our conlangs to express anything but the most basic of semantic relationships: subject and object. To remedy that, we’re going to talk about a new type of phrase, the prepositional phrase.

English, for example

Prepositional phrases are everywhere you look in English. That last sentence, in fact, ended with one: “in English”. This type of phrase isn’t absolutely necessary to the grammatical “core” of a sentence. It’s extra information that fills in the blanks, giving us more detail about a situation. Prepositional phrases are the way we can learn wheres and whens and hows about something. They’re filler, but that’s exactly what we need.

Looking at English—we’ve got a whole post full of examples, so there’s no reason not to—we can see the general structure of a prepositional phrase. It starts with a preposition, naturally enough, and that’s really the head of the phrase. Prepositions are those words like “in”, “of”, or “below”; they can specify position, location, time, possession, and about a hundred other ideas that are impossible (or just cumbersome) to express with only a verb, subject, and object.

Besides the preposition, the rest of the prepositional phrase is, well, a phrase. It’s usually a noun phrase, but English allows plenty of other options, like gerunds (“in passing”) and adverbs (“until recently”). In theory, there’s nothing stopping a language from letting whole sentences be put into prepositional phrases (“after I came home”), but phrases like that are technically a different kind that we’ll see in a future post. For now, we’ll stick with the more noun-like examples.

Changing it up

English isn’t the only (or even the best) example of how to do prepositional phrases. Other languages do things differently. So, let’s take a look at how they do it.

The first thing to say is that this post’s whole premise is a lie. You don’t need prepositions. You don’t have to have a word that precedes a noun phrase to give more information about a sentence. No, it’s actually a bit more common (according to WALS Chapter 85) to put the preposition after the noun phrase. In that case, it’s not really proper to call it a preposition anymore; pre-, after all, means “before”. So linguists call these words postpositions. Japanese, for instance, uses postpositions, and any anime lover knows of の (no). Thus, “preposition” is technically a misnomer, and the more general term (which we won’t use here) is adposition.

Even rarer types exist, too. There’s the inposition, which pops up in a few languages. It goes somewhere in the middle of the noun phrase. The circumposition has bits that go on either side of the phrase, and this (so Wikipedia says) appears in Pashto and Kurdish. It’s sort of like the French phrasing seen in je ne sais pas. And then there are a few oddballs that don’t seem to have any of these, using verbs or case markers or something like that as stand-ins.

What can fit in the “phrase” part of “prepositional phrase” varies from one language to the next, as well. Noun phrases are let in pretty much everywhere. Not every language has adverbs, though, unless you’re of the belief that any word that doesn’t fit in another class is automatically an adverb. Gerunds aren’t ubiquitous, either. And, of course, languages can go the other way, allowing more possibilities than English.

A note on adverbs

We haven’t really discussed adverbs much, and there’s a good reason: nobody can agree on just what an adverb really is. Sure, it’s easy to say that adverbs are to verbs what adjectives are to nouns, but that doesn’t help much. If you look at it that way, then a prepositional phrase can also be an adverbial phrase. Similarly, some languages—Irish, for example—create regular adverbs by means of a prepositional phrase, their equivalent to English “-ly”. On the other hand, it’s entirely possible for a language to give its adjectives the ability to function as adverbs, as in Dutch.

For a conlang, adverbs are a hard thing to get right. It’s easy to be regular, much harder to be naturalistic. Esperanto mostly goes the regular route, with its productive -e suffix, but there are a few exceptions like hodiaŭ “today”. The best advice I can give here is to have a little bit of both. Have a regular way to derive adverbial meaning from adjectives, but also have a few words that aren’t derived.

In conclusion

How your conlang handles prepositions isn’t something I can tell you. I can give you a few pointers, though. First, there’s a tendency for head-final languages (SOV word order, adjectives preceding nouns, etc.) to use postpositions. This is by no means universal, but it’s something to think about.

And the end of that last sentence brings up another point that often gets overlooked. Can you end a sentence with a preposition? You can in English. The only reason it’s considered “bad” is because of the influence of Latin, which doesn’t allow it. Clearly, a postposition can end a sentence, by its very nature, but the waters are murkier for those that come before. When we get to relative clauses in the next part of the series, the question will be more relevant, but it might be good to have an answer when that time comes.

Other things to consider are what types of phrases can go inside a prepositional phrase, where such phrases fit in a sentence (before the verb, after it, at the end, or wherever), case marking (some languages have prepositions that require their nouns to go in a specific case, while case-heavy languages can go with a smaller set of prepositions), person marking (a few languages require this on the preposition), and the relation between prepositional and other types of phrases. And, of course, the main question: do you have prepositions at all?

Once you’re through all that, you’ve greatly increased the expressive power of your language. Not only can you tell what happened and who did it, but now you can specify where, when, why, and how.

Programming paradigm primer

“Paradigm”, as a word, has a bad reputation. It’s one of those buzzwords that corporate people like to throw out to make themselves sound smart. (They usually fail.) But it has a real meaning, too. Sometimes, “paradigm” is exactly the word you want. Like when you’re talking about programming languages. The alliteration, of course, is just an added bonus.

Since somewhere around the 1960s, there’s been more than one way to write programs, more than one way to view the concepts of a complex piece of software. Some of these have revolutionized the art of programming, while others mostly languish in obscurity. Today, we have about half a dozen of these paradigms with significant followings. They each have their ups and downs, and each has a specialty where it truly shines. So let’s take a look at them.

Now, it’s entirely possible for a programming language to use or encourage only a single paradigm, but it’s far more common for languages to support multiple ways of writing programs. Thanks to one Mr. Turing, we know that essentially all languages are, from a mathematical standpoint, equivalent, so you can create, say, C libraries that use functional programming. But I’m talking about direct support. C doesn’t have native objects (struct doesn’t count), for example, so it’s hard to call it an object-oriented language.

Where it all began

Imperative programming is, at its heart, nothing more than writing out the steps a program should take. Really, that’s all there is to it. They’re executed one after the other, with occasional branching or looping thrown in for added control. Assembly language, obviously, is the original imperative language. It’s a direct translation of the computer’s instruction set and the order in which those instructions are executed. (Out-of-order execution changes the game a bit, but not too much.)

The idea of functions or subroutines doesn’t change the imperative nature of such a program, but it does create the subset of structured or procedural programming languages, which are explicitly designed for the division of code into self-contained blocks that can be reused.
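
As a tiny illustration (mine, in Python purely for brevity), this is the imperative style at its plainest: a sequence of steps, some mutable state, a loop, and a procedure wrapping it all up so it can be reused.

    # Imperative/procedural style: explicit steps mutating state, in order.
    def average_temperature(readings):
        total = 0.0
        count = 0
        for value in readings:   # walk the input one step at a time
            total += value
            count += 1
        if count == 0:
            return None
        return total / count

    print(average_temperature([12.5, 14.0, 11.2]))   # 12.566666666666666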

The list of imperative languages includes all the old standbys: C, Fortran, Pascal, etc. Notice how all these are really old? Well, there’s a reason for that. Structured programming dates back decades, and all the important ideas were hashed out long before most of us were born. That’s not to say that we’ve perfected imperative programming. There’s always room for improvement, but we’re far into the realm of diminishing returns.

Today, imperative programming is looked down upon by many. It’s seen as too simple, too dumb. And that’s true, but it’s far from useless. Shell scripts are mostly imperative, and they’re the glue that holds any operating system together. Plenty of server-side code gets by just fine, too. And then there’s all that “legacy” code out there, some of it still in COBOL…

The imperative style has one significant advantage: its simplicity. It’s easy to trace the execution of an imperative program, and they’re usually going to be fast, because they line up well with the computer’s internal methods. (That was C’s original selling point: portable assembly language.) On the other hand, that simplicity is also its biggest weakness. You need to do a lot more work in an imperative language, because they don’t exactly have a lot of features.

Objection!

In the mid-90s, object-oriented programming (OOP) got big. And I do mean big. It was all the rage. Books were written, new languages created, and every coding task was reimagined in terms of objects. Okay, but what does that even mean?

OOP actually dates back much further than you might think, but it only really began to get popular with C++. Then, with Java, it exploded, mainly from marketing and the dot-com bubble. The idea that got so hot was that of objects. Makes sense, huh? It’s right there in the name.

Objects, reduced to their most basic, are data structures that are deeply entwined with code. Each object is its own type, no different from integers or strings, but they can have customized behavior. And you can do things with them. Inheritance is one of them: creating a new type of object (class) that mimics an existing one, but with added functionality. Polymorphism is the other: functions that work differently depending on what type of object they’re acting on. Together, inheritance and polymorphism work to relieve a huge burden on coders, by making it easier to work with different types in the same way.
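
To make that concrete, here’s a small sketch of both ideas at once (mine, in Python rather than one of the “classic” OOP languages): a couple of classes inherit from a base, and one function handles any of them polymorphically.

    # Inheritance and polymorphism in miniature.
    class Shape:
        def area(self):
            raise NotImplementedError

    class Rectangle(Shape):          # inherits from Shape
        def __init__(self, w, h):
            self.w, self.h = w, h
        def area(self):              # overrides the base behavior
            return self.w * self.h

    class Square(Rectangle):         # inherits again, adding a constraint
        def __init__(self, side):
            super().__init__(side, side)

    def total_area(shapes):          # polymorphic: any Shape will do
        return sum(s.area() for s in shapes)

    print(total_area([Rectangle(2, 3), Square(4)]))   # 22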

That’s the gist of it, anyway. OOP, because of its position as the dominant style when so much new blood was entering the field, has a ton of information out there. Design patterns, best practices, you name it. And it worked its way into every programming language that existed 10-15 years ago. C++, Java, C#, and Objective-C are the most used of the “classic” OOP languages today, although every one of them offers other options (including imperative, if you need it). Most scripting-type languages have it bolted on somewhere, such as Python, Perl, and PHP. JavaScript is a bit special, in that it uses a different kind of object-oriented programming, based on prototypes rather than classes, but it’s no less OOP.

OOP, however, has a couple of big disadvantages. One, it can be confusing, especially if you use inheritance and polymorphism to their fullest. It’s not uncommon, even in the standard libraries of Java and C#, to have a class that inherits from another class, which inherits from another, and so on, 10 or more levels deep. And each subclass can add its own functions, which are passed on down the line. There’s a reason why Java and C# are widely regarded as having some of the most complete documentation of any programming language.

The other disadvantage is the cause of why OOP seems to be on the decline. It’s great for code reuse and modeling certain kinds of problems, but it’s a horrible fit for some tasks. Not everything can be boiled down to objects and methods.

What’s your function?

That leads us to the current hotness: functional programming, or FP. The functional fad started as a reaction to overuse of OOP, but (again) its roots go way back.

While OOP tries to reduce everything to objects, functional programming, shockingly enough, models the world as a bunch of functions. Now, “function” in this context doesn’t necessarily mean the same thing as in other types of programming. Usually, for FP, these are mathematical functions: they have one output for every input, no matter what else is happening. The ideal, called pure functional programming, is a program free of side effects, such that it is entirely deterministic. (The problem with that? “Side effects” includes such things as user input, random number generation, and other essentials.)
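
Here’s what that looks like in miniature (my sketch, and in Python, which, as noted below, isn’t the most enthusiastic FP language): small pure functions, nothing mutated, and functions handed to other functions.

    # Functional style: pure functions composed together, no state mutated.
    from functools import reduce

    def square(x):
        return x * x                      # same input, same output, no side effects

    def total(xs):
        return reduce(lambda a, b: a + b, xs, 0)

    def sum_of_squares(xs):
        return total(map(square, xs))     # functions passed to functions

    print(sum_of_squares([1, 2, 3, 4]))   # 30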

FP has had its biggest success with languages like Haskell, Scala, and—amazingly enough—JavaScript. But functional, er, functions have spread to C++ and C#, among others. (Python, interestingly, has rejected, or at least deprecated, some functional aspects.)

It’s easy to see why. FP’s biggest strength comes from its mathematical roots. Logically, it’s dead simple. You have functions, functions that act on other functions, functions that work with lists, and so on. All of the basic concepts come straight from math, and mistakes are easily found, because they stick out like a sore thumb.

So why hasn’t it caught on? Why isn’t everybody using functional programming? Well, most people are, just in languages that weren’t entirely designed for it. The core of FP is fairly language-agnostic. You can write functions without side effects in C, for example, it’s just that a lot of people don’t.

But FP isn’t everywhere, and that’s because it’s not really as simple as its proponents like to believe. Like OOP, not everything can be reduced to a network of functions. Anything that requires side effects means we have to break out of the functional world, and that tends to be messy. (Haskell’s method of doing this, the monad, has become legendary in the confusion it causes.) Also, FP code really, really needs a smart interpreter, because its mode of execution is so different from how a computer runs, and because it tends to work at a higher level of abstraction. But interpreters are universally slower than native, relegating most FP code to those higher levels, like the browser.

Your language here

Another programming paradigm that deserves special mention is generic programming. This one’s harder to explain, but it goes something like this: you write functions that accept a set of possible types, then let the compiler figure out what “real” type to use. Unlike OOP, the types don’t have to be related; anything that fits the bill will work.

Generic programming is the idea behind C++ templates and Java or C# generics. It’s also really only used in languages like that, though many languages have “duck-typing”, which works in a similar fashion. It’s certainly powerful; most of the C++ standard library uses templates in some fashion, and that percentage is only going up. But it’s complicated, and you can tie your brain in knots trying to figure out what’s going on. Plus, templates are well-known time sinks for compilers, and they can increase code size by some pretty big factors. Duck-typing, the “lite” form of generic programming, doesn’t have either problem, but it can be awfully slow, and it usually shows up in languages that are already slow, only compounding the problem.
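
A rough Python illustration of the duck-typed flavor (mine; in a templated language, the compiler would check the same requirements at compile time instead):

    # A duck-typed "generic" function: anything with len() and indexing works.
    def middle(seq):
        return seq[len(seq) // 2]

    print(middle([3, 1, 4, 1, 5]))   # 4    (a list)
    print(middle("generic"))         # e    (a string)
    print(middle((10, 20, 30)))      # 20   (a tuple)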

What do I learn?

There’s no one right way to code. If we’ve learned anything in the 50+ years the human race has been doing it, it’s that. From a computer science point of view, functional is the way to go right now. From a business standpoint, it’s OOP all the way, unless you’re looking at older code. Then you’ll be going procedural.

And then there are all those I didn’t mention: reactive, event-driven, actor model, and dozens more. Each has its own merits, its own supporters, and languages built around it.

My best advice is to learn whatever your preferred language offers first. Then, once you’re comfortable, move on, and never stop learning. Even if you’ll never use something like Eiffel in a serious context, it has explored an idea that could be useful in the language you do use. (In this case, contract programming.) The same could be said for Erlang, or F#, or Clojure, or whatever tickles your fancy. Just resist the temptation to become a zealot. Nobody likes them.

Now, some paradigms are harder than others, in my opinion. For someone who started with imperative programming, the functional mindset is hard to adjust to. Similarly, OOP isn’t easy if you’re used to Commodore BASIC, and even experienced JavaScript programmers are tripped up by prototypes. (I know this one first-hand.)

That’s why I think it’s good that so many languages are adopting a “multi-paradigm” approach. C++ really led the way in this, but now it’s popping up everywhere among the “lower” languages. If all paradigms (for some suitable value of “all”) are equal, then you can use whatever you want, whenever you want. Use FP for the internals, wrapped by an event-driven layer for I/O, calling OOP or imperative libraries when you need them. Some call it a kitchen-sink approach, but I see programmers as like chefs, and every chef needs a kitchen sink.

Novel Month 2015 – Day 26, early morning

And…that’s a wrap.

First of all: Happy Thanksgiving! Second of all: it’s done!

That’s right, the story is finished, and with 4 days to spare. Never thought I could do that, but now I know I can. I’d like to do some nice stats and all that other stuff, but I’m done with writing for a while, and I’m going to take a much-needed break. One filled with turkey and dressing and potatoes and pie and all that other wonderful food that only comes around once a year.

Regular posts should start back up next week. December 1 is a Tuesday, and I don’t normally post on that day, so we’ll get back in the swing of things on Wednesday the 2nd. Of course, that’s assuming that these software updates I’ve been putting off for a month don’t go haywire. Debian Testing is a harsh mistress.

This session’s word count: 2,629
Final word count: 54,030