Let’s make a language – Part 9b: Prepositional phrases (Conlangs)

Prepositional phrases, despite how important they are to expressing oneself in a language, don’t have all that much grammar. So we can combine both Isian and Ardari into one post, and we’ll even have time to add in a bit about adverbs while we’re at it.

Isian

Isian uses postpositions instead of prepositions, which is a change that might be hard to get used to. When they’re used to modify a noun, they usually follow it. If they’re supposed to modify a verb, then they’ll usually come at the end of a sentence, but not always. Sometimes, they’ll go right after the verb, and this signifies a greater emphasis on the phrase. It’s all in how you want to say it.

Simple nouns or noun phrases are easy to use with a postposition. Just put it after the phrase: e talar i “in the house”; sir mi fo “from my heart”. (I’ll show a whole bunch more at the end of the post.)

If we want to add in a bit of action to our phrase, then we have a special verbal marker, cu, that indicates something like an infinitive (“to go”) or a gerund (“going”): cu oca anos “without asking”. It’s not only used with postpositions, and we’ll see it pop up a few times later on.

Adjectives, as we saw a few posts ago, usually can’t occur without a noun in Isian. Well, here’s one of the cases where they can. Using an adjective with the special postposition hi (and only this one; it doesn’t work with others) creates a kind of adverb: ichi “beautiful”, ichi hi “beautifully”.

The postposition hi works with nouns, too: sam hi “manly, like a man”. The English translation shows an article, but Isian doesn’t need (and can’t use) one in this situation.

Ardari

As a head-final language, you’d expect Ardari to have postpositions, too, and you’d be right: tyèketö wi “in the house”.

The grammar here isn’t that much different from Isian. Noun phrases in postpositionals work in largely the same way, with one major difference. Remember that Ardari has case for its nouns. What case do we use for a postpositional phrase?

Usually, the accusative is the right answer. But a few postpositions require their nouns to appear in the dative. Some even change meaning based on the case of the noun. For example, wi used with the accusative means “in”, as we saw in tyèketö wi above. But use it in the dative (tyèkètö wi, note the vowel change), and the meaning becomes “into the house”. It’s a subtle difference, both in form and meaning, but it is indeed a difference.

Using a verb in a postpositional phrase isn’t that hard. The particle ky goes after the (uninflected) verb, and then the postposition goes after that: brin ky vi “while walking”; chin ky nètya “after going”.

Making an adverb out of a noun or phrase uses this same little word, but with the copula verb èll-: kone èll ky “like a man”. (You could say that èll ky is the Ardari adverb marker, but it’s not that simple.) Simple adjectives, on the other hand, can be used directly, so ojet can mean “sweet” or “sweetly”, depending on whether it modifies a noun or a verb: ojeta obla “sweet water”; ojet ajang ky “singing sweetly”.

The list

As promised, here’s a brief list of some of the most common English prepositions and their closest equivalents in Isian and Ardari.

English      Isian   Ardari
above        apay    aj
across       sos     ori
after        eb      nètya
against      ansir   eka
around       oto     òs
at           ni      äl
before       pane    jo
behind       biso    ab
below        didal   ku
by           hoy     sy
for          ir      da
from         fo      tov
in           i       wi
in front of  ihamo   kulyi
into         si      wi +DAT
of           o       me
on           od      oj
onto         ores    oj +DAT
out of       way     zho +DAT
through      aju     tutwi
to/toward    es      lim
until        nobes   nyon
with         was     chès
without      anos    achèsu

Where “+DAT” appears after a word in the Ardari column, it means that postposition requires a dative noun. Other than that, there’s not much else to say about the table.

Next up

To close out the year, we’ll be looking at relative clauses. Once that’s done, we should have enough of the blanks filled in that 2016 can begin with a bang. Since I write these beforehand, I won’t be taking off for Christmas or New Year’s, because those posts will already be done and waiting.

Sick Day

Since I’ve had a pretty bad cold these last few days (I’m writing this on Monday, but I have no real reason to think I’ll be over it in the next 48 hours), and since I don’t already have posts queued for Wednesday, I’ve decided to take the day off. Sorry ’bout that, but that’s the way it goes sometimes.

I’ve had this Friday’s post written for over a month, so no worries there, and I did have two worldbuilding posts already, so I’m alright for next Monday. It’s just today that got lost in the shuffle. Oh, well. ‘Tis the season, and all that.

Colonization and the New World

It’s common knowledge that the Old World of Europe, Asia, and Africa truly met the New World of the Americas in 1492, when Columbus landed in the Caribbean. Of course, we now know that there was contact before that, such as the Vikings in Newfoundland about a thousand years ago. But Columbus and those who followed him—Cortés, Pizarro, de Soto, Cabot, and all those other explorers and conquerors Americans learn about in history class—those were the ones who truly made lasting contact between the two shores of the Atlantic.

Entire volumes have been written over the last five centuries about the exploration, the conquest, the invasion of the Americas. There’s no need to repeat any of it here. But the subject of the New World is one that doesn’t seem to get a lot of exposure in the world of fiction, with the notable exception of science fiction. And I think that’s a shame, because it’s an awfully interesting topic for a story. It’s full of adventure, of gaining knowledge, of conflict and warfare. Especially for American writers (not limited to the United States, but all of North and South America), it’s writing about the legacy we inherited, and it’s odd that we would rather tell stories about the history of the other side of the ocean.

Written by the victors

Of course, one of the main reasons why we don’t write many stories about exploration and colonization is political. We know a lot about the Spaniards and Englishmen and Frenchmen that discovered (using that term loosely) the lands of America. We have written histories of those first conquistadors, of those that came after, and of the later generations that settled in the new lands. We don’t, however, have much of anything from the other side.

A lot of that is due to the way first contact played out. We all know the story. Columbus discovered his Indians (to use his own term), Cortés played them against each other to conquer them, and smallpox decimated them. Those that survived were in no position to tell their tale. Most of them didn’t have a familiar system of writing; most of those written works that did exist were destroyed. And then came centuries of subjugation. Put that all together, and it’s no wonder why we only have one side of the tale of the New World.

But this already suggests story possibilities. We could write from one point of view or the other (or both, for that matter), setting our tale in the time of first contact or shortly after, in the upheaval that followed. This is quite popular in science fiction, where the “New World” is really a whole new world, a planet that was inhabited when we arrived. That’s the premise of Avatar, for example.

Life of a colony

Colonization has existed for millennia, but it’s only since 1492 that it became such a central part of world history. The Europeans that moved into the Americas found them filled with wonders and dangers. For the Spanish, the chief problem—aside from the natives—was the climate, as Mexico, Central America, and the Caribbean mostly fall into the tropical belt, far removed from mid-latitude Spain.

The English had it a little better; the east coast of the United States isn’t all that different from England, except that the winters can be harsher. (This was even more the case a few hundred years ago, in the depths of the Little Ice Age.) It’s certainly easier to go from York to New York than Madrid to Managua.

No matter the climate, though, colonists had to adapt. Especially in those times, when a resupply voyage was a long and perilous journey, they had to learn to live off the land. And they did. They learned about the new plants (corn, potatoes, tomatoes, and many more) and animals (bison and llamas, to name the biggest examples), and they mapped out river systems and mountain chains. And we have reaped the benefits ever since.

Building a colony can be fun in an interactive setting; Colonization wouldn’t exist otherwise. For a novel or visual work, it’s a little harder to make work, because the idea is that a colony starts out exciting and new, but it needs to become routine. Obviously, if it doesn’t, then that’s a place where we can find a story. Paul Kearney’s Monarchies of God is a great series that has a “settling new lands” sequence. In the science fiction realm of colonizing outer space, you also have such works as Kim Stanley Robinson’s Red Mars (and its colorful sequels).

Terra nullius

Whenever people moved into new land, there was always the possibility that they were the first ones there. It happened about 20,000 years ago in Alaska, about 50,000 in Australia, and less than 1,000 in Hawaii. Even in the Old World, there were firsts, sometimes even in recorded history. Iceland, for example, was uninhabited all the way through Roman times. And in space, everywhere is a first, at least until we find evidence of alien life.

Settling “no man’s land” is different from settling in land that’s already inhabited, and that would show in a story with that setting. There are no outsiders to worry about. All conflict is either internal to the colonists’ population or environmental. That makes for a harder story to write, I think, but one more suited to character drama and the extended nature of books and TV series. It doesn’t have to be entirely without action, though; something like a natural disaster is simply more likely than war.

This is one place where we can—must—draw the distinction between space-based sci-fi and earthly fiction or fantasy. On earth (or a similar fictitious world), we’re not alone. There are animals, plants, pests everywhere we go. We have sources of food and water, but also of disease. In deep space, such as a story about colonizing the asteroid belt, there’s nothing out there. Nothing living, at least. Settlers would have to bring their own food, their own water, their own shelter. They would need to create a closed, controlled ecosystem. But that doesn’t leave much room for the “outside” work of exploration, except as a secondary plot.

Go forth

I’m not ashamed to admit that I could read an entire book about nothing but the early days of a fictional colony, whether in the Americas or on an alien planet. I’ll also admit that I’m not your average reader. Most people want some sort of action, some drama, some reason for being there in the first place. And there’s nothing wrong with that.

But let’s look at that question. Why does the colony exist at all? The Europeans were looking for wealth at first, with things like religious freedom and manifest destiny coming later on. The exploration of space appears to be headed down the same path, with commercial concerns taking center stage, though pure science is another competitor. Even simple living space can be a reason to venture forth. That seems to be the case for the Vikings, and plenty of futuristic stories posit a horribly overcrowded Earth and the need to claim the stars.

Once you have a reason for having a colonial settlement, then you can turn to its nature. The English made villages and towns, the French trading posts. Antarctica isn’t actually settled—by international agreement, it can’t be—but the scientific outposts there point to another possibility. If there are preexisting settlements, like native cities, then there’s the chance that the colonists might move into one of them instead of setting up their own place. That’s basically what happened to Tenochtitlan, now known as Mexico City.

Colonies are interesting, both in real history and in fiction. They can work as settings in many different genres, including space opera, fantasy, steampunk (especially the settling of the Wild West), and even mystery (we still don’t know what really happened at Roanoke Island). Even just a colonial backdrop can add flavor to a story, giving it an outside pressure, whether by restless natives or the cold emptiness of space. A colony is an island, in a sense, an island in a sea of hostility, fertile ground for one’s imagination.

Let’s make a language – Part 9a: Prepositional phrases (Intro)

We’ve made quite a bit of progress with our languages since the beginning. They’ve got nouns, pronouns, verbs, and adjectives. We’ve given them plenty of grammar to connect these bits and turn them into sentences. But those sentences are pretty bland. And that’s because there’s no real way for our conlangs to express anything but the most basic of semantic relationships: subject and object. To remedy that, we’re going to talk about a new type of phrase, the prepositional phrase.

English, for example

Prepositional phrases are everywhere you look in English. That last sentence, in fact, ended with one: “in English”. This type of phrase isn’t absolutely necessary to the grammatical “core” of a sentence. It’s extra information that fills in the blanks, giving us more detail about a situation. Prepositional phrases are the way we can learn wheres and whens and hows about something. They’re filler, but that’s exactly what we need.

Looking at English—we’ve got a whole post full of examples, so there’s no reason not to—we can see the general structure of a prepositional phrase. It starts with a preposition, naturally enough, and that’s really the head of the phrase. Prepositions are those words like “in”, “of”, or “below”; they can specify position, location, time, possession, and about a hundred other ideas that are impossible (or just cumbersome) to express with only a verb, subject, and object.

Besides the preposition, the rest of the prepositional phrase is, well, a phrase. It’s usually a noun phrase, but English allows plenty of other options, like gerunds (“in passing”) and adverbs (“until recently”). In theory, there’s nothing stopping a language from letting whole sentences be put into prepositional phrases (“after I came home”), but phrases like that are technically a different kind that we’ll see in a future post. For now, we’ll stick with the more noun-like examples.

Changing it up

English isn’t the only (or even the best) example of how to do prepositional phrases. Other languages do things differently. So, let’s take a look at how they do it.

The first thing to say is that this post’s whole premise is a lie. You don’t need prepositions. You don’t have to have a word that precedes a noun phrase to give more information about a sentence. No, it’s actually a bit more common (according to WALS Chapter 85) to put the preposition after the noun phrase. In that case, it’s not really proper to call it a preposition anymore; pre-, after all, means “before”. So linguists call these words postpositions. Japanese, for instance, uses postpositions, and any anime lover knows of の (no). Thus, “preposition” is technically a misnomer, and the more general term (which we won’t use here) is adposition.

Even rarer types exist, too. There’s the inposition, which pops up in a few languages. It goes somewhere in the middle of the noun phrase. The circumposition has bits that go on either side of the phrase, and this (so Wikipedia says) appears in Pashto and Kurdish. It’s sort of like the French phrasing seen in je ne sais pas. And then there are a few oddballs that don’t seem to have any of these, using verbs or case markers or something like that as stand-ins.

What can fit in the “phrase” part of “prepositional phrase” varies from one language to the next, as well. Noun phrases are let in pretty much everywhere. Not every language has adverbs, though, unless you’re of the belief that any word that doesn’t fit in another class is automatically an adverb. Gerunds aren’t ubiquitous, either. And, of course, languages can go the other way, allowing more possibilities than English.

A note on adverbs

We haven’t really discussed adverbs much, and there’s a good reason: nobody can agree on just what an adverb really is. Sure, it’s easy to say that adverbs are to verbs what adjectives are to nouns, but that doesn’t help much. If you look at it that way, then a prepositional phrase can also be an adverbial phrase. Similarly, some languages—Irish, for example—create regular adverbs by means of a prepositional phrase, their equivalent to English “-ly”. On the other hand, it’s entirely possible for a language to give its adjectives the ability to function as adverbs, as in Dutch.

For a conlang, adverbs are a hard thing to get right. It’s easy to be regular, much harder to be naturalistic. Esperanto mostly goes the regular route, with its productive -e suffix, but there are a few exceptions like hodiaŭ “today”. The best advice I can give here is to have a little bit of both. Have a regular way to derive adverbial meaning from adjectives, but also have a few words that aren’t derived.

In conclusion

How your conlang handles prepositions isn’t something I can tell you. I can give you a few pointers, though. First, there’s a tendency for head-final languages (SOV word order, adjectives preceding nouns, etc.) to use postpositions. This is by no means universal, but it’s something to think about.

And the end of that last sentence brings up another point that often gets overlooked. Can you end a sentence with a preposition? You can in English. The only reason it’s considered “bad” is the influence of Latin, which doesn’t allow it. Clearly, a postposition can end a sentence, by its very nature, but the waters are murkier for those that come before. When we get to relative clauses in the next part of the series, the question will be more relevant, so it might be good to have an answer when that time comes.

Other things to consider are what types of phrases can go inside a prepositional phrase, where those phrases fit in a sentence (before the verb, after it, at the end, or wherever), case marking (some languages have prepositions that require their nouns to go in a specific case, while case-heavy languages can get by with a smaller set of prepositions), person marking (a few languages require this on the preposition), and the relation between prepositional and other types of phrases. And, of course, the main question: do you have prepositions at all?

Once you’re through all that, you’ve greatly increased the expressive power of your language. Not only can you tell what happened and who did it, but now you can specify where, when, why, and how.

Programming paradigm primer

“Paradigm”, as a word, has a bad reputation. It’s one of those buzzwords that corporate people like to throw out to make themselves sound smart. (They usually fail.) But it has a real meaning, too. Sometimes, “paradigm” is exactly the word you want. Like when you’re talking about programming languages. The alliteration, of course, is just an added bonus.

Since somewhere around the 1960s, there’s been more than one way to write programs, more than one way to view the concepts of a complex piece of software. Some of these have revolutionized the art of programming, while others mostly languish in obscurity. Today, we have about half a dozen of these paradigms with significant followings. They each have their ups and downs, and each has a specialty where it truly shines. So let’s take a look at them.

Now, it’s entirely possible for a programming language to use or encourage only a single paradigm, but it’s far more common for languages to support multiple ways of writing programs. Thanks to one Mr. Turing, we know that essentially all languages are, from a mathematical standpoint, equivalent, so you can create, say, C libraries that use functional programming. But I’m talking about direct support. C doesn’t have native objects (struct doesn’t count), for example, so it’s hard to call it an object-oriented language.

Where it all began

Imperative programming is, at its heart, nothing more than writing out the steps a program should take. Really, that’s all there is to it. They’re executed one after the other, with occasional branching or looping thrown in for added control. Assembly language, obviously, is the original imperative language. It’s a direct translation of the computer’s instruction set and the order in which those instructions are executed. (Out-of-order execution changes the game a bit, but not too much.)

The idea of functions or subroutines doesn’t change the imperative nature of such a program, but it does create the subset of structured or procedural programming languages, which are explicitly designed for the division of code into self-contained blocks that can be reused.
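Sketched in Python (the function and data here are invented purely for illustration), a procedural program is just explicit steps, gathered into a reusable block:

```python
def average(numbers):
    """Sum the values step by step, then divide -- plain imperative logic."""
    total = 0
    for n in numbers:        # explicit loop: one instruction after another
        total += n
    return total / len(numbers)

readings = [3, 5, 7, 9]
print(average(readings))  # -> 6.0
```

Nothing clever is happening: the state (total) changes on every pass through the loop, and execution proceeds top to bottom, exactly as written.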

The list of imperative languages includes all the old standbys: C, Fortran, Pascal, etc. Notice how all these are really old? Well, there’s a reason for that. Structured programming dates back decades, and all the important ideas were hashed out long before most of us were born. That’s not to say that we’ve perfected imperative programming. There’s always room for improvement, but we’re far into the realm of diminishing returns.

Today, imperative programming is looked down upon by many. It’s seen as too simple, too dumb. And that’s true, but it’s far from useless. Shell scripts are mostly imperative, and they’re the glue that holds any operating system together. Plenty of server-side code gets by just fine, too. And then there’s all that “legacy” code out there, some of it still in COBOL…

The imperative style has one significant advantage: its simplicity. It’s easy to trace the execution of an imperative program, and they’re usually going to be fast, because they line up well with the computer’s internal methods. (That was C’s original selling point: portable assembly language.) On the other hand, that simplicity is also its biggest weakness. You need to do a lot more work in an imperative language, because they don’t exactly have a lot of features.

Objection!

In the mid-90s, object-oriented programming (OOP) got big. And I do mean big. It was all the rage. Books were written, new languages created, and every coding task was reimagined in terms of objects. Okay, but what does that even mean?

OOP actually dates back much further than you might think, but it only really began to get popular with C++. Then, with Java, it exploded, mainly from marketing and the dot-com bubble. The idea that got so hot was that of objects. Makes sense, huh? It’s right there in the name.

Objects, reduced to their most basic, are data structures that are deeply entwined with code. Each object is its own type, no different from integers or strings, but they can have customized behavior. And you can do things with them. Inheritance is one of them: creating a new type of object (class) that mimics an existing one, but with added functionality. Polymorphism is the other: functions that work differently depending on what type of object they’re acting on. Together, inheritance and polymorphism work to relieve a huge burden on coders, by making it easier to work with different types in the same way.
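Here’s a minimal Python sketch of both ideas (the Shape classes are invented for the example): each subclass inherits from a common base, and the same area() call does something different for each type.

```python
class Shape:
    """Base class: defines the interface all shapes share."""
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):               # overrides the base behavior
        return self.side ** 2

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return 3.14159 * self.radius ** 2

# Polymorphism: the same call works on any Shape subtype.
shapes = [Square(2), Circle(1)]
print([round(s.area(), 2) for s in shapes])  # -> [4, 3.14]
```

The loop at the end never has to ask what kind of shape it’s holding; that dispatch is exactly the burden inheritance and polymorphism lift off the coder.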

That’s the gist of it, anyway. OOP, because of its position as the dominant style when so much new blood was entering the field, has a ton of information out there. Design patterns, best practices, you name it. And it worked its way into every programming language that existed 10-15 years ago. C++, Java, C#, and Objective-C are the most used of the “classic” OOP languages today, although every one of them offers other options (including imperative, if you need it). Most scripting-type languages have it bolted on somewhere, such as Python, Perl, and PHP. JavaScript is a bit special, in that it uses a different kind of object-oriented programming, based on prototypes rather than classes, but it’s no less OOP.

OOP, however, has a couple of big disadvantages. One, it can be confusing, especially if you use inheritance and polymorphism to their fullest. It’s not uncommon, even in the standard libraries of Java and C#, to have a class that inherits from another class, which inherits from another, and so on, 10 or more levels deep. And each subclass can add its own functions, which are passed on down the line. There’s a reason why Java and C# are widely regarded as having some of the most complete documentation of any programming language.

The other disadvantage is the cause of why OOP seems to be on the decline. It’s great for code reuse and modeling certain kinds of problems, but it’s a horrible fit for some tasks. Not everything can be boiled down to objects and methods.

What’s your function?

That leads us to the current hotness: functional programming, or FP. The functional fad started as a reaction to overuse of OOP, but (again) its roots go way back.

While OOP tries to reduce everything to objects, functional programming, shockingly enough, models the world as a bunch of functions. Now, “function” in this context doesn’t necessarily mean the same thing as in other types of programming. Usually, for FP, these are mathematical functions: they have one output for every input, no matter what else is happening. The ideal, called pure functional programming, is a program free of side effects, such that it is entirely deterministic. (The problem with that? “Side effects” includes such things as user input, random number generation, and other essentials.)

FP has had its biggest success with languages like Haskell, Scala, and—amazingly enough—JavaScript. But functional, er, functions have spread to C++ and C#, among others. (Python, interestingly, has rejected, or at least deprecated, some functional aspects.)

It’s easy to see why. FP’s biggest strength comes from its mathematical roots. Logically, it’s dead simple. You have functions, functions that act on other functions, functions that work with lists, and so on. All of the basic concepts come straight from math, and mistakes are easily found, because they stick out like a sore thumb.
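A tiny Python sketch (illustrative only) shows the flavor: pure functions handed to higher-order functions like map, filter, and reduce, with no state mutated anywhere along the way.

```python
from functools import reduce

def double(x):
    return 2 * x          # pure: same output for the same input, always

def is_even(x):
    return x % 2 == 0     # also pure: a simple predicate

nums = [1, 2, 3, 4, 5]

# Build a pipeline of functions instead of a loop with mutable state.
doubled_evens = list(map(double, filter(is_even, nums)))
total = reduce(lambda acc, x: acc + x, doubled_evens, 0)

print(doubled_evens, total)  # -> [4, 8] 12
```

Because every piece is deterministic, a mistake in any one function is easy to isolate and test on its own.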

So why hasn’t it caught on? Why isn’t everybody using functional programming? Well, most people are, just in languages that weren’t entirely designed for it. The core of FP is fairly language-agnostic. You can write functions without side effects in C, for example; it’s just that a lot of people don’t.

But FP isn’t everywhere, and that’s because it’s not really as simple as its proponents like to believe. Like OOP, not everything can be reduced to a network of functions. Anything that requires side effects means we have to break out of the functional world, and that tends to be messy. (Haskell’s method of doing this, the monad, has become legendary in the confusion it causes.) Also, FP code really, really needs a smart interpreter, because its mode of execution is so different from how a computer runs, and because it tends to work at a higher level of abstraction. But interpreters are universally slower than native, relegating most FP code to those higher levels, like the browser.

Your language here

Another programming paradigm that deserves special mention is generic programming. This one’s harder to explain, but it goes something like this: you write functions that accept a set of possible types, then let the compiler figure out what “real” type to use. Unlike OOP, the types don’t have to be related; anything that fits the bill will work.

Generic programming is the idea behind C++ templates and Java or C# generics. It’s also really only used in languages like that, though many languages have “duck-typing”, which works in a similar fashion. It’s certainly powerful; most of the C++ standard library uses templates in some fashion, and that percentage is only going up. But it’s complicated, and you can tie your brain in knots trying to figure out what’s going on. Plus, templates are well-known time sinks for compilers, and they can increase code size by some pretty big factors. Duck-typing, the “lite” form of generic programming, doesn’t have either problem, but it can be awfully slow, and it usually shows up in languages that are already slow, only compounding the problem.
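In Python terms (an illustrative sketch, not any particular library’s API), the same idea shows up as a type variable that the checker fills in, while duck typing handles the runtime side for free:

```python
from typing import List, TypeVar

T = TypeVar("T")  # a placeholder type, like a C++ template parameter

def first_and_last(items: List[T]) -> List[T]:
    """Works for a list of any element type; the checker infers T per call."""
    return [items[0], items[-1]]

# The elements don't have to be related types -- anything that fits works.
print(first_and_last([1, 2, 3]))        # -> [1, 3]
print(first_and_last(["a", "b", "c"]))  # -> ['a', 'c']
```

Unlike C++ templates, nothing is instantiated or compiled per type here; Python just doesn’t check until runtime, which is the “lite” duck-typed trade-off described above.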

What do I learn?

There’s no one right way to code. If we’ve learned anything in the 50+ years the human race has been doing it, it’s that. From a computer science point of view, functional is the way to go right now. From a business standpoint, it’s OOP all the way, unless you’re looking at older code. Then you’ll be going procedural.

And then there are all those I didn’t mention: reactive, event-driven, actor model, and dozens more. Each has its own merits, its own supporters, and languages built around it.

My best advice is to learn whatever your preferred language offers first. Then, once you’re comfortable, move on, and never stop learning. Even if you’ll never use something like Eiffel in a serious context, it has explored an idea that could be useful in the language you do use. (In this case, contract programming.) The same could be said for Erlang, or F#, or Clojure, or whatever tickles your fancy. Just resist the temptation to become a zealot. Nobody likes them.

Now, some paradigms are harder than others, in my opinion. For someone who started with imperative programming, the functional mindset is hard to adjust to. Similarly, OOP isn’t easy if you’re used to Commodore BASIC, and even experienced JavaScript programmers are tripped up by prototypes. (I know this one first-hand.)

That’s why I think it’s good that so many languages are adopting a “multi-paradigm” approach. C++ really led the way in this, but now it’s popping up everywhere among the “lower” languages. If all paradigms (for some suitable value of “all”) are equal, then you can use whatever you want, whenever you want. Use FP for the internals, wrapped by an event-driven layer for I/O, calling OOP or imperative libraries when you need them. Some call it a kitchen-sink approach, but I see programmers as like chefs, and every chef needs a kitchen sink.

Novel Month 2015 – Day 26, early morning

And…that’s a wrap.

First of all: Happy Thanksgiving! Second of all: it’s done!

That’s right, the story is finished, and with 4 days to spare. Never thought I could do that, but now I know I can. I’d like to do some nice stats and all that other stuff, but I’m done with writing for a while, and I’m going to take a much-needed break. One filled with turkey and dressing and potatoes and pie and all that other wonderful food that only comes around once a year.

Regular posts should start back up next week. December 1 is a Tuesday, and I don’t normally post on that day, so we’ll get back in the swing of things on Wednesday the 2nd. Of course, that’s assuming that these software updates I’ve been putting off for a month don’t go haywire. Debian Testing is a harsh mistress.

This session’s word count: 2,629
Final word count: 54,030

Novel Month 2015 – Day 25, early morning

After midnight, so the day changes.

All that’s left is the last third of Chapter 8. It’s basically an epilogue, setting things up for Episode III, whenever I decide to write that. (Maybe I’ll start next month.)

I’m going to bed for the night. When I wake up, there’s only one thing left to do: Finish the drill.

This session’s word count: 880
Total word count: 51,401

Novel Month 2015 – Day 24, evening

The first goal, 50,000 words in a month, is done, and with time to spare.

That doesn’t mean this is the end. No, Chapter 8 is only halfway finished, so the ultimate goal of a complete story in a month remains on the table. I plan on writing more tonight, so I’ll be inching closer to that one. I wanted to post this now, though, as proof.

This session’s word count: 844
Total word count: 50,521

Note that this word count is as counted in Vim, the editor I use for writing. It may not be 100% accurate, but I think there’s enough leeway that I can confidently say I have reached 50K.

Novel Month 2015 – Day 23, evening

So close, yet so far.

Chapter 8 is getting there. Call it a third of the way done. It’s the first and only recycled POV of this particular story, the same character as Chapter 1. Not sure why I picked her, but that’s how it happened.

Looking at what I have so far, I’d say about two more full writing days should do it. Of course, real life can intervene, and two “full” days can easily become six “partial” days, so who knows? But look at my word count! So…close…almost…there…

This session’s word count: 2,350
Total word count: 49,687

Novel Month 2015 – Day 22, late night

That’s a wrap for Chapter 7, and for November 22. Huh, 22, 7. Almost like…pi. I better stop, before I make myself hungry.

Anyway, it’s coming down to the wire. Eight days, one chapter. 50K shouldn’t be too hard, but bringing the full story to a conclusion might be. We’ll just have to wait and see.

This session’s word count: 998
Total word count: 47,337