Social Liberty: Definitions

First, let us call the fundamental properties of good government the Principles. These are so central to the idea of Social Liberty that they literally cannot be violated within its framework. A Principle is inviolate by definition, for changing it would change the nature of the space. Principles are thus axiomatic; the six I have chosen define the very concept of Social Liberty.

Second, a Right is a property taken to be inherent to the system. It cannot be abridged, violated, or removed except by another Right or as punishment. Rights in Social Liberty include many of those taken to be inalienable and self-evident, such as freedom of expression, freedom of religion, the right to bear arms, and the right to a trial. A new right can be created, as described by the Principle of Evolution, in a manner dependent upon the system of government.

Third, a Law may be defined as a requirement that must be followed by the citizens of a state. Laws may not be written to directly infringe Rights, but they may define the boundaries of those Rights, and violation of a Law may cause Rights to be lost temporarily. In addition, Laws may be used to clarify cases where two different Rights are in conflict. For example, a Law cannot revoke a Right of Free Speech, but it is allowed to spell out those instances where that Right is overruled by a Right of Privacy.

Fourth, a Responsibility is a duty impressed upon citizens as part of the social contract they form with their government. Laws may bring about Responsibilities, but these may not permanently deprive a citizen of a Right. Responsibilities may also arise logically from Principles; these Responsibilities may violate Laws and limit Rights.

Fifth, a Privilege is subordinate to a Law. It is a lesser right created by the interaction of Laws and Rights, and it may be removed by either. New Rights are rare, but new Privileges may be invented more rapidly, under the Principle of Evolution.

Finally, let us define a Definition as a clarification of the meaning of a word as it is used in the text of a Right, Responsibility, Law, or Privilege.

The Doctrine of Social Liberty is made up of these Principles, Rights, Responsibilities, Laws, and Definitions. They are written in plain language, because understanding the principles of one’s state is necessary for a citizen. Each is given a name for easier reference, as in this example:

Right of Arms — All persons are permitted to own and bear personal arms intended for their own defense.

In later Laws or other text, this can be referred to simply as “the Right of Arms”. For example:

Law of Disallowed Weapons — Any weapon capable of killing multiple people with a single shot, or requiring the operation of more than one person, shall not be considered a personal arm under the Right of Arms.

With these definitions, we are able to construct the structure necessary to create a nation-state that fulfills the goals of Social Liberty.

On ancient times

The medieval era gets a lot of screen time, and for good reason. Medieval Europe has a kind of romantic appeal, with its knights and chivalry and castles, its lack of guns and bombs and cars and planes. It’s our collective nostalgic getaway. Fantasy, of course, revels in the Middle Ages; the “default” fantasy setting is England circa 1200, at the height of the era. But any kind of fiction can take us to medieval times. We have our Game of Thrones and Lord of the Rings, yes, but also our Vikings and The Last Kingdom, our Braveheart and Excalibur.

But what about earlier times? What about the days before the castles and cathedrals were built, before knights wrote their code of chivalry? What about the ancient era?

Defining the ancient

First, let’s define what we mean by “ancient”. We can consider the Middle Ages to end in 1453, with the fall of Constantinople; the refugees fleeing into Europe from that city helped spark the Renaissance. The beginning of the era, however, is harder to characterize. That’s mostly because of the Dark Ages, those centuries where nothing much happened. (Except when it did.) Records are fairly scanty in the period before Charlemagne—before about 800—but I think we can all agree that the Roman Empire really was ancient. Thus, the year of its fall in the west, 476 AD, marks a good boundary between the ancient and the medieval.

So we’ll say ancient times ended in 476. When did they begin? That’s a difficult question that gets to the heart of anthropology. Suffice it to say, the ancient era began with human civilization. Even if you’d prefer to subdivide (Bronze Age, Classical Era, etc.), it’s all ancient.

That leaves us with a grand sweep of history, possibly as much as ten thousand years! In our modern, fast-paced world, that seems like an eternity. Indeed, it is a long time, no matter how you look at it, and things changed remarkably from the beginning of the era to the end. Fifth-century Rome was nothing like Homer’s Greece, and neither really resembled Sargon’s Babylon from the eighth century BC, or Middle Kingdom Thebes a millennium before that, or the Stone Age settlement of Çatalhöyük. (Jericho has been occupied almost continuously since the beginning of the ancient era, and you can bet it went through a number of different looks through the ages.)

Writing an ancient-times work requires you to know the period. For the big names—Rome, Greece, Egypt, Mesopotamia—that’s relatively easy. These cultures all left a large body of written knowledge, in addition to easily excavated structures. We know a lot about how they lived, so a writer has more than enough to work with. Lesser-known peoples, such as the Etruscans, Harappans, or Picts, are much harder. Quite a few are only attested in a few sites, and those may be impossible to fully grasp. (On the other hand, that means no one can complain that you screwed up your history!)

The ancient world

Whichever part of Antiquity you choose as your setting, you’ll have to get to know the world. The hardest part is seeing what little you have to work with. Technology, for instance, is such an important part of our times that it’s hard enough to imagine the medieval world, with its lack of…well, everything we take for granted. And ancient times were even worse in that regard. At the earliest, we’re talking about days when the wheel was the height of invention. The Iron Age is so called because it’s defined by the working of iron, and for ancient smiths even that was awfully hard; steel of any consistent quality was beyond them.

But the ancients (especially the Romans) made great advances in their own right. Rome, of course, invented concrete, while the Egyptians built the pyramids and the Greeks had all their grand wonders. China built a Great Wall that, like the Maginot Line, never really lived up to its promise. These cultures of old also developed early sciences (the Greeks were pretty good at geometry, as you probably know) and quite a few other things. Our modern legal system also owes a lot to the Roman one, filtered through the Middle Ages though it was.

One part of life rises above everything else in the ancient world: religion. Every ancient culture placed a heavy focus on matters of religion. In fact, it’s often hard to untangle religion from other fields, because it permeated life. Science, government, art, and literature were all tools used for religion’s purposes. And it’s not hard to see why. When the world is so much bigger than you, than anything you know, and when it’s so wild and untamed compared to ours, where can you find any form of safety? Religion was so important that most archaeological sites are practically assumed to be religious in nature until proven otherwise.

Besides the sacred, many other forces worked to shape the ancient world. Remember that we’re dealing with a time before modern industry, but also before the developments of the Middle Ages. People had to look to their basic needs first: food, water, shelter. Survival. Only once they were certain they could survive could they work to thrive. Most people didn’t make it that far, however. Subsistence farming was a way of life. So was hunting and gathering, a practice preserved in only a very few spots today. Only a select few rose above that. True, there were more “middle-class” people in the great cities, particularly towards the end of the era, but urban life was for the 1%.

Travel was hard. Communities were small. People could go their whole lives—much shorter than our own, on average—without leaving their homeland. But that was probably for the best, as danger lurked everywhere. Disease, predators (on two legs or four), war, famine—all these can be subsumed under the one word that best describes the foreign: uncertainty.

The city on the hill

Rome was the big exception to this. Romans made a habit of being worldly, urbane, sophisticated. Their empire, as horrible as we’d consider it today, was the apex of ancient civilization. It removed the uncertainties of life in the era, replacing them with the rule of law, with connections and bureaucracy and, well, government. Earlier cultures built roads to connect towns, but Rome took that to an extreme. Aqueducts existed long before the Aqua Appia was built, but we associate these creations with the Romans because they perfected the art through repeated practice.

A story set in Imperial (or even Republican) Rome will still have most of the same aspects as something from earlier Antiquity, but it can also show a different way of life, one which has much more in common with our own. That’s probably why it has some of the best representation in fiction, including:

  • The HBO series Rome (naturally)
  • Shakespeare’s Julius Caesar, required reading for high-school English classes
  • Spartacus, whether in its original movie form or the stylized TV series from a few years ago
  • Ben-Hur, recently remade as a box-office flop
  • The Passion of the Christ, because the birth of Christianity came in a corner of the Roman Empire

By contrast, other ancient cultures show up less often in modern media. The Greeks get endless retellings of Alexander, the Iliad, and the wars against the Persians (e.g., 300). Ancient Egypt gets fanciful flicks like Exodus: Gods and Kings and The Scorpion King. Mesopotamia is almost totally limited to Biblical stories such as Noah. (In books, things are a little better, if only because you don’t have to spend money on costumes and set design.)

It’s entirely possible to write a story about the ancient world. It’ll take research and thought, as well as the capability to imagine a time so alien to anything we know. It’s been done before, though, and there are good stories to tell. Not just the Caesars and the Constantines, or Jesus or the Jews. Antiquity comprises an entire world far larger than our own, a world in the process of being formed.

Let’s make a language, part 19c: Plants (Ardari)

Ardari mostly inhabits the same region of space and time as Isian, as we have previously stated. It’s a little more…worldly, however. Yes, it’ll take in loans from outside languages, but not always, and it’ll often change them around to fit its own style. It has essentially the same “stock” of native botanical terms as Isian, though with a few quirks.

Word List

General terms

Remember that Ardari has a gender distinction in nouns. It’s not entirely arbitrary, although it may seem that way when you look at the vocabulary list. But there is actually something of a pattern. “Flower” words tend to be feminine (byali “berry”, afli “flower”), while “stem” words (pondo “stem”, kolbo “root”) are often masculine.

  • berry: byali
  • flower: afli
  • fruit: zulyi
  • grain: tròk
  • grass: sèrki
  • leaf: däsi
  • nut: gund
  • plant: pämi
  • root: kolbo
  • seed: sano
  • stem (stalk): pondo
  • to harvest: kèt-
  • to plant: mäp-
  • tree: buri

Plant types

Ardari doesn’t like compounds very much, but nature is an exception, as you can see from nòrpèpi “orange” below. The other words are pretty standard, with the “foreign” plants often showing up in loanword form: bönan, pòtato, etc. Note that the masculine/feminine distinction above doesn’t carry through the whole language, but there is a tendency for fruits and flowers to be feminine, while “ground” crops are more often masculine.

  • apple: pèpi
  • banana: bönan (loan)
  • bean: bècho
  • carrot: dälyo
  • cherry: twali
  • corn (maize): mescon (loan, “maize corn”)
  • cotton: dos
  • fig: saghi
  • flax (linen): tintir
  • grape: kalvo
  • mint: òm
  • oak: ulk
  • olive: älyo
  • onion: ösint
  • orange: nòrpèpi (compound: “orange apple”)
  • pea: myo
  • pepper: pypèr (loan)
  • pine: byuno
  • potato: pòtato (loan)
  • rice: izho
  • rose: zalli
  • wheat: èmlo

Later on

Again, Ardari has more words for plants than I’ve shown here, but I don’t want to be here all month. We’ve got better things to do. The next part of the series moves on to animals, from the tiniest insects to the biggest behemoths nature can throw at us.

On procedural generation

One of my favorite uses of code is to create things. It always has been. When I was young, I was fascinated by fractals and terrain generators and the like. The whole idea of making the code to make something else always appealed to me. Now, as it turns out, the rest of the world has come to the same conclusion.

Procedural generation is all the rage in games these days. Minecraft, of course, has made a killing off of creating worlds from nothing. No Man’s Sky may have flopped, but you can’t fault its ambition: not only was it supposed to have procedurally-generated worlds, but a whole galaxy full of aliens, quests, and, well, content. That last part didn’t happen, but not because of impossibility. The list goes on—and back, as Elite, with its eight galaxies full of procedural star systems, is about as old as I am.

Terrain

Procedural terrain is probably the most widely known form of generation. Even if you’ve never played with TerraGen or something like that, you’ve probably played or seen a game that used procedural heightmaps. (Or voxels, like Minecraft.) Making terrain from code is embarrassingly easy, and I intend to do a post in the near future about it.

From the initial generation, you can add in lots of little extras. Multiple passes, possibly using different algorithms or parameters, give a more lifelike world. Tweaking, say, the sea level changes your jagged mountain range into an archipelago. You can go even further, adding in simulated plate tectonics or volcanic deposition or coastline erosion. There really are no boundaries, but realism takes some work.
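To show just how little code “embarrassingly easy” means here, below is a toy heightmap generator: a few layers of interpolated random noise summed together, with a sea level cutoff, in the spirit of the “multiple passes” idea above. It’s only a sketch of my own; the names and parameters don’t come from any particular library.

```cpp
// heightmap.cpp -- a toy layered-noise heightmap with a sea level cutoff.
// Build with: g++ -std=c++11 heightmap.cpp  (rand() is unseeded, so the map
// is the same every run; call srand() for variety).
#include <cstdio>
#include <cstdlib>
#include <vector>

// One "pass": a coarse grid of random values, sampled between grid points
// with bilinear interpolation so it looks smooth rather than speckled.
struct NoiseLayer {
    int size;
    std::vector<double> grid;
    explicit NoiseLayer(int n) : size(n), grid(n * n) {
        for (double& v : grid) v = rand() / (double)RAND_MAX;
    }
    double at(int x, int y) const { return grid[(y % size) * size + (x % size)]; }
    double sample(double x, double y) const {
        int x0 = (int)x, y0 = (int)y;
        double fx = x - x0, fy = y - y0;
        double top = at(x0, y0) * (1 - fx) + at(x0 + 1, y0) * fx;
        double bot = at(x0, y0 + 1) * (1 - fx) + at(x0 + 1, y0 + 1) * fx;
        return top * (1 - fy) + bot * fy;
    }
};

int main() {
    const int width = 64, height = 32, octaves = 4;
    const double seaLevel = 0.5;   // raise this and the mountains become islands

    std::vector<NoiseLayer> layers;
    for (int o = 0; o < octaves; ++o) layers.emplace_back(8);

    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            double h = 0, amp = 1, freq = 1.0 / 16, total = 0;
            for (int o = 0; o < octaves; ++o) {   // each pass adds finer detail
                h += amp * layers[o].sample(x * freq, y * freq);
                total += amp;
                amp *= 0.5;                       // finer layers count for less
                freq *= 2;
            }
            h /= total;                           // normalize to [0, 1]
            putchar(h < seaLevel ? '~' : (h < 0.7 ? '.' : '^'));
        }
        putchar('\n');
    }
}
```

Even at this size you can see the knobs: change seaLevel and the same map turns from a mountain range into an archipelago, swap the noise for another algorithm and the character of the terrain changes with it.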

Textures and models

Most 3D modeling software will give you an option to make “procedural” textures. These can be cool and trippy, especially those based on noise functions, but it’s very difficult to use them to make something realistic. That doesn’t stop them from being useful for other things; a noise bump map might be more interesting than a noise texture, but the principle is the same.

Going one step up—to actual procedural models—is…not trivial. The “creature generators” in games like No Man’s Sky or Spore are severely limited in what they can do. That’s because making models is hard work already. Leaving the job in the hands of an algorithm is asking for disaster. You’re usually better off doing as they do, taking a “base” model and altering it algorithmically, but in known ways.

Sound

Procedural sound effects and music interest me a lot. I like music, I like code. It seems only natural to want to combine the two. And there are procedural audio packages out there. Making them sound melodic is like getting a procedural model to work, but for your ears instead of your eyes. It’s far from easy. And most procedural music tends to sound either very loopy and repetitive, or utterly listless. The generating algorithms we use aren’t really suited for musical structure.
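If you want to hear the problem for yourself, the simplest generator imaginable is a random walk over a scale. The little sketch below (plain C++ of my own, no audio package involved) prints sixteen note names; hum or play them and the “listless” effect is immediate, because nothing in the algorithm knows about phrases, repetition, or resolution.

```cpp
// melody_walk.cpp -- the crudest possible music generator: a random walk
// over a pentatonic scale. No wrong notes, and no memorable ones either.
#include <cstdio>
#include <cstdlib>
#include <ctime>

int main() {
    const char* scale[] = {"C", "D", "E", "G", "A"};   // C major pentatonic
    const int scaleSize = 5;

    srand((unsigned)time(nullptr));
    int degree = 0;                                    // start on the tonic
    for (int note = 0; note < 16; ++note) {
        std::printf("%s ", scale[degree]);
        // Step up or down one scale degree, clamped to the scale's ends.
        degree += (rand() % 2 == 0) ? 1 : -1;
        if (degree < 0) degree = 0;
        if (degree >= scaleSize) degree = scaleSize - 1;
    }
    std::printf("\n");
}
```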

Story

Now here’s an intriguing notion: what if algorithms could write a story for us? Creating something coherent is at the high end of the difficulty curve, but that hasn’t stopped some from trying. There’s even a NaNoWriMo-like contest for it.

On a smaller scale, games have been making side quests and algorithmic NPCs for years. That part isn’t solved, but it isn’t hard. (For some reason, Fallout 4 got a lot of press for its “radiant quests” in 2015, like it was something new. Though those are, I think, more random than procedural…) Like modeling, the easiest method is to fill in parts of a pre-arranged skeleton, a bit like Mad Libs.
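To make the skeleton idea concrete, here’s a tiny sketch of a Mad Libs quest generator. The template and word lists are mine, purely illustrative, and not taken from any actual game.

```cpp
// quest_skeleton.cpp -- the "Mad Libs" approach to side quests: one fixed
// template, with the blanks filled in from word lists at random.
#include <cstdio>
#include <cstdlib>
#include <ctime>
#include <vector>

// Pick a random entry from a list of options.
const char* pick(const std::vector<const char*>& options) {
    return options[rand() % options.size()];
}

int main() {
    srand((unsigned)time(nullptr));

    std::vector<const char*> verbs  = {"Fetch", "Destroy", "Deliver", "Guard"};
    std::vector<const char*> items  = {"the jeweled chalice", "a crate of supplies",
                                       "the ancient tome"};
    std::vector<const char*> places = {"the ruined temple", "the northern mines",
                                       "the harbor district"};
    std::vector<const char*> givers = {"the innkeeper", "a nervous courier",
                                       "the captain of the guard"};

    // The skeleton: every quest has the same shape, only the blanks change.
    std::printf("%s %s at %s for %s.\n",
                pick(verbs), pick(items), pick(places), pick(givers));
}
```

Swap the single template for a handful of them and weight the lists by region or faction, and you have the “radiant” style in miniature.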

Anything else

Really, there’s no limit to what can be made through procedural generation. That’s probably why I like it so much. From a small starting seed and a collection of algorithms, amazing things can be brought forth. In an age of multi-gigabyte games full of motion-captured animation, professional voice talent, and real-world scenery, it’s a refreshing change to make something beautiful out of a few letters and numbers.

The alternative

No form of government is perfect. If one were, every nation-state would eventually gravitate towards it. Nor will I say that I have developed the perfect form of rule. In fact, I’m not sure such a thing is possible. However, I can present an alternative to the deeply flawed systems of the present day.

Guided by the principles of good government we have previously seen, and aided by logic, reason, and the wisdom of ages, we can derive a new method, a better method. It is not a fully-formed system. Rather, it is a framework with which we can tinker and adjust. It is a doctrine, what I call the Doctrine of Social Liberty.

I cannot accept the strictures of current political movements. In my eyes, they all fail at some point. That is the reason for stating my principles. Those are the core beliefs I hold, generalized into something that can apply to any nation, any state. A government that does not follow those principles is one that fails to represent me. I am a realist; as I said above, nothing is perfect. Yet we should strive for perfection, despite it being ever unattainable. The Doctrine of Social Liberty is my step in that direction.

More than ever, we need a sensible, rational government based on sound fundamentals. The answer does not lie in slavishly following the dogmatic manifestos of radical movements. It does not lie in constant partisan bickering. It can only be found by taking a step back, by asking ourselves what it is that we want from that which governs us.

Over the coming weeks, I hope to detail what I want from a government. I don’t normally post on Tuesdays, but the coming election presents a suitable reason to do so. In four posts, I will describe my doctrine in its broadest strokes, and I will show how our current ruling class violates the principles I have laid out. Afterward, once the new year begins, I want to go into greater detail, because I think these things will become even more relevant.

Let’s make a language, part 19b: Plants (Isian)

We’ve already established that Isian is a language of our world. We’ve also set it somewhere in the Old World, in a place relatively untouched by the passage of time. That means it won’t have had much contact with the Americas, so the most common plant terms will be those from Eurasia, with a few popular items coming from Africa. On the other hand, Isian has native words for all the different parts of the plant, as well as what to do with them. Again, this comes from our worldbuilding: Isian is spoken in an agrarian society, so it’s only natural that its speakers would name such an integral part of their world.

Word list

General terms

These are parts of plants, mainly the important (i.e., edible) parts, as well as a few terms for the broad types of plants. Note that all of these are native Isian words, and almost all are also “fundamental” words, not derived from anything.

  • berry: eli
  • flower: atul
  • fruit: chil
  • grain: kashel
  • grass: tisen
  • leaf: eta
  • nut: con
  • plant: dires
  • root: balit
  • seed: som
  • stem (stalk): acut
  • to harvest: sepa
  • to plant: destera
  • tree: taw

Plant types

This set of words names specific types of plants. These fall into three main categories. First, there are the native terms, like pur “apple”, which are wholly Isian in nature. Next are the full-on loanwords, taken from the “common” names used in many parts of Europe; these are usually the New World plants where Isian has no history of association. Finally, there are a few compounds, like cosom, “pepper”, formed from ocom “black” and som “seed”.

  • apple: pur
  • banana: banan (loan)
  • bean: fowra
  • carrot: cate(s)
  • cherry: shuda(s)
  • corn (maize): meyse (loan)
  • cotton: churon
  • fig: dem
  • flax (linen): wod
  • grape: ged
  • mint: ninu
  • oak: sukh
  • olive: fili(r)
  • onion: dun
  • orange: sitru(s) (loan, “citrus”)
  • pea: bi (note: not a loan)
  • pepper: cosom (compound: “black seed”)
  • pine: ticho (from a compound “green tree”)
  • potato: pota (loan)
  • rice: manom
  • rose: rale(r)
  • wheat: loch

Coming up

These are far from the only words in the Isian language regarding plants, but they’re a good start, covering a lot of bases while also illustrating how we can combine worldbuilding and conlanging to make something better. Next week, we’ll see things from the Ardari side of the fence. Spoiler alert: it’s not exactly the same.

On visual programming

With the recent release of Godot 2.1, the engine’s developers broke the news of a project coming to a future version: a visual scripting system. That’s nothing new, even in the world of game development. Unreal came out with Blueprints a while back, and even Godot already used something similar for its shaders. Elsewhere, “visual programming” has its proponents and detractors, and I’m going to throw in my two cents.

Seeing is believing

In theory, visual programming is the greatest revolution in coding since functions. It’s supposed to deliver all the benefits of writing programs to those who don’t want to, well, write. The practice has its origins in flowcharts, which are graphical representations of algorithms. Computer scientists got to thinking, “Okay, so we can make algorithms like this, so why not whole applications?” It’s only been fairly recently that computers have been able to fulfill this desire, but visual programming languages are now springing up everywhere.

The idea is deceptive in its simplicity. Instead of laboriously typing out function declarations and loop bodies, you drag and drop boxes (or other shapes), connecting them to show the flow of logic through your program. The output of one function can be “wired” to another’s input, for example. The benefits are obvious. Forget about syntax errors. Never worry about type mismatches again. Code can truly become art. With the name of this site, you’d think I could get behind that.

It does work, I’ll give you that. MIT’s Scratch is used by tens of thousands of programmers, young and old alike. Through its ingenious “building block” system, where the “boxes” are shaped like pieces in a jigsaw puzzle, you never have to worry about putting the wrong peg into the wrong hole. For children who barely understand how a computer works—or, in extreme cases, may not know how to read yet—it’s a great help. There’s a reason why it’s been copied and cloned to no end. It even makes coding fun.

What can’t be unseen

The downsides to visual programming, however, are not fun at all. Ask anyone who’s ever suffered through LabVIEW (I’ve read enough horror stories to keep me away from it forever). Yes, the boxes and blocks are cute, but complex logic is bad enough when it’s in linear, written form. Converted to a more visual format, you see a literal interpretation of the term “spaghetti code”. Imagine how hard it was to write. Now imagine how hard it would be to maintain.

Second, visual programming interfaces have often suffered from a lack of operations. If you wanted to do something there wasn’t a block for, you were most likely out of luck. Today, it’s not so bad. Unreal’s Blueprints, for example, gives you about 95% of the functionality of C++, and Godot’s variation purports to do the same.

But some things just don’t fit. Composition, an important part of programming, is really, really hard to get right visually. Functional styles look like tangled messes. Numerics and statistics are better served in a linear form, where they’re close to the mathematics they’re implementing.

The verdict

I’m not saying visual programming is useless. It’s not. There are cases where it’s a wonderful thing. It’s great for education, and it’s suited to illustrating the way an algorithm works, something that often gets lost in the noise of code. But it needs to be used sparingly. I wouldn’t write an operating system or device driver in Scratch, even if I could. (You can’t, by the way.)

In truth, the visual “style” doesn’t appeal to me. That’s a personal opinion. Your mileage may vary. When I’m learning something new, it’s certainly a big help, but we have to take the training wheels off eventually. Right now, that’s how I see a visual scripting system. For a new programmer, sure, give it a try. You might love it.

But you probably won’t. After a while, you’ll start bumping into the limitations. You’ll have webs of logic that make spiders jealous, or customized blocks that look like something by Escher. Stay simple, and you might keep your sanity—as much as any programmer has, anyway. But we’re not artists. Not in that way, at least. Code can be beautiful without being graphical.

Magic and tech: safety and security

Despite what you may hear from TV and other sources of news, the world we live in today is the safest there’s ever been. Those of us living in the modern, industrialized West enjoy a level of personal, private, and public safety that would make earlier ages green with envy. Some of that comes from philosophy, from political science and enlightened ideas about the responsibilities of good government. With the representative democracies that make up most of Europe and North America, we’re all invested in the safety of everyone. An attack on one of us is an attack on all of us.

But technology also plays an important role in keeping us protected and in allowing us to live our lives free of the fear of random violence or other threats. Say what you will about them, but guns are a sufficient deterrent in many instances. But this isn’t the only form of technological security. Look at crash helmets, airbags, or even knee pads—all inventions created to keep us safe from incidental harm.

Science of safety

Today, we’re seeing a lot of talk about safety and security. Before we can look at them, though, we need to distinguish the two terms. Security, as I see it, is active protection from external threats, looking out for the things that might hurt you and dealing with them. Safety is more like not having those threats in the first place, or mitigating their causes so that they never get the chance to harm you at all. The two are intertwined, however.

Most technology deals with both ends of this spectrum at the same time. Take, for instance, collision avoidance. It’s a safety feature, in that its whole point is to steer you away from the possibility of a crash. But it can also be an active security system: if another car cuts you off, it can avoid that potential crash, too. Some of the more advanced systems can also stop you from causing an accident, by creating a negative feedback in steering or simply ignoring your movements of the wheel completely.

Safety and security aren’t limited to electronic assistance. They go back to the beginning of time. Any non-hunting weapon (or hunting weapon used for self-defense) is an implement of security. So are bodyguards and even standing armies. Public policies dating back to the age of Rome and before instituted measures of safety, from sanitation standards to traffic ordinances to weapons bans. (Whether these worked, of course, is a matter of debate.)

Socially speaking, there are also two ways we can look at safety. First, we can take it into our own hands. Anyone who owns a gun, has an alarm system, or even wears a seatbelt is doing exactly this. By following what we perceive to be “best practices”, we can make ourselves as safe as we wish. If X will harm you, then you try to put yourself in a position where X can’t get to you.

The alternative (not that they are mutually exclusive) is to put your trust in another. We also do that all the time. The whole point of a society based on the rule of law is that someone, somewhere, is responsible for the safety of the public. Whether that’s a king, president, or whatever you like, it doesn’t matter. Someone is looking out for you. We can’t protect against every threat, so we delegate to them.

Safety in magic

Most of our best safety and security comes from technology, whether that’s guns, cameras, anti-virus programs, or just a combination lock. Since we’ve established that magic can replace an awful lot of tech, we have to wonder: can magic make people safer?

Well, we’ve already seen a couple of realms where it does: medicine and self-defense. That’s proof enough of the merit of magical security. But how much further can we take this?

If your magic system allows shields of force (for this series, ours doesn’t, but bear with me), then that right there is a great example. Something like that would become extremely popular, especially if it’s not that hard to make. A single charm or enchantment that makes you all but immune to weapons, blunt trauma, falling, and the elements? You’d be crazy not to get one. But let’s say you’re working with something a little more low-key, like we are. We don’t have the luxury of an easy illustration of the power of magical security, so we’ll have to look at a few other possibilities.

We have an amplifying spell. A crafty mage can take this and turn it around. Instead of a speaker making his voice louder, a wary person can make ambient sounds louder. Sounds like, say, someone creeping through the bushes. It’s a primitive, but useful, security microphone. From the same earlier entry in this series, we also see a ventriloquist effect that can serve as a helpful bit of misdirection. If they think you’re over there, but you’re really here, those dangerous enemies will be out of position, giving you time to strike or run away.

Magical power, whether electrical or motive, gives us the opportunity to create such things as self-locking doors and electrified fences. Metallurgy, improved by the arcane arts, makes it easier to forge not only heavy, secure locks but also the delicate keys needed to open them. A mage’s invisible markings can be used as fingerprinting or watermarking: a secure method of verifying the identity of a message’s sender. On the safety side, we have, of course, medicine and sanitation as the big winners, but they’re not the only ones.

Magic, and the scientific, empirical mindset it’s bringing to our fictional realm, will make many areas safer. From the grand (weather forecasting) to the mundane (washing hands), as our magical society becomes more advanced, it will seek out ways to keep its populace safe and secure. Sometimes, this may go too far—the seemingly inexorable slide of our own world into a surveillance state is an example—but one can hope the mages are smarter.

Safe and sound

If you’ll recall, our magical kingdom is, technologically speaking, still in the late medieval era. The added magic, however, is bringing it up to near-modern levels. Part of that advancement is in making people safer. If you do that, they live longer, healthier, better lives. They become more productive, and you eventually get the positive feedback loop that can explode into modernity. All you have to do is take some of the danger out of the world. Once the existential threats are gone, people can begin to better themselves.

Let’s make a language, part 19a: Plants (Intro)

Plants are everywhere around us. Grass, trees, flowers…you can’t get away from them. We eat them, wear them, and write on them. Growing them for our own use is one of the markers of civilization. Which plants a culture uses is as defining as its architectural style…or its language.

Every language used by humans will have an extensive list of plant terms. It’ll have names for individual plants, names for collections of them, names for parts of them. How many? Well, that depends. To answer that question, we’ll need to do a little worldbuilding.

The easy method

If you’re creating a modern (or future) language intended to be spoken by everyday humans, your task is fairly easy. All you have to do is borrow plant terms from one of the major languages of the day: English, Spanish, etc. Or you can use the combination of Latin and Greek that has served the West so well for centuries. Either way, an auxlang almost doesn’t need to make its own plant words.

Even naturalistic languages set in modern times can get away with this a bit. Maybe some plant terms have “native” words, but most of the rest are imported, just like the plants themselves. You could have native/loanword pairs, where the common folk use one word, but educated or formal contexts require a different one.

Harder but better

The further you get from modern Earth, the harder, but ultimately more rewarding, your task will be. Here’s where you need to consider the context of your language. Where is it spoken? By whom? And when? How much of the world do its speakers know?

Let’s take a few examples. The grapefruit is a popular fruit, but its history only extends back to the 1700s. A “lost” language in medieval Europe wouldn’t know of it, so its speakers wouldn’t have a word for it. (Which is probably close to why it received the rather generic name of “grapefruit” in the first place.) Coffee, though grown in Colombia today, is native to the Old World, so ancient Amazonians would never have seen it. It wouldn’t be part of their world, so it wouldn’t get a name. Conversely, potatoes and tomatoes are American-born; you’d have to have a really good reason why your hidden-in-the-Caucasus ancient language has words for them.

For alien planets, it’s even worse. Here, you don’t even have the luxury of borrowing Earth names. But that also gives you the ultimate freedom in creating words. And that leads us to the next decision: which plants get which names.

Making your own

Remember this one general principle: common things will have common names. The more “outlandish” something is, the more likely it will be represented by a loanword. Also, the sheer number of different plants means that only a specific subset will have individual words. Most will instead be derived. In English, for example, we have the generic berry, describing (not always correctly) a particular type of fruit. We also have a number of derived terms: strawberry, blueberry, raspberry, huckleberry, and so on. Certain varieties of plants can even get compound names that are descriptive, such as black cherries; locative, like Vidalia onions; or (semi-)historical, such as Indian corn.

Plants often grow over a wide area, so it stands to reason that there will be dialectal differences. This provides an element of depth, in that you can create multiple words for the same plant, justifying them by saying that they’re used by different sets of speakers. Something of an English example is corn itself. In England, “corn” is a general term referring to a grain. For Americans, it’s specifically the staple crop of the New World, scientific name Zea mays. Back across the pond, that crop is instead called maize, but the American dialect’s “maize” tends to connote less-cultivated forms, such as the multicolored “Indian corn” associated with Thanksgiving. Confusing, I know, but it shows one way the same plant can get two names in the same language.

The early European explorers of America had the same problem a budding conlanger will have, so we can draw some conclusions from the way they did it. Some plants kept their native names, albeit in horribly mangled forms; examples include cocoa and potato. Some, such as tomatillo (Spanish for “little tomato”), are derived from indigenous terms. A few, like cotton, were named because they were identical or very close to Old World plants; the Europeans just used the old name for the new thing. Still others got the descriptive treatment, where they were close enough to a familiar plant to earn its name, but with a modifier to let people know it wasn’t the same as what they were used to.

The other side

In the next two entries, we’ll see what words Isian and Ardari use for their flora, and then it’s on to the other side of the coin, the other half of the couple. Animals. Fauna. Whatever you call them, they’re coming up soon.

Let’s end threads

Today’s computers almost all have multiple cores. I don’t even think you can buy a single-core processor anymore, at least not one meant for a desktop. More cores means more things can run at once, as we know, and there are two ways we can take advantage of that. One is easy: run more programs. In a multi-core system, you can have one core running the program you’re using right now, another running a background task, and two more ready in case you need them. And that’s great, although you do have the problem of only one set of memory, one hard drive, etc. You can’t really parallelize those.

The second option uses the additional cores to run different parts of a single application. Threads are the usual way of doing this, though they’ve been around far longer than multi-core processors. They’re more of a general concurrency framework. Put a long-running section of code in its own thread, and the OS will make sure it runs. It won’t block the rest of the program. And that’s great, too. Anything that lets us fully utilize this amazing hardware we have is a good thing, right?

Well, no. Threads are undoubtedly useful. We really couldn’t make modern software without them. But I would argue that we, as higher-level programmers, don’t need them. For an application developer, the very existence of threads should be an implementation detail in the same vein as small string optimizations or reference counting. Here’s why I think this.

Threads are low-level

The thread is a low-level construct. That cannot be denied. It’s closer to the operating system layer than the application layer. If you’re working at those lower levels, then that’s what you want, but most developers aren’t doing that. In a general desktop program or mobile app, threads aren’t abstract enough.

To put it another way: C++ bears the brunt of coders’ ire over its “manual” memory management. Unlike their counterparts in C# or Java, C++ programmers do need to understand the lifecycle of their data: when it is constructed and destructed, and what happens when it changes hands. But few complain about keeping track of a thread’s lifecycle, which is essentially the same problem.

Manual threading is error-prone

This follows from threads being low-level: as any programmer will tell you, the lower you are in the “stack”, the more likely you are to unwittingly create bugs. And there may be no bigger source of bugs in multithreaded applications than the threading itself.

Especially in C++, but by no means unheard of in higher-level languages, threading leads to all sorts of undefined or ill-defined behavior, race conditions, and seemingly random bugginess. Because threads are scheduled by the OS, they’re out of your control. You don’t know what’s executing when. End a thread too early, for example, and your main program could try reading data that’s no longer there. And that can be awfully hard to detect with a debugger, since the very act of running something in debug mode will change the timing, the scheduling.

Sharing state sucks

In an ideal situation, one that the FP types tell us we should all strive for, one section of code won’t depend on any state from any other. If that’s the case, then you’ll never have a problem with memory accesses between threads, because there won’t be any.

We code in the real world, however, and the pure-functional approach simply does not work everywhere. But the alternative—accessing data living in one thread from another—is a minefield full of semaphores and mutexes and the like. It’s so bad that processors have implemented “atomic” memory access instructions just for this purpose, but they’re no magic bullet.
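If you’ve never had the pleasure, here’s what even the most trivial piece of shared state costs you. This is a C++11 sketch of my own, not anyone’s production code: one counter, two threads, and a lock you’d better not forget.

```cpp
// shared_counter.cpp -- two threads bumping one counter. Without the mutex
// (or a std::atomic<int>), the increments race and the final total is
// anyone's guess. Build with: g++ -std=c++11 -pthread shared_counter.cpp
#include <iostream>
#include <mutex>
#include <thread>

int counter = 0;
std::mutex counterMutex;   // guards every access to counter, by convention only

void work() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> lock(counterMutex);  // manual, easy to forget
        ++counter;
    }
}

int main() {
    std::thread a(work), b(work);
    a.join();   // forget either join and the program aborts at exit
    b.join();
    std::cout << counter << '\n';   // 200000, but only because of the lock
}
```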

Once again, this is a function of threads being “primitive”. They’re manually operated, with all the baggage that entails. In fact, just about every problem with threads boils down to that same thing. So then the question becomes: can we fix it?

A better way

Absolutely. Some programming languages are already doing this, offering a full set of async utilities. Generally, these are higher-level functions, objects, and libraries that hide the workhorse threads behind abstractions. That is, of course, a very good thing for those of us using that higher level, those who don’t want to be bogged down in the minutiae of threading.

The details differ between languages, but the usual idea is that a program will want to run some sort of a “task” in the background, possibly providing data to it at initialization, or perhaps later, and receiving other data as a result. In other words, an async task is little more than a function that just happens to run on a different thread, but we don’t have to care about that last part. And that is the key. We don’t want to worry about how a thread is run, when it returns, and so on. All we care about is that it does what we ask it to, or else it lets us know that it can’t.

This async style can cover most of the other thread-based problems, too. Async tasks only run what they’re asked, they end when they’re done, and they give us ways (e.g., futures) to wait for their results before we try to use them. They take care of the entire lifecycle of threading, which we can then treat as a black box. Sharing memory is a bit harder, as we still need to guard against race conditions, but it can mostly be automated with atomic access controls.
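In C++ terms, that black box is roughly what std::async and std::future already give you. A minimal sketch, with names of my own, just to show the shape of it:

```cpp
// async_task.cpp -- the "task" style: start some work, hold a future, and
// collect the result when you actually need it. Whatever thread runs the
// work is an implementation detail.
// Build with: g++ -std=c++11 -pthread async_task.cpp
#include <functional>
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

long long sumRange(const std::vector<int>& data) {
    return std::accumulate(data.begin(), data.end(), 0LL);
}

int main() {
    std::vector<int> data(1000000, 1);

    // Launch the task; we never touch a std::thread directly.
    std::future<long long> result =
        std::async(std::launch::async, sumRange, std::cref(data));

    // ... the main thread is free to do other work here ...

    std::cout << "sum = " << result.get() << '\n';  // blocks only if not finished
}
```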

A message-passing system like the ones Scala and Erlang use can even go beyond this, effectively isolating the different tasks to an extent resembling that of the pure-FP style. But even in, say, C++, we can get rid of most direct uses of threads, just like C++11 removed most of the need for raw pointers.

On the application level, in 2016, there’s no reason a programmer should have to worry about manual memory allocation, so why should we worry about manual thread allocation? There are better ways for both. Let’s use them.