Summer Reading List 2024: Second

The world’s a mess, my life’s a mess, but at least I’m reading. Right?

Military History

Title: The Guns of August
Author: Barbara Tuchman
Genre: Military History
Year: 1962

World War I has always fascinated me. Ever since the 6th grade, when I had to choose a topic for a social studies project (that year was world history, and I had reached the early 20th century), I’ve been hooked by the stories and the sheer scope of the Great War. My grade on that project was terrible—I almost failed, simply because there was just so much to learn that I couldn’t narrow it down enough, and didn’t have time to rehearse—but the memory remained.

Over the past decade, the war that had languished in relative obscurity all my life finally started to get back into the public eye. Mostly, that’s because of the centennial: the 100th anniversary of Archduke Franz Ferdinand’s assassination in June 1914. That milestone spurred a wave of new media, whether movies (1917), video games (Verdun and its sequels, Tannenberg and Isonzo), web series (Indy Neidell’s The Great War), or music (Sabaton’s…er, The Great War). Many of these are excellent, and I’ve spent the past ten years basking in the knowledge that I finally got to be a history hipster.

But I hadn’t read a book about WWI in decades, and I hadn’t planned on doing so this summer, either, until Elon Musk shared a list of his must-listen audiobooks. Since I don’t really care for audiobooks—I’m a visual learner—I downloaded them in written form, and I picked the first interesting one I saw that wasn’t 11 volumes. Sorry, Will Durant.

The Guns of August is a fairly detailed narrative, drawn from diaries, newspapers, the occasional eyewitness report, and other primary or good secondary sources. Its topic is, broadly speaking, the first month of the war. In practice, it starts somewhat before that, with the death of Edward VII, King of England. His death, and his succession by George V, created a power vacuum in a geopolitical landscape that was already growing increasingly tense. Europe had four years until war finally broke out (ignore the Balkan Wars for a moment), but the buildup had already begun. Edward’s death, as Tuchman argues in a roundabout way, set Europe on the path to war.

Most of us know the broad strokes of the summer of 1914. Ferdinand was shot in Sarajevo. Austria demanded reparations from Serbia; recall that this was before Yugoslavia existed, much less an independent Bosnia. Favors were called in on both sides, drawing in first Germany, France, Russia, and Great Britain, then seemingly every other country in the world. Four years and many millions of dead young men later, the original belligerents peaced out one by one, ending with Germany signing the Treaty of Versailles.

As not enough of us know, that treaty was designed to be so ruinous that the German Empire would cease to be a nation able to project power abroad. Indeed, it was the end of the empire as a whole. Instead, the Kaiser’s rule was replaced with the decadent debauchery of the Weimar Republic, which served to suck out the marrow of the German economy while leaving its society fractured, fragmented. Exactly as we’re seeing in modern America, but I digress.

Anyway, Tuchman’s book isn’t about that part of the war. In fact, it leaves off as the Battle of the Marne begins, ending with a series of what-ifs that are tantalizing to the worldbuilder in me. What if the German armies hadn’t tried to do a forced march just to stick to their predetermined schedule of battles? What if Britain’s Field Marshal French hadn’t been swayed by a rare emotional outpouring from the normally stoic General Joffre? (Now I really want to write that alt-history!)

No matter what might have happened if things had gone differently in August 1914, the author makes it clear that what occurred in the weeks immediately prior to the German advance to the outskirts of Paris was pretty much set in stone. Before Franz Ferdinand was so much as cold, Europe was going to war. It was only a matter of when.


As far as the book goes, it’s a good read. It’s nothing brilliant, and certainly not worth a Pulitzer, in my opinion. The writing can be almost too highbrow at times, as if Tuchman is trying to capture the last gasp of the Victorian Era in words. To be fair, that’s how most of the major players talked and wrote, but readers even in the 1960s wouldn’t have been exposed to it except in literature classes. Certainly not when discussing military history. There are also scores of untranslated sentences in French and German, an oddity in a book written for English speakers.

The pacing is also very uneven. The Russians get a couple of fairly long chapters, but are otherwise forgotten; Tannenberg is practically a footnote compared to Liege. Conditions on a forced march get page after page of narration, including diary excerpts from soldiers, while the battles themselves are mostly reduced to the traditional "date and body count" sort of exposition.

If there’s any real critique of The Guns of August, it has to come from its very obvious and very intentional Allied bias. While the happenings in Germany and among the Kaiser’s generals are well-represented, they’re often cast in a negative light. When the Germans demolish a village in retaliation for partisan attacks, it’s a war crime and an international outrage. When the French demolish a village because they think they might need to put up defenses, it’s a heroic effort to save their country.

This is, of course, the same kind of thinking that still permeates the discussion about the Great War’s sequel. The "bad guys" aren’t allowed to take pride in their country. Their nationalism is evil; ours is sacred. (This line of reasoning also leads otherwise sensible people to praise Communists.) The simple fact is, the Weimar Republic was far worse than it’s portrayed, and the governments to either side of it on the timeline, whether Empire or Reich, were not as bad as they’re portrayed. Barbara Tuchman, being a student of her generation, can’t get past that. Even if she tried, I imagine her publishers wouldn’t let her.

Otherwise, The Guns of August is a worthwhile read for its subject matter. It’s a good look at the backdrop to World War I, something that occasionally gets lost among the trenches. Personally, I find it a bit overrated, but I’m glad I read it.

Summer Reading List 2024: First

It’s been about a month, and I finally made time to read something. Thanks to my brother’s timely discovery of a Youtube channel called "In Deep Geek", I got a little inspired for this one. Man, I hope that guy starts posting on a site that respects its users soon.

Biography

Title: The Nature of Middle-Earth
Author: J.R.R. Tolkien (ed. Carl Hostetter)
Genre: Biography/History?
Year: 2021

I don’t really know how to classify this book. It’s basically a collection of notes and scraps that Tolkien left behind. As with his son Christopher’s History of Middle-Earth series, a ton of editing had to be done to make something readable. And…well, that didn’t quite work. The book as a whole is very disjointed, full of footnotes and editor comments and just a mess overall.

That makes perfect sense, though. Tolkien was probably the first great worldbuilder. He worked in an era without computers, without the internet. He had to write out his notes longhand. And there were a ton of those notes, because his constructed world began all the way back in the days before World War I. 1909, or thereabouts, was when he first started sketching out the conlang that would become Quenya. By his death, those earliest notes were senior citizens. There was a lot of cruft.

This book, then, is about organizing a lot of that cruft. In that, Hostetter does a good job. His is the job of an archaeologist, in a sense, as well as a forensic scientist. Oh, and a linguist, because Tolkien’s languages were ever the most important part of his creation.

The Nature of Middle-Earth, as its name suggests, gives us notes and drafts related to some of the fundamental questions and thorny problems Tolkien had to solve to give his invented world verisimilitude while also keeping it true to his long-standing ideas and ideals. After all, Middle-Earth is intended to be our world, just a few thousand years in the past. How many, exactly? It’s never stated anywhere in his published books, but this book tells us that Tolkien saw his present day—well, in 1960—as being about 6000 years after the end of LOTR. Convenient, that number, since it’s basically the same as what creationists claim.

And that brings me to the point I want to make. Our editor here repeats his own note a couple of times, emphasizing that Tolkien saw his world as a "fundamentally Catholic" creation. He was a Catholic, so that makes sense in some regard.

Much of the book—much of Tolkien’s corpus of personal notes—is thus about harmonizing a high fantasy world at the cusp of the Dominion of Man with the low, anti-human dogma of the Catholic Church. So Tolkien writes at length, and sometimes in multiple revisions, that his Elves were strictly monogamous, and that they didn’t reincarnate into different bodies. The men of Numenor were the same (except that he didn’t have to worry about reincarnation for them) because they had grown more godly.

In a few cases, Tolkien shows glimpses of a modern scientific worldview that was probably heretical in the churches of his youth. Sure, it’s all in an explicitly theistic framework, but he even accepts evolution for the most part; he can’t quite make the logical leap that humans are subject to it, too, but he meets science halfway, which is more than most would dare.

There is also a glimpse of what I’ve previously called "hardcore" worldbuilding. Tolkien was, of course, a master of that, but The Nature of Middle-Earth shows the extremes he was willing to go to for the sake of his creation. Multiple chapters are taken up with his attempts at giving believable dates for some of the events that were considered prehistorical even in the tales of The Silmarillion. In each, he went into excruciating detail, only to discard it all when he reached a point where the numbers just wouldn’t work. I’ve been there, and now I don’t feel so bad about it. Knowing that the undisputed master of my craft had the same troubles I do is refreshing.

All in all, most of the chapters of the book are short, showing the text of Tolkien’s notes on a subject, plus the occasional editorial comment and the copious footnotes from both authors. We get to see how the sausage is made, and it’s sometimes just as disgusting as we’d expect. Not one reader of LOTR or The Silmarillion cares about the exact population of each tribe of Elves, or what the etymology of Galadriel’s name indicates about her travels, but Tolkien wasn’t writing these things for us. When worldbuilding, we authors do so much work not because we expect to show every bit of it to our audience, but so that the parts we do show are as good as they can be.

If this book has any lesson, then, it’s that. Worldbuilding is hard work. Worse, it’s work that accomplishes almost nothing in itself. Its sole value is in being a tool to better convey a story. Perfectionist and obsessive that Tolkien was, he wanted an answer to any plausible question a reader might ask. But he also wanted to create for the sake of creating. Remember that the intended goal of Middle-Earth was to become a new mythology, mostly for the British peoples. When you set your sights on something that sweeping, you’re always going to find something to do.

Summer Reading List Challenge 2024

Is it already that time of the year? 2024 seems like it’s just flying by, or maybe that’s because I’m old now. Whatever the case, it’s Memorial Day, and that means time to start a new Summer Reading List challenge! Take a look at the original post if you want to see how this all started. If you don’t really care that this is the 9th straight year I’m doing this challenge, then read on.

The rules are the same as always, because they just fit the challenge perfectly. As always, remember that the "rules" presented here are intended to be guidelines rather than strictures. This is all in fun. You won’t be graded, so all you have to do is be honest with yourself.

  1. The goal is to read 3 new (to you) books between Memorial Day (May 27) and Labor Day (September 2) in the US, the traditional "unofficial" bounds of summer. For those of you in the Southern Hemisphere reading this, it’s a winter reading list. If you’re in the tropics…I don’t know what to tell you.

  2. A book is anything non-periodical, so no comics, graphic novels, or manga. Anything else works. If you’re not sure, just use common sense. Audiobooks are acceptable, but only if they’re books, not something like a podcast.

  3. One of the books should be of a genre you don’t normally read. For example, I’m big on fantasy and sci-fi, so I might read a romance, or a thriller, or something like that. Nonfiction, by the way, also works as a "new" genre, unless you do read it all the time.

  4. You can’t count books you wrote, because they obviously wouldn’t be new to you. (Yes, this rule exists solely to keep me from just rereading my books.)

Social media is an awful place these days, and even my usual fediverse haunt is in flux at the moment. I’ll try to post on my alt @nocturne@bae.st, but don’t hold your breath. Instead, just wait for me to write something here. Of course, you can post wherever you like, even if that’s to Facebook, Twitter (I’m not calling it anything else), or something weird like Threads.

Have fun, and keep reading!

Tools and appliances

I was trying to sleep late last night when I had something of an epiphany. I’ve long lamented the dumbing-down of the world, and particularly the tech world. Even in programming, a field that should rely on high intelligence, common sense, and reasoning ability, you can’t get away from it. We’ve reached the point where I can only assume malice, rather than mere incompetence, is behind the push for what I’m calling the Modern Dark Age.

The revelation I had was that, at least for computers, there’s a simple way of looking at the problem and its most obvious solution. To do that, however, we need to stop and think a little more.

The old days

I’m not a millennial. I’m in that weird boundary zone between that generation and the Generation X that preceded it. In terms of attitude and worldview, that puts me in a weird place, and I really don’t "get" either side of the divide. But I grew up with computers. I was one of the first to do so from an early age. I learned BASIC at 8, taught myself C++ at 13, and so on to the dozen or so languages I’m proficient in now at 40. I’ve written web apps, shell scripts, maintenance tools, and games.

In my opinion, that’s largely because I had the chance to experience the world of 90s tech. Yes, my intelligence and boundless curiosity made me want to explore computers in ever-deeper detail, but the time period involved allowed me an exploratory freedom that is just unknown to younger people today.

The web was born in 1992, less than a decade after me. At the time, I was getting an Apple IIe to print my name in an infinite loop, then waiting for the after-recess period when I could play Oregon Trail. Yet the internet as a whole, and the technologies which still provide its underpinnings today, were already mature. When AOL, Compuserve, and Prodigy (among others) brought them to the masses, people in the know had been there for fifteen years or more. (They resented the influx of new, inexperienced rabble so much that "Eternal September" is still a phrase you’ll see thrown about on occasion, a full 30 years after it happened!)

This was very much a Wild West. I first managed to convince my mom that an internet subscription was worth it in 1996, not long before my 13th birthday. At the time, there were no unlimited plans; the services charged a few cents a minute, and I quickly racked up a bill that ran over a hundred dollars a month. But it was worth it.

Nowadays, Google gives people the illusion of all the answers. Back in the day, it wasn’t that simple. Oh, there were search engines: Altavista, Lycos, Excite, Infoseek, and a hundred other names lost to the mists of time. None of these indexed more than a small fraction of the web even in that early era, though. (Some companies tried to capitalize on that by making meta-engines that would search from as many sites as possible.)

Finding things was harder on the 90s web, but that wasn’t the only place to look. Before the dotcom bubble, the internet had multiple, independent communities, many of which were still vibrant. Yes, you had websites. In the days before CSS and a standardized DOM, they were rarely pretty, but the technical know-how necessary to create one—as well as the limited space available—meant that they tended to be more informative. When you found a new site about a topic, it often provided hours of reading.

That screeching modem gave you other options, though. Your ISP might offer a proprietary chat service; this eventually spawned AIM and MSN. AOL especially went all-in on what we now call the "walled garden": chat, news, online trivia games, and basically everything a proto-social network would have needed. On top of that, everyone had email, even if some places (Compuserve is the one I remember best) actually charged for it.

Best of all, in my rose-colored opinion, were the other protocols. These days, everything is HTTP. It’s so prevalent that even local apps are running servers for communication, because it’s all people know anymore. But the 90s had much more diversity. Usenet newsgroups served a similar purpose to what Facebook groups do now, except they did it so much better. Organized into a hierarchy of topics, with no distractions in the form of shared videos or memes, you could have long, deep discussions with total strangers. Were there spammers and other bad actors? Sure there were. But social pressure kept them in line; when it didn’t, you, the user, had the power to block them from your feed. And if you didn’t want to go to the trouble, there were always moderated groups instead.

Beyond that, FTP "sites" were a thing, and they were some of the best places to get…certain files. Gopher was already on its way out when I joined the internet community, but I vaguely remember dipping into it on a few occasions. And while I don’t even know if my area had a local BBS, the dialer software I got with my modem had a few national ones that I checked out. (That was even worse than the AOL per-minute fees, because you were calling long-distance!)

My point here is that the internet of 30 years ago was a diverse and frankly eye-opening place. Ideas were everywhere. Most of them didn’t pan out, but not for lack of trying. Experimentation was everywhere. Once you found the right places, you could meet like-minded people and learn entirely new ways of looking at the world. I’m not even kidding about that. People talk about getting lost in Wikipedia, but the mid 90s could see a young man going from sports trivia to assembly tutorials to astral projection to…something not at all appropriate for a 13-year-old, and all within the span of a few hours. Yes, I’m speaking from personal experience.

Back again

In 2024, we’ve come a long way, and I’m not afraid to state that most of that way was downhill. Today’s internet is much like today’s malls: a hollowed-out, dying husk kept alive by a couple of big corporations selling their overpriced goods, and a smattering of hobbyists trying to make a living in their shadow. Compared even to 20 years ago, let alone 30, it’s an awful place. Sure, we have access to an unprecedented amount of information. It’s faster than ever. It’s properly indexed and tagged for easy searching. What we’ve lost, though, is its utility.

A computer in the 90s was still a tool. Tools are wonderful things. They let us fix, build, create. Look at a wrench or a drill, a nail gun or a chainsaw. These are useful objects. In many cases, they may have a learning curve, but learning unlocks their true potential. The same was true for computers then. Oh, you might have to fiddle with DIP switches and IRQs to get that modem working, but look at what it unlocks. Tweaking your autoexec.bat file so you can get a big Doom WAD running? I did that. Did I learn a useful skill? Probably not. Did it give me a sense of accomplishment when I got it working? Absolutely.

Tools are just like that. They provide us with the means to do things, and the things we can do with them only increase as we gain proficiency. With the right tools, you can become a craftsman, an artisan. You can attain a level of mastery. Computers, back then, gave us that opportunity.

Now, however, computers have become appliances. Appliances are still useful things, of course. Dishwashers and microwaves are undeniably good to have. Yet two aspects set them apart from tools. First, an appliance is, at its heart, a provider of convenience. Microwaves let us cook faster. TVs are entertainment sources. That dryer in the laundry room is a great substitute for a clothesline.

Second, and more important for the distinction I’m drawing here, is that an appliance’s utility is bounded. They have no learning curve—except figuring out what the buttons do—and a fixed set of functions. That dryer is never going to be useful for anything other than drying clothes. There’s no mastery needed, because there’s nothing a mastery of an appliance would offer. (Seriously, how many people even use all those extra cooking options on their microwave?)

Modern computers are the same way. There is no indication that mastery is desirable or useful. Instead, we’re encouraged and sometimes forced into suboptimal solutions because we aren’t given the tools to do better. Even in this century, for example, it was possible to create a decent webpage with nothing more than a text editor. You can’t do that now, though, because browsers won’t even let you access local files from a script. The barrier to entry is thus raised by the need to install a server.
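
(For the record, the least painful workaround I know of is Python’s built-in development server—one command, or the few lines below in script form. Trivial for me, sure, but it’s still one more tool a beginner has to know exists.)

    # serve.py — a minimal sketch: serve the current directory over HTTP
    # so browser scripts can load "local" files without file:// restrictions.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Equivalent to running "python -m http.server 8000" in this directory.
    HTTPServer(("localhost", 8000), SimpleHTTPRequestHandler).serve_forever()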

It only gets worse from there. Apple has become famous for the total lockdown of its software and hardware. They had to be dragged into opening up to independent app stores, and they’ve done so in the most obtuse way possible, as a form of protest. Google is no better, and is probably far worse; they’re responsible for the browser restriction I mentioned, as well as killing off FTP in the browser, restricting mobile notifications to only use their paid service, and so on. Microsoft? They’re busy installing an AI keylogger into Windows.

We’ve fallen. There’s no other way to put it. The purpose of a computer has narrowed into nothing more than a way to access a curated set of services. Steam games, Facebook friends, tweets and Tiktoks and all the rest. That’s the internet of 2024. There’s very little information there, and it’s so spread out that it’s practically useless. There’s almost no way to participate in its creation, either.

What’s the solution? I wish I knew. To be honest, I think the best thing to do would be a clean break. Create a new internet for those who want the retro feel. Cut it off from the rest, maybe using Tor or something as the only access point. Let it be free of the corrupting influence of corporate greed, while also making it secure against the evils of progressivism. NNTP, SMTP, FTP…these all worked. Bring them back, or use them as the basis for new protocols, new applications, new tools that help us communicate and grow, instead of being ever further restrained.

Twine thoughts

As I mentioned a few months back, I’m writing interactive fiction now. I’ve been planning one called The Anitra Incident, which I envision as a kind of prequel to my Orphans of the Stars novel series. (The second, which I’m actually in the process of writing, is…something else that I’ll never attach my real name to.)

In the previous post, I looked at what I consider the top four tools for creating interactive fiction: Inform 7, Twine, Ren’Py, and Ink. I think I made it clear then why I felt Twine was the best choice for what I’m writing. Now that I’ve been working with it for a while, I have some thoughts to share. These are more of a ramble than even my usual posts here, so bear with me.

Ditch the editor

Twine’s biggest draw is that it has its own editor, with a nifty little drag-and-drop visual tool to organize your stories. It looks good, and it helps to get people interested in creating, rather than whining about how they don’t want to have to learn anything.

But it sucks.

Yes, the editor works just fine for small-scale constructions. Twine divides its stories into passages, which are just that: bits of text that can be anywhere from a few words to an entire chapter, with all the necessary logic for interactivity sprinkled in. A big story with a lot of branching points, arcs, and the like is going to have hundreds, if not thousands, of passages. (Case in point: my unnamed side project has 232 total passages already, and that’s not much more than a set of locations and a handful of conversation scenes.) Trying to keep all that straight will quickly become impossible.

On top of that, the editor’s structure makes it difficult to write code. There isn’t much room for "metadata" on a passage; for the most part, that’s limited to a series of tags, which you have to edit using the "chip" style of tagging that web devs love for some inexplicable reason. But that means you have to put all the code in that little box, even if you’re using a tool that expects tags. In my case, that’s TinyQBN, a library for implementing what the creators of Sunless Sea call "storylets".

I could rant about the editor for another few posts, but I just don’t bother using it, so I won’t bother discussing it further. Yes, setting up a custom workflow is a bit more difficult. Yes, it’s worth it in the end. After doing the work, I can now write my story in Vim and my code in, er, Code. And it all comes out the same, except that I also have better handling of external JS libraries, static analysis tools that can run automatically, and so much more that I’m used to from my life as a developer.
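
(If you’re wondering, the build step itself is nothing special. Mine boils down to something like the sketch below—not my exact script, and it assumes the Tweego compiler is on your PATH with your .twee sources under src/.)

    # build.py — a sketch of a Twine-without-the-IDE build step.
    import subprocess

    # Compile all the Twee source files in src/ into a single playable HTML file.
    subprocess.run(
        ["tweego", "--format=sugarcube-2", "--output=dist/index.html", "src/"],
        check=True,  # stop the build loudly if compilation fails
    )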

People are stupid

Which brings me to my next point. The average Twine user is not a professional developer or a professional author. Worse yet, neither are the Twine power users. As far as I can tell, I’m just about the only one using Twine who does both. Believe me, it shows.

Most Twine tutorials are written for someone who has never so much as looked at code, and who barely even knows what fiction is, let alone how to write it. I don’t know why Twine’s community targets journalists as its intended audience, but that’s how it is.

For someone who knows both fields, it’s just frustrating. I’ve already read the intro material. I know what a macro is. But no one out there is creating any resources for the intermediate or advanced users. How should I structure a story in terms of source files? What are some common design patterns in interactive fiction, and how do I apply them in Twine? When should I break a scene across multiple passages, and what’s the best way to handle that?

I get that much of writing fiction is an art. I’m well aware that there’s no one-size-fits-all method for creating a novel. But to assume that everyone is forever going to be stuck at the beginner stage is doing the rest of us a disservice. I’m aware that zoomers, degenerates, and progressives (the main components of the intfiction.org "community") don’t know how to learn; people who look to Tumblr for knowledge and wisdom have shown pretty definitively that they have neither. Surely somebody out there cares about the rest of us, though.

If not, maybe I should work on that myself.

Wokeness taints everything

Allegedly, the interactive fiction community is thriving, and Twine is a big part of that. In reality, there’s not much of a community. Much like any other hobby (people don’t generally make a living off adventure stories, unless they work for Failbetter), the anti-human rot of progressivism infects every large gathering that would have the chance to become a community. Those of us who prefer free expression to censorship are, as usual, labeled extremists for the radical view that words are just words. Strange for a hobby built around words, but that’s the whole point of the woke ideology: to tear apart any gathering of like-minded individuals by setting them against one another.

So there’s an interactive fiction forum, but it’s so heavily censored that you get banned just for saying something that someone might think is "bad" in some ill-defined way. There’s a group on Reddit, but that’s…well, Reddit. It’s the Mos Eisley of the internet. Your other major option is Discord, which might be even worse!

Interactive fiction started in the days before the web. It became popular because of technologies like Usenet, where you were expected to be civil, yes, but you weren’t coddled. To have its gathering places be nothing more than wastelands of diversity, mere online versions of Portland and Detroit, is just sad.

(This isn’t specific to Twine, mind you. The Inform community goes even further. They stand against not only freedom of speech but also anonymity.)

Tech is tech

Beneath it all, Twine is nothing more than a very weird SPA framework. Sure, you have to compile the source, but the end result is an HTML page and a bunch of assets. It’s like Svelte in a way, except that (as far as I’m aware) the Twine authors don’t openly support child trafficking and religious persecution. As a developer, I think looking at it as a web framework has helped me better understand how to use it as an authoring tool.

This is where my earlier point about getting rid of the Twine IDE as soon as possible comes back into play. Once you abandon that crutch, you realize just how much freedom you have, with all that entails. For my current story, I’ve added the Pure CSS library to help with some layout issues. On my initial draft of The Anitra Incident, I’d used Moment.js for timekeeping; now, somebody finally made a decent native date system macro that does most of what you’d need in a story.

The output is HTML, meaning that you get to use CSS for styling, Javascript for scripting, and all that good stuff. People have managed to integrate Phaser, a 2D sprite-based game engine, into Twine stories, and I’ve been looking at how they did that. I wouldn’t be surprised if somebody even tried combining Twine with React and a full-stack framework. (Come to think of it, that’s not a bad idea. Okay, maybe not React, but Vue and Nuxt…)

One true format

Twine comes prepackaged with a number of "story formats", which are combinations of style templates and authoring DSLs. I briefly went over them in the previous post on this topic. In short, Chapbook is new, and mostly unused. Snowman is not much more than raw Javascript with a parser.

The other two are the most popular: Harlowe and Sugarcube. Harlowe is the default format in the Twine IDE, so it’s the one most newcomers learn first, but I think that’s a horrible decision. If you want to do anything even remotely complex, you’ll quickly run into the limitations of Harlowe. Far worse, however, is the fact that those limitations are by design. The authors, much like Apple, go out of their way to break any attempt at getting outside their sandbox.

In other words, there’s really no reason not to go straight to Sugarcube and stay there. It works. It’s not difficult to pick up. Most of the libraries out there are for it. (A few are format-agnostic, I’ll admit.) And you won’t be supporting the intentional hobbling of technology.

Conclusion

To sum up, then, what I’ve learned about Twine from using it is that it’s a great tool for what it does. It has some extraneous bits, and these are unfortunately the same bits that newcomers are pushed towards. If you’re willing to take the time to set up your own dev environment, use Sugarcube and a compiler like Tweego, and live with the fact that you’ll get no help from the community beyond "here’s how to make text red" and "here’s how to let your players make up their own words to use as pronouns", you won’t have any problems.

Writing a novel is a lot of work. Writing a program is a lot of work. Trying to do both, which is all interactive fiction really is, can be a monumental undertaking. But it’s fun, too. That’s what I’ve discovered in the past few months.

Dumbing down tech

I recognize that I’m smarter than most people. I’ve known that as long as I can remember. When I was six years old, I took a standardized IQ test. You know, the kind whose results are actually usable. I apparently scored a minimum of 175; it wasn’t until a few years later, when I was studying statistics, that I understood both what that meant in relation to society at large and why it had to be a minimum. (IQ is a relative measurement, not an absolute one. Once you get to a certain point, small sample sizes make a precise evaluation impossible.)

There is, of course, a big difference between intelligence and wisdom, though I like to think I also have a good stock of the latter. In some fields, however, the intelligence factor is the defining one, and tech is one of those fields. Why? Because intelligence isn’t just being able to recite facts and formulas. It’s about reasoning. Logic, deduction, critical thinking, and all those other analytical skills that have been absent from most children’s curricula for decades. While some people literally do have brains that are better wired for that sort of thinking—I know, because I’m one of them—anyone can learn. Logic is a teachable skill. Deductive reasoning isn’t intuition.

Modern society, in a most unfortunate turn of events, has deemed analytical thinking to be a hindrance rather than an aid. While public schooling has always been about indoctrination first, and education a very distant second, recent years have only made the problem both more visible and more pronounced. I’ll get back to this point in a moment, but it bears some consideration: as a 40-year-old man, I grew up in a society that was indifferent to high intelligence, but I now find myself living in one that is actively hostile to it.


I’ve always enjoyed reading tech books. Even in the age of Medium tutorials, online docs, and video walkthroughs, I still find it easiest to learn a new technology from a book-length discussion of it. And these books used to be wonderful. Knuth’s The Art of Computer Programming has parts that are now approaching 60 years old, yet it’s still relevant today. The O’Reilly programming language books were often better than a language’s official documentation.

It’s been a while since I really needed to read a book for a new technology. I’ve spent the past few years working with a single stack that doesn’t have a lot of "book presence" anyway, and the solo projects I’ve started have tended to use things I already knew. Now that I’m unemployed and back to the eternal job hunt, though, I wanted to look for something new, and I was tired of looking at online resources that are, by and large, poorly written, poorly edited, and almost completely useless beyond the beginner level. So I turned to books, hoping for something better.

I didn’t find it.

One book I tried was Real World OCaml. For a few years, I’ve been telling myself that I should learn a functional language. I hate functional programming in general, because I find it useless for real-world problems—the lone exception is Pandoc, which I use to create my novels, because text transformation is one of the few actual uses for FP. But it’s become all the rage, so I looked at what I felt to be the least objectionable of the lot.

The language itself isn’t bad. It has some questionable decisions, but it’s far more palatable than Haskell or Clojure. That comes from embracing imperative programming as valid, meaning that an OCaml program can actually accomplish things without getting lost in mathematical jargon or a sea of parentheses.

But the book…well, it wasn’t bad. It just didn’t live up to its title. There wasn’t much of the real world in it, just the same examples I’d get from a quick Brave search. The whirlwind tour of the language was probably the best part, because it was informative. Tech books work best when they inform.


Okay, maybe that’s a one-off. Maybe I ran into a bad example. So I tried again. I’m working on Pixeme again, the occasional side project I’ve rewritten half a dozen times now, and I decided that this iteration would use the stack I originally intended before I got, er, distracted. As it turns out, the authors of the intriguing Htmx library have also written a book about it, called Hypermedia Systems.

This was where I started getting worried. The book is less about helping you learn their library and more about advancing their agenda. Yes, that agenda has its good parts, and I agree with the core of it, that a full-stack app can offer a better experience for both developers and users than a bloated, Javascript-heavy SPA. The rest of it is mostly unhelpful.

As someone who has been ridiculed for pronouncing "GIF" correctly (like the peanut butter, as the format’s author said) and fighting to keep "hacker" from referring to blackhats, I have to laugh when the authors try to claim that a RESTful API isn’t really REST, and use an appeal to authority to state that the term can only apply to a server returning HTML or some reasonable facsimile.

Advocacy aside, the book was unhelpful in other ways. I can accept that you feel your technology is mostly for the front end, so you don’t want to bog down your book with the perils and pitfalls of a back-end server. But when you’re diving into a method of development that requires a symbiotic relationship between the two, using the academic "beyond the scope" cop-out to wall off any discussion of how to actually structure a back end to benefit from your library is doing your readers—your users—a great disservice. If the scope of your book doesn’t include patterns for taking advantage of a "hypermedia" API, then what does it include? A few new HTML attributes and your whining that people are ignoring a rant from three decades ago?


Alright, I thought after this, I’ll give it one more shot. Never let it be said that I’m not stubborn. The back end of this newest version of Pixeme is going to use Django. Mostly, that’s because I’m tired of having to build out or link together all the different parts of a server-side framework that FastAPI doesn’t include. Things like logins, upload handling, etc. I still want to use Python, because that’s become the language I’m most productive in, but I want something with batteries included.
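
To make "batteries included" concrete, the sketch below is the sort of thing I mean: login and logout with no third-party packages, using nothing but Django’s bundled auth app. (A sketch, not production code; it assumes a standard project with django.contrib.auth installed.)

    # urls.py — Django's bundled auth views, wired up in a few lines.
    from django.contrib.auth import views as auth_views
    from django.urls import path

    urlpatterns = [
        # Login and logout come free; only the login template is yours to write.
        path("login/", auth_views.LoginView.as_view(), name="login"),
        path("logout/", auth_views.LogoutView.as_view(), name="logout"),
    ]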

The official documentation for Django is an excellent reference, but it’s just that: a reference. There’s a tutorial, but it ends very quickly and offers almost no insight on, say, best practices. That, for me, is the key strength of a tech book: it has the space and the "weight" to explain the whys as well as the hows. So I went looking for a recent book on the topic, and I ended up with Ultimate Django for Web App Development Using Python. A bit of a mouthful, but it’s so new that it even uses the "on X, formerly Twitter" phrasing that mainstream media has adopted. (Seriously, nobody in the real world calls it X, just like nobody refers to the Google corporate entity as Alphabet.)

In this case, the book is somewhat informative, and it functions a lot like an expanded version of the official Django tutorial. If you’re new to the framework, then it’s probably not a bad guide to getting started. From something with "ultimate" in the title, I just expected…more. Outside of the tutorial bits, there’s not much there. The book has a brief overview of setting up a Docker container, but Docker deserves to be wiped off the face of the earth, so that’s not much help. And the last chapter introduced Django Ninja, a sort of FastAPI clone that would be incredible if not for the fact that its developers support child trafficking and religious persecution.

Beyond that, the text of the book is littered with typos and grammatical errors. Most of these cases have the telltale look of an ESL author or editor, a fact which is depressingly common in tech references of all kinds nowadays. Some parts are almost unreadable, and I made sure to look over any code samples I wanted to use very carefully. It’s like dealing with ChatGPT, except here I know there was a real human involved at some point, someone who looked at the text and said, "Yeah, this is right." That’s even worse.


Three strikes, and I’m out. Maybe I’m just unlucky, or maybe these three books are representative of modern tech literature. If it’s the latter, that only reinforces my earlier point: today’s society rewards mediocrity and punishes intelligence, even in fields where intelligence is paramount.

Especially in programming, where there is no room for alternate interpretations, the culture of "good enough" is not only depressing but actively harmful. We laugh wryly at the AAA video game with a 200 GB install size and a 50 GB patch on release day, but past experience shows that it doesn’t have to be that way. We can have smart developers. As with any evolutionary endeavor, we have to select for them. Intelligence needs to be rewarded at all stages of life. Otherwise, we’ll be stuck where we are now: with ESL-level books that recapitulate tutorials, screeds disguised as reference texts, and a thousand dead or paywalled links that have nothing of value.

As a case in point, I was looking just yesterday for information about code generation from abstract syntax trees. This is a fundamental part of compiler design, something every programming language has to deal with at some point. Finding a good, helpful resource should be easy, right?

Searching the web netted me a few link farms and a half-finished tutorial using Lisp. Looking for books wasn’t much better, because the only decent reference is still the Dragon Book, published in 1986! Yes, the state of the art has certainly advanced in the past 38 years, but…good luck finding out how.
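
For the sake of anyone searching for the same thing: what I wanted was material that builds on the kind of toy example below—walking a syntax tree and emitting instructions for a stack machine. (I’m borrowing Python’s own ast module here purely for convenience; a real compiler would define its own tree.)

    import ast

    # Compile arithmetic expressions to a toy stack machine.
    def codegen(node, out):
        if isinstance(node, ast.Expression):
            codegen(node.body, out)
        elif isinstance(node, ast.BinOp):
            # Post-order walk: operands first, operator last.
            codegen(node.left, out)
            codegen(node.right, out)
            ops = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL", ast.Div: "DIV"}
            out.append(ops[type(node.op)])
        elif isinstance(node, ast.Constant):
            out.append(f"PUSH {node.value}")
        else:
            raise NotImplementedError(type(node).__name__)

    program = []
    codegen(ast.parse("1 + 2 * 3", mode="eval"), program)
    print("\n".join(program))  # PUSH 1, PUSH 2, PUSH 3, MUL, ADD

Getting from there to instruction selection, register allocation, and optimization is exactly the part nobody seems to write about anymore.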

That’s what needs to change. It isn’t only access to information. It isn’t only that this information isn’t being written down. It’s a confluence of factors. All of it happening all at once is making us dumber as a people. Worst of all is that we accept it. Whether you consider it the "price of democracy" or simply decide that there’s nothing you can do about it, accepting this rot has only let it fester.

Lands of the lost

Recently, I finished reading Fingerprints of the Gods. I picked it up because I found the premise interesting, and because the mainstream media made such a big deal about author Graham Hancock getting a Netflix miniseries to showcase his unorthodox theories. I went into the book hoping there would be something tangible about those theories. Unfortunately, there isn’t.

Time of ice

The basic outline of the book is this: What if an advanced civilization existed before all known historical ones, and imparted some of its wisdom to those later civilizations as a way of outliving its own demise?

Put like that, it’s an intriguing proposition, one that has cropped up in many places over the past three decades. The Stargate franchise—one of my favorites, I must admit—is based largely on Hancock’s ideas, along with those of noted crackpots like Erich von Daniken. Chrono Trigger, widely regarded as one of the greatest video games of all time, uses the concept as a major plot point. Plenty of novels, especially in fantasy genres, suppose an ancient "builder" race or culture whose fingerprints are left within the world in some fashion.

It was this last point that piqued my interest, because my Otherworld series revolves around exactly this. And I even unknowingly used some of Hancock’s hypotheses for that. The timing of my ancients leaving Earth for their second world matches that of his ancients’ final collapse. The use of archaeoastronomy as a path to their knowledge appears in my books, too. Even using the prehistoric Mesoamericans as the catalyst wasn’t an original idea of mine; in my case, however, I did it so I wouldn’t have to deal with the logistics of the characters traveling to another continent.

Some of the questions Hancock asks are ones that need to be asked. It’s clear that ancient historical cultures the world over have some common themes which arise in their earliest mythology. Note, though, that these aren’t the specific ones he lists. The flood of Noah and Gilgamesh is entirely different from those of cultures beyond the Fertile Crescent and Asia Minor, for example, because it most likely stems from oral traditions of the breaking of the Bosporus, which led to a massive expansion of the Black Sea. Celts, to take one instance, would instead have a flood myth pointing to the flooding of what is now the Dogger Bank; peoples of New Guinea might have one relating to the inundation of the Sunda region; American Indian myths may have preserved echoes of the flooding of Beringia; and so on.

While the details Hancock tries to use don’t always work, the broad strokes of his supposition have merit. There are definitely astronomical alignments in many prehistoric structures, and some of them are downright anachronistic. Too many indigenous American cultures have myths about people who most definitely are not Amerind. (And now I’m wondering if Kennewick Man was a half-breed. I may need to incorporate that into a book…)

The possibility can’t yet be ruled out that cultures with technology more advanced than their direct successors did exist in the past. We know that Dark Ages happen, after all. We have historical records of two in the West (the familiar medieval Dark Age beginning c. 500 AD and the Greek Dark Age that started c. 1200 BC), and we’re very likely on the threshold of what might one day be termed the Progressive Dark Age.

With the cataclysmic end of the Ice Age and the catastrophic Younger Dryas cold snap, which now seems likely to have been caused by at least one asteroid impact, there’s a very good impetus for the "breaking the chain" effect that leads to a Dark Age, one that would erase most traces of such an advanced civilization.

Habeas corpus

Of course, the biggest problem with such a theory is the lack of evidence. Even worse, Hancock, like most unorthodox scholars, argues from an "absence of evidence is not evidence of absence" line of thought. Which is fine, but it’s not science. Science is about making testable and falsifiable predictions about the world. It’s not simply throwing out what-ifs and demanding that the establishment debunk them.

The onus is on those who make alternative theories, and this is where Hancock fails miserably. Rarely in the book does he offer any hard evidence in favor of his conjecture. Instead, he most often uses the "beyond the scope of this book" cop-out (to give him credit, that does make him exactly like any orthodox academic) or takes a disputed data point as proof that, since the establishment can’t explain it, that must mean he’s right. It’s traditional crackpottery, and that’s unfortunate. I would’ve liked a better accounting of the actual evidence.

Probably the most disturbing aspect of the book is the author’s insistence on taking myths at face value. We know that mythology is absolutely false—the Greek gods don’t exist, for example—but that it can often hide clues to historical facts.

To me, one of the most interesting examples of this is also one of the most recent: the finding in 2020 of evidence pointing to an impact or airburst event near the shore of the Dead Sea sometime around 1600 BC. This event apparently not only destroyed a town so violently that it vaporized human flesh, but also scattered salt from the sea over such a wide region that it literally salted the earth. And the only reference, oral or written, to this disaster is as a metaphor, in the Jewish fable of Sodom and Gomorrah.

Myths, then, can be useful to historians and archaeologists, but they’re certainly not a primary source. The nameless town on the shore of the Dead Sea wasn’t wiped out by a capricious deity’s skewed sense of justice, but by a natural, if rare, disaster. Similarly, references in Egyptian texts to gods who ruled as kings don’t literally mean that those gods existed. Because they didn’t.

In the same vein, Hancock focuses too much on numerological coincidences, assuming that they must have some deeper meaning. But the simple fact is that many cultures could independently hit upon the idea of dividing the sky into 360 degrees. It’s a highly composite number, after all, and close enough to the number of days in the year that it wouldn’t be a huge leap. That the timeworn faces of the Giza pyramids currently stand in certain geometric ratios doesn’t mean that they always did, that they were intended to, or that they encode a message from ten thousand years ago.

Again, the burden of proof falls on the one making the more outlandish claims. Most importantly, if there did exist an ancient civilization with enough scientific and technological advancement to pose as gods around the world, there should be evidence of their existence. Direct, physical evidence. An industrial civilization puts out tons upon tons of waste. It requires natural resources, including some that are finite. The more people who supposedly lived in this Quaternary Atlantis, the more likely it is that we would have stumbled upon someone’s remains by now.

Even more than this, the scope of Hancock’s conjecture is absurdly small. He draws most of his conclusions from three data points: Egypt, Peru, and Central America. Really, that’s more like two and a half, because there were prehistoric connections between the two halves of the Americas—potatoes and corn had to travel somehow. Rarely does he point to India, where Dravidians mangled the myths of the Yamnaya into the Vedas. China, which became literate around the same time as Egypt, is almost never mentioned. Did the ancients just not bother with them? What about Stonehenge, which is at least as impressive, in terms of the necessary engineering, as the Pyramids?

Conclusion

I liked the book. Don’t get me wrong on that. It had some thought-provoking moments, and it makes for good novel fodder. I’ll definitely have to make a mention of Viracocha in an Otherworld story at some point.

As a scientific endeavor, or even as an introduction to an unorthodox theory, it’s almost useless. There are too many questions, too few answers, and too much moralizing. There’s also a strain of romanticism, which is common to a lot of people who study archaeological findings but aren’t themselves archaeologists. At many points, Hancock denigrates modern society while upholding his supposed lost civilization as a Golden Age of humanity. You know, exactly like Plato and Francis Bacon did.

That said, it’s worth a read if only to see what not to do. In a time when real science is under attack, and pseudoscience is given preferential treatment in society, government, and media, it’s important to know that asking questions is the first step. Finding evidence to support your assertions is the next, and it’s necessary if you want to advance our collective knowledge.

Looking ahead: 2024

So we’ve made it to the end of another year. Somehow, "we" includes me, despite all odds. Since I never planned to see the end of 2023, I never really thought about what I wanted to accomplish in 2024. That means this post is going to be more of a stream of consciousness than usual, as I try to work out just what I feel I can reasonably manage in the year ahead.

Recap

First, to recap my goals for this year. I never did go back and finish The Prison of Ignorance, unfortunately. So we’ll put that at the top of the pile for 2024. The same goes for the draft of On the Stellar Sea; Pitch Shift is more of a stretch goal, because I feel like writing has become more of a chore than a passion these past few years.

The "Great Books" idea was great…in theory. In practice, I only managed 6 of the 12. That wasn’t because I didn’t want to read. But trying to hold a relationship together when you constantly have to fight for it tooth and nail, well, that takes a lot out of a man my age. Between spending the morning looking for a job to support myself and my beloved, spending the evening with her, and spending the afternoon working on solo projects to make myself look employable, there’s not much time for casual reading. I still want to read the classics. It’s just finding the chance to do so.

Last, development has been hit or miss this entire year. I did work on Pixeme, but it’s not quite to an alpha release. It doesn’t need much more, so maybe I can get that out the door soon.

What’s next

Thus, most of 2024’s goals are the ones I didn’t achieve in 2023, but I still have a couple more to add to the list. First, I’m working on interactive fiction again, and there’s one I’m writing that I’d like to get done soon. If that goes well, I know I can write The Anitra Incident without much trouble.

Second, while Pixeme is probably the most "marketable" of my solo projects, I’ve recently been wanting to revamp Liblio, the federated creator platform I worked on all the way back in 2019. It was a decent pre-alpha back then, but technology has advanced, and I’m more experienced. There’s an underlying motivation, too: Patreon, Twitch, Youtube, and all the other homes for "content creators" have become even more repressive and regressive in the past few years.

The whole point of Liblio—and, for that matter, any communication application I design—is to allow people to connect without fear of censorship. As centralized platforms become more censorious by the day, I feel that this is even more needed. For that reason, I think Liblio is the better option for benefiting humanity as a whole, even if Pixeme has a greater chance of benefiting me personally.

There’s another project I’m working on, which I’ve called Clef. It is, in effect, a messaging protocol for local applications. My idea here is to abstract APIs by using servers to present standardized request/response messages. Instead of linking to, for example, a video encoder library (or calling it through a fragile series of shell commands), an application could just send a Clef request saying, in effect, "I want this file transcoded to MKV." The receiving server doesn’t even have to be local, though it probably will be. And it doesn’t have to matter whether the encoding was done by FFmpeg, VLC, or whatever. That choice would be up to the server, the abstraction removing one more decision a developer has to make.
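
Since nothing about the protocol is settled yet, here’s a purely illustrative sketch of what a client request might look like. Every detail—the JSON encoding, the port, the field names—is a placeholder I’m inventing for this example.

    import json
    import socket

    # A hypothetical Clef request: ask for a capability, not a program.
    request = {
        "clef": "0.1",            # protocol version (made up)
        "service": "transcode",   # abstract capability, not "ffmpeg"
        "args": {"input": "clip.webm", "container": "mkv"},
    }

    with socket.create_connection(("localhost", 7777)) as sock:
        sock.sendall(json.dumps(request).encode() + b"\n")
        reply = json.loads(sock.makefile().readline())
        # Whether FFmpeg, VLC, or something else did the work is the
        # server's business; the client only ever sees the reply message.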

For 2024, then, the goal is to have at least 3 applications written to communicate using Clef. One is a simple client demonstrating the possibilities. The second is a server with a switchable back-end, similar to the video encoding scenario. And third, I’d like to write a "metaserver" that can be used to register and discover Clef-aware services on a user’s local machine.

Maybe these aren’t the most ambitious goals for the year, but…I’m tired of ambition. I’m tired, period. Just making it to 2024 has been more than I expected. Anything else I can accomplish seems to pale in comparison to just living another day, week, month, year.

Interactive fiction revisited

I’ve always been one to do things just to say I did them. It’s why I became an author, why I ran for office last year, and why I still, despite having failed on multiple occasions, try to create electronic music. (Now I really want to get back into LMMS…)

I’ve also felt that teaching programming is an important goal. Not because I believe everyone should, or even can, become a developer, but because the critical thinking, reasoning, and logical skills necessary to write code are in short supply throughout society these days. If young people learned a little about programming, my thinking goes, that would better prepare them to look at every other part of the world in the same way.

These two desires of mine combine in a few very narrow ways. I’ve tried writing pedagogical programming languages, for example, and I’ve urged those I feel most receptive to try out Scratch, Grasshopper, and other teaching tools.

For the most part, that hasn’t worked. But lately I’ve been getting back into the idea of creating interactive fiction. For those who don’t know, this is a nebulous catch-all term for visual novels, old-school text adventures, and a few other types of games. (For those who disagree with me calling them "games", you’re wrong, because they’re games by any reasonable definition.)

Interactive fiction isn’t so much a genre as it is a medium, but all types have something in common: they use programming to turn simple prose into something a player can interact with. Some work by presenting the user a list of choices. Some, like the older text adventures, are played by typing commands. This isn’t so much a dichotomy as it is a spectrum; "choice-based" games can incorporate a parser. Thanks to the power of programming—every Turing-complete language is equally capable—there are no absolutes.

But there are differences. As I prepare to write, and in some cases rewrite, my first piece of interactive fiction, The Anitra Incident, I’ve studied the tools available, searching for the ones that fit me best, and the ones that work for the needs of the story. In that process, I’ve come to see four of them as standing above the rest, each for a different reason.

Inform 7

Inform 7 is the king of "parser-based" interactive fiction. It continues the tradition of old-fashioned text adventures like Zork, occasionally updating the formula to work better with modern computing. Programming is done in a language intended to vaguely resemble English prose. Games are compiled to an antiquated virtual machine and run through an interpreter, which can be anything from a web browser to a native app to an executable on an old Amiga floppy disk.
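
For a taste of that prose-like style, here’s a minimal snippet. The rooms and the keycard are my own invention, just to show the shape of the language:

    "Example" by Nobody

    The Bridge is a room. "The cramped bridge of a small freighter."
    The Cargo Hold is south of the Bridge.

    A keycard is in the Cargo Hold. The description of the keycard is
    "A scuffed plastic card, edges worn smooth."

That alone compiles to a playable game: the player can GO SOUTH, TAKE KEYCARD, and EXAMINE it, with all the parser plumbing handled for you.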

Until last year, I wouldn’t even look at Inform 7 for development, for one very specific reason: it was closed-source. I don’t use closed-source tools for any other part of my development (Python, Vim, Clang, Git, open-source VS Code forks—every tool I use is freely available), so I was happy to finally have the chance to explore Inform when it was released under the Artistic License in 2022.

The good:

  • Being mature is a good thing in programming, and resisting the temptation to add faddish things just to keep up with trends is a noble goal. Inform has, as far as possible, perfected the parser style of interaction.
  • Inform serves as a de facto introduction to text adventures, so it has a large community, with lots of extensions and examples to draw from.
  • Tools like Vorple allow the intentionally limited language to access the rich multimedia features of modern web browsers, which opens up a whole new world of interaction.

The bad:

  • The Inform 7 programming language is just different enough from English that you can’t really write it as prose, and it’s peculiar enough in its function that you can’t take it as just another programming language.
  • While the primary documentation is vast, it’s also horrible. The developers’ guide, called Writing With Inform, is baroque in the extreme, and it’s written in a stuffy British style that gives me the impression of a Brontë character sneering at the rabble who would dare to write code.
  • The community seems to embody that same style, turning their noses up at the perceived limitations of non-parser adventure games.

Overall, Inform 7 isn’t bad. It excels in a very narrow niche: anything that resembles Zork, Colossal Cave Adventure, and old text adventures of that sort. If you want to write something that isn’t based around puzzles, rooms, and the guess-the-verb game of using a text parser, though, you’re going to fight the system every step of the way. And you’ll be doing it without much help.

Twine

Twine is, in many ways, the opposite of Inform 7. It’s been around a long time, and it embraces the open community that comes from being open source. Instead of being built around a parser, it uses the concept of passages, linking between them mostly through player interaction. (That makes it "choice-based", in the parlance of the interactive fiction community. Problem is, "choice-based" is used mostly as a slur, from what I’ve seen.)

For programming, Twine allows a variety of "story formats", which all build on a core set of capabilities. In the default installation, you have four options:

  • Chapbook, which I’ve never used
  • Harlowe, hobbled by design to the point of uselessness
  • Snowman, a too-thin veneer over JavaScript
  • SugarCube, an HTML-looking middle ground

I chose SugarCube because it comes closest to the sweet spot: powerful and extensible, while still providing a decent standard library.

There’s an editor for Twine, but you can also use the Tweego compiler and just write your games in a text editor or IDE, which is what I do. Output is an HTML file plus some ancillary JavaScript and CSS, reminiscent of a single-page app of the kind you’d make with React or Vue.
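
As a quick taste, here’s a pair of SugarCube passages in the .tw format Tweego reads. The passages and the variable are invented for illustration, and the IFID in StoryData is a random placeholder (Tweego can generate a real one for you):

    :: StoryData
    {
        "ifid": "6C9D6E4C-2A6D-4C70-8E1D-3A2B4C5D6E7F",
        "format": "SugarCube",
        "format-version": "2.36.1"
    }

    :: Start
    <<set $fuel to 3>>
    The shuttle hums beneath you.

    [[Check the fuel gauge->Gauge]]

    :: Gauge
    <<if $fuel gt 0>>Enough for one more jump.<<else>>The needle sits on empty.<</if>>

    [[Back to the cockpit->Start]]

Something like tweego -o story.html story.tw then spits out a self-contained HTML file you can open in any browser.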

The good:

  • Twine is easy to get started with. The editor is friendly, and the output looks nice even by default.
  • SugarCube is actually decent, as long as you treat it like any other templating language. Think of it like Jinja, for example. The built-in macros cover most of what you’re going to need, and making your own isn’t that hard (see the sketch after this list).
  • The Twine community is almost as big as Inform’s, and there are a lot of tutorials for getting started.
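
On that macro point: defining your own is a few lines of JavaScript in the story’s script section. This one is a throwaway example of mine, but Macro.add and the handler context are the real SugarCube API:

    /* Goes in the Story JavaScript section (or a separate .js file with
       Tweego). A throwaway example, but Macro.add is the real API. */
    Macro.add('shout', {
        handler: function () {
            var text = String(this.args[0] || '');
            /* .wiki() renders markup into this macro's output slot. */
            $(this.output).wiki(text.toUpperCase() + '!');
        }
    });

After that, <<shout "hello there">> prints HELLO THERE! in any passage.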

The bad:

  • You’ll quickly outgrow the editor, but setting up a Tweego dev environment isn’t trivial.
  • Although the community is big, the differing story formats mean it’s also fractured. So you’ll often find someone asking exactly the question you were going to ask…but they’re using Harlowe, so the answers they get won’t help you.
  • As with Inform, the documentation assumes you’re a programming newbie, and there’s little out there for those of us who know how to write code (and prose, for that matter!) but want to know how to write this kind of code.

My overall opinion of Twine is positive. I think it’s the best gateway to interactive fiction for two reasons. One, it’s more accessible than Inform, both for making a game and for playing it. Two, Twine offers more room to grow, at least if you’re using SugarCube or Snowman.

Ren’Py

Ren’Py describes itself as a "visual novel engine". Visual novels are probably the most popular type of interactive fiction nowadays, especially in the anime fandom. In fact, some big indie games in recent years, like Doki Doki Literature Club, are nothing more than visual novels. Of all the ways to create this type of game, Ren’Py tends to get the most press, so I’ve taken multiple looks at it over the years.

Programming a Ren’Py novel is done using a Python-based DSL that directly exposes the tropes of the medium. So, for example, you can define characters, and the engine will display them alongside their dialogue. The final result is a native executable for the platform of your choice, and there’s a web export currently listed as beta.
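
Here’s the flavor of it: a trivial script with a made-up character. The scene and show statements assume image assets you’d have to provide yourself.

    # Minimal Ren'Py script; the character and dialogue are made up.
    define m = Character("Mira", color="#c8ffc8")

    label start:
        scene bg bridge     # assumes an image named "bg bridge" exists
        show mira smile     # likewise for a "mira smile" sprite

        m "We're coming up on the relay now."

        menu:
            "Hail the station":
                m "Opening a channel."
                return
            "Stay silent":
                m "Fine. We wait."
                return

Once m is defined, every m "..." line renders as Mira speaking, name tag and all. That’s what I mean by the DSL exposing the medium’s tropes.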

The good:

  • Ren’Py is simple to get started with. The tutorial is actually a complete visual novel, and it has more content than some I’ve seen.
  • The engine is geared toward multimedia. You don’t have to worry about "What if the player’s using an old version of mobile Safari?" as with Twine, or Inform’s "What if they want to play on a C64?" You just use your art and assets like you would any "real" game.
  • Python is, in my opinion, one of the easiest programming languages out there, so extensibility is not that difficult.

The bad:

  • The documentation is horribly lacking. Outside of the basic tutorial, there’s almost nothing official to go on, apart from API docs.
  • Ren’Py is very much a visual novel engine, and it shows. If you want to write anything else, you’re going to struggle.
  • The English-speaking community isn’t as big as Twine’s or Inform’s; many, if not most, developers are Japanese, meaning that language barriers are always going to be in the way.

I can’t really recommend Ren’Py for general use, but if you want to make a visual novel, it’s unparalleled. Well, I assume it is. As bad as the documentation is, it’s sometimes hard to tell.

Ink

Ink is the fourth and final option I’ve considered. Calling itself a "narrative scripting language", Ink’s niche is in the Choose Your Own Adventure and "branching narrative" space. In that sense, it can be seen as a very simplified Twine. But it’s also designed to be embedded. Unlike the others on this list, where you’re expected to make a game by extending their capabilities, Ink expects you to extend it by putting it into the game you’re making.

That’s a big difference, and it’s why Ink is so hard to classify. On one hand, it can be seen as little more than a dialogue library for games. On the other, it has enough power to create interactive fiction by itself. The Ink compiler offers a web export option, and the result qualifies as a game in its own right. The JSON export, however, is probably what most games that aren’t pure interactive fiction will use.
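
For reference, here’s what standalone Ink looks like; the knots and choices are invented for illustration:

    The airlock light blinks amber.

    * [Cycle the airlock] -> airlock
    * [Check the manifest] -> manifest

    == airlock ==
    The outer door grinds open onto hard vacuum. -> END

    == manifest ==
    Three crates. The log says two. -> suspicious

    == suspicious ==
    Something doesn't add up. -> END

Compile that with inklecate and you can play it directly, or take the JSON output and load it into a host game through one of the runtime libraries.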

The good:

  • Ink’s syntax is very clean and sparse, so the "I know I can’t code" people have little to worry about.
  • The embedding option is the killer feature for non-solo work, because Ink is by far the easiest to integrate with any other game development engine/library/whatever. (There’s a sketch of this after the list.)
  • Also unlike the other options on my list, Ink has corporate backing while still being open source. That means there’s always going to be some quality control, if only because the game studios using it will expect it…and pay for it.
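
To show what embedding looks like in practice, here’s a sketch using inkjs, the JavaScript runtime. The file name is a placeholder, but the Story calls (canContinue, Continue, currentChoices, ChooseChoiceIndex) are the real API:

    // Sketch of driving a compiled Ink story from JavaScript via inkjs.
    import { Story } from 'inkjs';
    import storyContent from './story.json'; // output of the Ink compiler

    const story = new Story(storyContent);

    // Print narration until we reach a choice point.
    while (story.canContinue) {
        console.log(story.Continue());
    }

    // List the choices, then pick one (a real game would ask the player).
    story.currentChoices.forEach((choice, i) => {
        console.log(`${i}: ${choice.text}`);
    });
    story.ChooseChoiceIndex(0);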

The bad:

  • Ink is only really good for branching narration and dialogue. That severely limits its niche when using it alone.
  • The engine integrations are pushed really hard, but Unity is the officially blessed one. If you’ve followed Unity news lately, you know that’s a disaster waiting to happen.
  • Outside of Ink’s developers, there’s not much of a community.

Of the four options on this list, Ink is the best if you’re working directly with anything else. Want to make a Godot game with some CYOA-style interaction? This is the top choice. But anything more complex isn’t going to be done with Ink alone, and learning an entire game engine, with all that entails, is probably too much for a single dev working on a passion project.

Conclusion

Those are my thoughts on four of the most popular interactive fiction development systems. I have other thoughts on the medium as a whole, but I’ll save them for later.

40

Every year, I write a post for my birthday. I talk about the things I’ve accomplished in the past year, what I hope to do in the next, and generally use the time as a chance to get some weight off my chest or some ideas out of my head.

This one, however, wasn’t supposed to happen.

That’s not a joke or a flippant comment. I really, truly did not believe I would be alive on my 40th birthday. As recently as two weeks ago I was still somewhat unsure whether I would wake up this morning. With my depression, my lack of income, and the generally declining state of things in my life over the past few years, I spent a lot of time wondering how (not "if" or "when"; I already knew the answers to those) I should end it.

The intermittent and cryptic posts on here in 2021 and 2022 were part of a countdown that started in my head almost a decade ago and would have ended last Friday. And, if things had turned out differently, I would have ended then, too.

I obviously didn’t. Part of me is glad, but a much smaller portion wonders why. I don’t have an actual job, or any legitimate hope of getting one in the near future. I don’t have any visible path forward for the life I want to live. I remain in an occupied country, where I live as a persecuted minority and an effective second-class citizen.

At some point, anyone rational will wonder, after facing such hardship and privation for so long, whether to keep going. It’s only natural. And I’ve heard plenty of so-called motivational speeches trying to urge me forward. "Find your path," they often say. Well, the simple fact is: sometimes there isn’t one. Or if there is, it’s blocked by forces beyond our control.

I’ve been a developer and an author. I’ve been a freelancer and a business owner. I’ve worked for nothing, and I’ve worked at the C-level. I’ve worn a lot of hats, sometimes too many at once. In every case, however, I’ve only ever seen work as a means to an end, a way to help me become what I really want to be. A husband, a father, and a creator.

Some people want to change the world, to leave a mark that lasts throughout history. I’d be content with something much smaller, something that I feel too many take for granted. But what holds me back from that future is not of my own making. In every case, it’s society, or the world at large, that stops my progress.

I don’t believe in fate or destiny, or in some grand conspiracy stacked against me. In my mind, these problems are not the work of some cabal—though they may be caused by the actions of one—but simple bad luck. I was born in the wrong place, or at the wrong time, to be successful. Every little scrap of good that I’ve found in my life has been earned only through herculean levels of effort. I’m living proof that pulling yourself up by the bootstraps is an antiquated notion that no longer applies to the modern world.

Since I never planned to reach this point in my life, I don’t know what I’ll do next. I still have a few projects I’m working on: Borealic, the Godot games, Concerto, and so on. I want to get back into writing at some point, to finish On the Stellar Sea and Pitch Shift. And who knows? Maybe my old boss will finally give me the rest of my back pay, so I can start up that gaming shop I’ve been wanting for the last 5 years.

Whatever it is, I’m in uncharted territory now. The terra incognita of life, as far as I’m concerned. Whether it’s a "Here be dragons" kind of mystery place, a bounteous land of opportunity, or an "Abandon all hope, ye who enter" type, I can’t yet tell. I guess I’ll find out along the way.