Themis Dev Diary #2

It’s been a few weeks, and this project of mine is still moving along. Maybe not as fast as I would like, but I am making progress. Since the last post, I’ve spent much of my coding time working on what I consider the biggest feature of Themis: filtering. Here, I want to talk a little bit about what I mean, and why it’s so important.

My computer, my rules

Today, essentially every discussion platform is moderated. What that means depends on the place, but let’s boil it down to its essence. Moderation is censorship, plain and simple. Sometimes it’s necessary, sometimes it serves a purpose, but a moderated community is one that has decided, either by collective choice or external fiat, to disallow certain topics. More importantly, the administrators of the platform (or their anointed assistants) have the power to remove such content, often without debate or repercussion.

Removing the users that post the prohibited content is the next step. If online communities were physical, such suspensions would be the equivalent of banishment. But a much larger site like Facebook or Twitter, so integrated into the fabric of our society, should be held to a higher standard. When so much in our lives exists only in these walled-off places, banning is, in fact, more akin to a death sentence.

It is my strong belief that none of this is necessary. Except in the most extreme cases—automated spamming, hacking attempts, or illegal content that disrupts the infrastructure of the site—there really isn’t a reason to completely bar someone from a place simply because of what others might think. Themis is modeled on Usenet, and Usenet didn’t have bans. True, your account on a specific server could be locked, but you could always make a new one somewhere else, yet retain the ability to communicate with the same set of people.

This is where Facebook, et al., fail by design. Facebook users can only talk to each other. You can’t post on Twitter timelines unless you have a Twitter account. On the other hand, the “fediverse” meta-platform of Mastodon, Pleroma, etc., returns to us this ability. It’s not perfect, but it’s there, which is more than we can say for traditional social media.

Out of sight, out of mind

But, you may be thinking, isn’t that bad? If nobody wants to see, say, propaganda from white supremacists in their discussions, then how is discussion better served by allowing those who would post that content to do so?

The answer is simple: because some people might want to see that. And because what is socially acceptable today may become verboten tomorrow. Times change, but the public square is timeless. As the purpose of Themis is to create an online public space, a place where all discussion is welcome, it must adhere to the well-known standards of the square.

This is where filtering comes in. Rather than give the power of life and death over content to administrators and moderators, I seek to place it back where it belongs: in the hands of the users. Many sites already allow blocklists, muting, and other simple filters, but Themis aims to do more.

Again, I must bring up the analogy of Usenet. The NNTP protocol itself has no provisions for filtering. Servers can drop or remove messages if they like, but that happens behind the scenes. Instead, users shape their own individual experiences through robust filtering mechanisms. The killfile is the simplest: a poster goes in, and all of that poster’s messages are hidden from view. Most newsreader software supports this, the most basic weapon in the arsenal.

Others go the extra mile. The newsreader slrn, for instance, offers a complex scoring system. Different qualities of a post (sender, subject text, and so on) can be assigned a value, with the post itself earning a score that is the sum of all filters that affect it. Then, the software can be configured to show only those posts that meet a given threshold. In this way, everything a user doesn’t want to see is invisible, unless it has enough “good” in it to rise above the rest. Because there are diamonds in the rough.
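To make the scoring idea concrete, here’s a minimal sketch in TypeScript (the language Themis itself is written in). The Post shape, rule format, and threshold semantics are my own inventions for illustration; slrn’s real scorefile syntax looks nothing like this.

```typescript
// Sketch of slrn-style scoring. Every rule that matches a post
// contributes its value; the post's score is the sum.
interface Post {
  sender: string;
  subject: string;
}

interface ScoreRule {
  field: keyof Post;   // which quality of the post to examine
  pattern: RegExp;     // what to look for
  score: number;       // positive or negative contribution
}

// Sum the scores of all rules that affect this post.
function scorePost(post: Post, rules: ScoreRule[]): number {
  return rules.reduce(
    (total, rule) => (rule.pattern.test(post[rule.field]) ? total + rule.score : total),
    0,
  );
}

// Show only posts that meet a user-chosen threshold; everything
// else is invisible unless it has enough "good" in it.
function visiblePosts(posts: Post[], rules: ScoreRule[], threshold: number): Post[] {
  return posts.filter((p) => scorePost(p, rules) >= threshold);
}
```

The point of the threshold is exactly the diamonds-in-the-rough effect: a post from a disliked sender can still surface if other rules push its total back above the line.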

Plans

The scoring system works, but it’s pretty hard to get into. So, by default, Themis won’t have it. But that doesn’t mean you can’t use it. The platform I’m building will be extensible. It will allow alternative clients, not just the one I’m making. Thus, somebody out there (maybe even me, once I have time) can create something that rivals slrn and those other newsreaders with scoring features.

But the basics have to be there. At the moment, that means two things. First is an option to allow a user to “mute” groups and posters. This does about what you’d expect. On the main group list (the first step in reading on Themis), muted groups will not be shown. In the conversation panel, posts by muted users will not be shown, instead replaced by a marker that indicates their absence. In the future, you’ll have the option to show these despite the blocks.

Second is the stronger filtering system, which appears in Alpha 4 at its most rudimentary stage. Again, groups and users can be filtered (posts themselves will come a little later), and the criteria include names, servers, and profile information. As of right now, it’s mostly simple string filtering, plus a regex option for more advanced users. More will come in time, so stay tuned.
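For the curious, here’s a rough sketch of how those two basics, muting plus string/regex filtering, might fit together in TypeScript. The field names and function signatures are illustrative only; this isn’t Themis’s actual code.

```typescript
// Sketch of the two basic mechanisms: outright muting, and simple
// string or regex filters over names and servers. Shapes here are
// invented for illustration, not Themis's real data model.
interface GroupInfo {
  name: string;
  server: string;
}

interface FilterRule {
  match: string | RegExp; // plain substring, or regex for advanced users
}

function ruleMatches(rule: FilterRule, value: string): boolean {
  return typeof rule.match === "string"
    ? value.includes(rule.match) // simple string filtering
    : rule.match.test(value);    // the regex option
}

// A group is hidden if it's muted outright, or if any filter
// matches its name or its server.
function isHidden(group: GroupInfo, muted: Set<string>, filters: FilterRule[]): boolean {
  if (muted.has(group.name)) return true;
  return filters.some((f) => ruleMatches(f, group.name) || ruleMatches(f, group.server));
}
```

The same predicate style would extend naturally to posters and, later, to posts themselves.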

In closing

This is why I started the project in the first place, and I hope you understand my reasoning. I do believe that open discussion is necessary, and that we can’t have that without, well, openness. By placing the bulk of the power back in the hands of the users, granting them the ability to create their own “filter bubbles” instead of imposing our own upon them, I think it’s possible. I think we can get past the idea that moderators, with all their foibles and imperfections, are an absolute necessity for an online forum. The result doesn’t have to be the anarchy of 4chan or Voat. We can have serious, civil conversations without being told how to have them. Hopefully, Themis will prove that.

Themis Dev Diary #1

Yesterday, I quietly released the second alpha of my current long-term software project, Themis. For the moment, you can check out the code on my GitHub, which has returned to life after lying dormant for three years. I’m developing this one in the open, in full view of all the critics I know are lurking out there, and I’ll be updating you on my progress with dev diaries like this one.

The Project

First off, what is Themis? Well, it’s hard to explain, but I’ll give it a shot. What I mainly want to create with this project is a kind of successor to Usenet.

We have a lot of discussion platforms nowadays. There’s really no shortage of them. You’ve got proprietary systems like Facebook, Twitter, and Disqus; old-school web forums such as vBulletin, XenForo, and the more modern NodeBB; open, federated services like Mastodon and Pleroma; and the travesty known as Discourse. No matter what you’re looking for, you have options.

Unless you want the freedom, the simplicity, and the structure Usenet brought to discussions so long ago. That’s what Themis aims to recapture. My goal is to make something that anyone can install, effectively creating their own node (instance, in Mastodon parlance) that connects (federates) with all the others.

So far, that’s not too different from most of the “fediverse” platforms, but here’s the kicker. While Mastodon, Pleroma, GNU Social, and Misskey all focus on a flat, linear timeline in the vein of Twitter, Themis will use a threaded model more akin to newsgroups. Or Reddit, if you prefer. (Yes, there’s Prismo as the federated counter to Reddit, but bear with me.)

Also, while the current drama on most any platform is about banning, filtering, and censorship, I want to make Themis a place where speech is free by default. Rather than hand all the power to server admins, who can implement blocklists, filter policies, etc., Themis is going to be focused on user-guided filtering. If you don’t want to see what a certain user says, then you block that user. If you don’t like a specific topic, you can hide any threads where it’s discussed. And so on.

In my opinion, that’s a more viable model for open discussion. Rather than skirt around sensitive topics out of the fear of “deplatforming”, we assume that users are adults, that they have the maturity to know what they like and don’t like. The filtering system will need to be robust, powerful, and precise, but the key is that every part of it will be in the user’s hands. Yes, admins will still have the ability to ban problematic users (only on their server, of course) and remove posts that may violate laws or rules, but these should be the exception, not the rule.

Also, Themis is group-oriented. Every post falls into at least one group (crossposting isn’t implemented yet, but I’m getting there), and every group contains a set of threads. This will also fall into the filtering system, and here’s a place where admins can steer the discussion. A “tech” group on example.social, for instance, would follow the rules of that server, and it might have an entirely different “feel” to the tech group on themis-is-awesome.tld. Configuration will allow admins to make groups intended only for local users, or invite-only, or moderated in the classic “all posts must be approved” style.
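As a sketch of that group-oriented model, here’s one hypothetical way to shape the data in TypeScript. None of these names come from Themis’s real schema; they’re just to illustrate the structure.

```typescript
// Hypothetical data model: every post names at least one group,
// and every group contains a set of threads. Admin-configurable
// policies hang off the group itself.
type GroupPolicy = "open" | "local-only" | "invite-only" | "moderated";

interface Post {
  author: string;
  body: string;
  groups: string[]; // at least one; more than one once crossposting lands
}

interface Thread {
  subject: string;
  posts: Post[];
}

interface Group {
  name: string;
  policy: GroupPolicy;
  threads: Thread[];
}

// Total posts in a group, across all of its threads.
function postCount(group: Group): number {
  return group.threads.reduce((total, thread) => total + thread.posts.length, 0);
}
```

The `policy` field is where the admin-steered options described above (local-only, invite-only, classic approval-style moderation) would plug in.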

Where we are now

At the moment, most of this is a distant dream. I won’t lie about that. Themis is at a very early alpha stage, and there’s a lot of work left to even get it feature-complete, much less in a state worthy of release. To make matters worse, I’m not entirely sure how possible it is. I’m working alone, and I’m not the best programmer out there.

I’m giving it a shot, though. In only six weeks, I’ve gone from nothing more than a skeleton app in a framework I’d never even used to something that actually runs (albeit only on localhost). As of the 0.0.2 release, you can create an account, log in, view posts, add new ones, and reply to existing ones. The group creation functionality isn’t there yet, authentication is…haphazard at best, and the admin section is next to nonexistent. But that’s what alphas are for. They’re for getting all these pieces into place, even if there are a lot more of those pieces than you first anticipated.

What’s next

As I said, Themis isn’t even close to beta yet. I’ll likely put out quite a few more alphas in the coming weeks. The third release, if all goes well, will add in an admin control panel, plus the necessary scaffolding for site settings, preferences, and other configuration stuff. Alpha 4, in my vague mental outline, will fix up the posting functionality. Future milestones include group creation, filtering (a big one!), network optimization, and so on.

The beta releases, assuming I make it there, are all about getting Themis where I want it to be. That’s when I plan to start adding in federation, even better filtering, ActivityPub support, and an NNTP gateway, among others. (In case you’re wondering, I don’t have the slightest idea how to do half of that. And here I thought I’d be reading fiction in 2019! Nope, looks like specs instead.)

In my wildest dreams, all this somehow works out, and I can make a stable 1.0 release on October 1st. Stay tuned to see how that pans out.

If you want to help with Themis, or just take a look at it, check it out here. For the techies out there, it’s written in TypeScript, using NestJS for the back end, Vue for the interface, TypeORM as a database abstraction layer, Axios for HTTP, Passport and JWT for authentication, and a whole bunch of other libraries I can’t remember right now. The project is entirely open source, under the MIT license (not AGPL, as so many other fediverse projects are), and I promise I’ll take a look at all serious suggestions, issues, bug reports, and advice.

Whatever the future holds, I’ll call this venture well worth it. Maybe I’ll burn out and fade away. Maybe I’ll change the world as much as Gargron and Lain are doing. I don’t care what the outcome is. I’ve found a passion, and this is it.

De-ESL-ifying the web

English is the language of the world at this moment in time. True, Chinese has more native speakers, but the overwhelming majority of those live in China, whereas English is spoken as a first or second language essentially everywhere. Whatever you think of it, it’s not going anywhere, and anybody doing serious work on the Internet, on the global web that so suffuses our everyday life, really needs a good grounding in standard English.

That is a problem, however. Not everyone has that grounding, and it shows. Especially among developers, programmers, and documentation writers, it’s all too common to see broken English, even when the work in question is intended for audiences of all kinds. It’s not their fault, of course, and it’s not exactly fair to ask everyone to learn formal English before they’re allowed to write software or documentation.

Yet language has the sole function of communication, and when we use poor language (for whatever reason), communication suffers. Think of how many times you’ve had to strain your brain to decrypt a particularly obtuse text message. Think about how much more effective a well-written post on Facebook or Twitter can be when compared to the word salad used by certain…politicians.

Even among those who try, there can be problems. As English is spoken in many different countries, the other languages of those countries have imprinted themselves upon it. Thus, “World” English contains quite a few phrases and idioms that can confuse even native speakers. To take one common instance, someone on a game’s forum might speak of a “doubt” about performance; what they’re really saying is that they have a question to ask.

Not everybody needs correction, and a lot of people will consider it insulting to offer. (Indeed, a lot of people actually are insulting when they offer a grammar or wording correction, so the concern is understandable.) For a project intended to appear professional, however, it’d be nice to have an editor.

I am not an editor. I am an author and programmer, an amateur linguist and creator of languages. In nearly a quarter of a century online, I’ve probably seen every possible “ESL-ism”, and I think my experience and expertise qualify me to lead the charge in eradicating them from the world of professional software and its documentation.

So that’s what I’m doing with this post. Today, I announce that I’m open for business. If you are an author or creator, and you’d like to de-ESL your project, I am here to help. I offer my services in the hope that I can make the world, the web, a better place.

For a small fee (rates are negotiable, especially for Free Software projects), I will proofread your documentation, tutorial, wiki, or other prose work concerning your software. I’ll remove ESL idioms, American or other regional colloquialisms, and any sort of unprofessional language to create a document that is easier for everyone to understand. If you’re interested, contact me at support@potterpcs.net with a subject containing “ESL”.

New adventures in Godot

As I’ve said many times before, I think Godot Engine is one of the best options out there for indie game devs, especially those working on a budget in their spare time. Since I have more spare time than I know what to do with these days, I thought I’d put my money where my mouth is and try to make a game.

I know it’s not easy. It’s not simple. That doesn’t mean I won’t give it a shot. And I’ll be sure to keep you updated as to my progress.

What I’ve got so far is an idea for a casual word-find game. Think a cross between Tetris and Boggle. You’ve got a playfield of letters, and the object is to select a series of them that spells out a word. You get points based on the length of the word and the letters you use (rarer letters like J or Z are worth more than the common E or R). Then, the letters you chose disappear, and others fill the space.

That’s where I’m at now: getting the field set up. Then, I’ll work on the rest of the basic game mechanics, from selection to scoring. UI comes after that, and games need sound effects, animations, etc. Eventually, I’d like to produce a mobile and desktop version for you to download here or elsewhere. (Still weighing my options on that.)

Don’t expect too much, but if I can get this done, I hope to move on to more ambitious projects. Although I do focus far more on writing these days, I still love coding, and game programming remains one of my favorite aspects. Godot makes that part easy, and it does it without all the cruft of Unity or Unreal. It really hits the sweet spot, at least as far as I’m concerned.

On Godot 3.0

For a couple of years now, I’ve been talking about the amazing Godot Engine. Recently, I’ve been writing so much that I haven’t had time to get back into coding, but the announcement of Godot Engine version 3.0 got my attention. Now, I’ve had some time to play with it, and here’s my verdict.

First off, Godot is free. It’s open source. It’s cross-platform, for both development and deployment. All three of those are important to me. I’m poor, so an engine that costs hundreds or thousands of dollars just to get started is, well, a non-starter. Second, my desktop runs Linux, so a Windows-only tool is probably going to be useless, and something proprietary really isn’t good when you have the very real possibility of having to delve into the bowels of the engine for customization purposes.

Yes, something like Unity is probably better for someone who’s just starting out. It’s got a bigger community, it’s more mature, and it does have a much more professional air. On the other hand, Godot’s new version really raises the bar, putting it on the same general level as other “indie-friendly” game engines.

New and improved

The biggest new feature in Godot 3.0 has to be the improved 3D renderer. 3D was always Godot’s weak point, especially on certain hardware. Namely, mine. Last year, I was running on integrated graphics (A10 Trinity, if you care), and I got maybe 5 FPS on Godot’s platformer demo. Fast-forward to January 1st, 2018, after I installed a new (to me) RX 460 running the fancy amdgpu drivers and everything. Curious, I fired up Godot 2.1 and the demo. Results? 5 FPS max. No difference.

With 3.0, though, that’s no longer a problem. From what I’ve read, that’s because the developers have completely overhauled the 3D portion of the engine. It’s faster on low-end (and medium-level) hardware, and some of the sample images are stunning. I’d have to do more tests to see just how well it works in practice, but it could hardly be worse than before.

In a way, that perfectly describes all the new features. The renderer’s been rewritten to be better. Physics now uses the Bullet engine instead of a homebrew system. Audio? Rewrite. Shaders? Rewrite. It’s not so much revolution as evolution, except that doesn’t work. No, think of it more as evolution by revolution. Now, that’s not to say there are no new features in this version. It’s just that those are overshadowed by the massive improvements in the existing parts.

I’ll gladly admit that I don’t care much about VR gaming. I’m not one of those who see it as a gimmick, but it’s not ready for primetime yet. But if you’re of a different persuasion, then you might be interested in the VR support that’s been added. I’ll leave that to you to discover, as I honestly have no idea how it all works.

More to my taste is the additional programming support. Godot uses a custom scripting language by default, a Python clone designed to interface with the engine. I’m not really a fan of that approach, as I’ve written before. Clearly, I’m not alone in that thinking, as version 3.0 now offers more options. First up is GDNative, a way to extend the engine using external libraries (written in native code, hence the name) without going through the trouble of recompiling the whole thing every time you make a change. That one looks good on its face, as it opens up the possibility of integrating popular and well-tested libraries much more easily.

But that doesn’t really replace GDScript, although it does add the ability to make bindings for other scripting languages. The new Mono support, on the other hand, actually does change the way you write code. It’s not perfect (as of this writing, it’s not even complete!), but it definitely shows promise.

As you know, Unity uses C# as its language of choice; they’ve deprecated JavaScript, and they try to pretend Boo never existed. Well, now (or once it’s done) Godot will let you write your game in C#, too. Even better, it surpasses Unity by using a much newer version of Mono, so you get full C# 7.0 support, assuming you trust Microsoft enough to use it.

If that wasn’t enough, there’s also a “visual” scripting system, much like Unreal’s Blueprints. That one’s in its early stages, so it’s not much more than a flowchart where you don’t have to write the function names, but I can’t see it not getting improved in future versions.

So there you have it. I haven’t even scratched the surface here, but I hope it whets your appetite, because I still think Godot is one of the best indie game engines out there. If you don’t have the money to spend on Unity, you’d rather use a platform without a port of the Unreal editor, or you don’t want to risk getting sued by Crytek, then there’s almost no reason not to give Godot a shot. It’s only getting better, and this new version proves it.

First glance: Kotlin

In my quest to find a decent language that runs on the JVM (and can thus be used on Android without worrying about native compilation) while not being as painfully awful as Java itself, I’ve gone through a number of possibilities. Groovy was an old one that seems to have fallen by the wayside, and it’s one of those that always rubbed me the wrong way. A bit like Ruby in that regard, and I think that was the point. Scala was another contender, and I still think it’s a great language. It’s a nice blend of functional and object-oriented programming, a true multi-paradigm language in the vein of C++, and it has a lot of interesting features that make coding in it both easier and simply more fun than Java.

Alas, Scala support is sketchy at best, at least once you leave the world of desktops and servers. (Maybe things have picked up there in the past year and a half. I haven’t checked.) And while the language is great, it’s not without its flaws. Compilation is fairly slow, reminding me of the “joys” (note sarcasm) of template-heavy C++ code. All in all, Scala is the language I wish I could use more often.

A new challenger

Enter Kotlin. It’s been around for a few years, but I’ve only recently given it a look, and I’m surprised by what I found. First and foremost, Kotlin is a language developed by JetBrains, the same ones who make IntelliJ IDEA (which, in turn, is the basis for Android Studio). It’s open source under the Apache license, so you don’t have to worry about them pulling the rug out from under you. Also, Android Studio has support for the language built in, and Google even has a few samples using it. That’s already a good start, I’d say.

At the link above, you can find all the usual hallmarks of a “modern” programming language. You’ve got docs, tutorials, lots of press-type stuff, commercial and free support, etc. In that, it’s no different from, say, TypeScript. Maybe there’s a bit more emphasis on using IDEA instead of Eclipse or Netbeans, but that’s no big deal. All in all, it’s nothing special…until you dig into the language itself.

Pros

Kotlin’s main mission, essentially, is to make a better Java than Java. Maybe not in the wholesale sense like C#, but that’s the general vibe I get. Java interop is an important part of the documentation, and that’s a good thing, because there’s a lot of Java code out there. Some of it is even good. It’d be a shame to throw it all away just to use what’s undoubtedly a better language.

But let’s talk about why Kotlin is better than Java. (Hint: because it’s not Java.) It borrows heavily from Scala, and this is both intentional and promising. Like Scala, there’s heavy pressure on the programmer to use the immutable val where possible, reserving the mutable var for those situations where it’s needed. Personally, I’m not a fan of the “all constant, all the time” approach of, e.g., Haskell or Erlang, but the “default constant” style that Kotlin takes from Scala is a happy medium, in my opinion.

In addition, Kotlin makes a major distinction between nullable and non-nullable types. For instance, your basic String type, which you’d normally use for, well, strings, can’t be assigned a value of null. It’s a compile-time error. (If you need a nullable type, you have to ask for it: String?.) Likewise, you have an assortment of “safe” syntax elements that check for null values. Compiler magic allows you to work with a nullable type without hassle, as long as the compiler can prove that you’ve checked for null. That only works with immutable values, another instance where the language guides you onto the “right” path.

Those two alone, nullability and mutability, are already enough to eliminate some of the most common Java programming mistakes, and Kotlin should be lauded for that alone. But it also brings a lot of nifty features, mostly in the form of syntactic sugar to cut down on Java’s verbosity. Lambdas are always nice, of course. Smart casts (the same compiler trick that lets you dispense with some null checks) are intriguing, and one of those things that makes you wonder why they weren’t already there. And then you have all the little things: destructuring assignments, ranges, enum classes, and so on.

All told, Kotlin is a worthy entry in the “what Java should be” competition. It fits the mold of the “modern” language, taking bits and pieces from everywhere. But it puts those pieces together in a polished package, and that’s what’s important. This isn’t something that involves arcane command-line incantations or a leaning tower of 57 different build systems to get working. If you use IDEA, then it’s already there. If not, it’s not hard to get. Compared to the torture of getting Scala to work on Android (never did manage it, by the way), Kotlin is a breeze.

Cons

Yet no language is perfect, and this is no exception. Mostly, what I find to be Kotlin’s disadvantages are minor annoyances, silly limitations, and questionable choices. Some of these, however, give rise to bigger issues.

The heavy emphasis on tracking nullability is worthwhile, but the documentation especially turns it into something akin to religious dogma. That gets in the way of some things, and it can make others cumbersome. For example, although the language has type inference, you have to be careful when you cast. The basic as cast operator doesn’t convert from nullable to non-nullable. That’s good in that you probably don’t want it to, but it does mean you sometimes have to write things in a roundabout way. (This is particularly the case when you’re taking data from Java code, where the null-checks are toned down.)

Interop, in fact, causes most of Kotlin’s problems. There’s no solving that, I suppose, short of dropping it altogether, but it is annoying. Generics, for instance, are just as confusing as ever, because they were bolted on after the JVM had gone through a few revisions, and they had to preserve backward compatibility.

Except for syntax choices, that’s my main complaint about Kotlin, and I recognize that there’s not a JVM language out there that can really fix it. Not if you want to work with legacy Java code, anyway. So let’s take a look at the syntax bits instead.

I can’t say I like the belt-and-suspenders need to have both classes and methods final by default. I get the thinking behind it, but it feels like the same kind of pedantry that causes me to dislike Java in the first place.

Operator overloading exists, after a fashion. It’s about as limited as Python’s, and one of the few cases where I wish Kotlin had swiped more from Scala. Sure, you can make certain methods into infix pseudo-operators, but that’s not really anything more than dropping a dot and a pair of parentheses. You’re not making your own power operator here.

The verdict

Really, I could go on, but I think you see the pattern. Kotlin has some good ideas, none of them really new or innovative by themselves, but all bringing a welcome bit of sanity to the Java world. The minor problems are just that: minor. In most cases, they’re a matter of personal opinion, and I’ll gladly admit that my opinion is in the minority when it comes to programming style.

On the whole, Kotlin is a good language. I don’t know if I’d call it great, but that’s largely because I don’t really know what would make a programming language great these days. I’m not even sure we need one that’s great. Just one that isn’t bad would be a start, if you ask me.

And that’s what we have here. If I get back into Android development in 2018, I’ll definitely be looking at Kotlin as my first choice in language. Anything is an improvement over Java, but this one’s the biggest improvement I’ve seen yet.

Modern C++ for systems, part 2

C++ was always a decent language for low-level programming. Maybe not the best, but far from the worst. Now, with Modern C++, it gets even better. Newer versions of the standard have simplified some of the more complex portions of the language, while playing to its strengths. In this post, we’ll look at a couple of these strengths, and see how modern versions of C++ only make them that much stronger.

Types

How a language treats the types of values is one of its defining characteristics. At the highest levels (JavaScript, PHP, etc.), types can be so loosely defined that you barely even know they’re there…until they blow up in your face. But on a lower level, when you want to eke out just a little more performance, or where safety is of the essence, you want a strong type system.

C++ provides this in multiple ways, but older C++ was…not exactly fun about it. Variables had to have their types explicitly specified, and some of those type names could run to absurd lengths, especially once templates got involved. Sure, you had typedef to help you out, but that really only worked once you knew which type you were looking for. It was ugly. You could very easily end up with something hard to write, harder to read, and nearly impossible to maintain.

No more. That’s thanks to type inference, something already present in a number of strongly-typed languages, but brought into the core of C++ in 2011. Now, instead of trying to remember exactly how to write std::vector<MyClass>::iterator, you can just do this:

auto it { myVector.begin() };

The compiler can tell what type it should have, and you probably neither need nor care to know. For temporaries with ridiculously long type names, auto is invaluable.

But it’s good everywhere, and not only because it saves keystrokes. It’s safer, too, because declaring an auto variable without initializing it is an error. (How could it not be?) So you know that, no matter what, a variable declared auto will always have a valid value for its type. Maybe that won’t be the right value, but defined is almost always better than undefined.

The downside, of course, is that the compiler may not get the right hints, so you may need to give it a little help. Some examples:

auto n { 42 }; // type is int
auto un { 42u }; // type is unsigned int

auto db { 2.0 }; // type is double
auto fl { 2.0f }; // type is float

auto cstr { "foo" }; // type is const char *

// (C++14) ""s literal form
auto stdstr { "bar"s }; // type is std::string

That last form, added in C++14, requires a bit of setup before you can use it:

#include <string>
using namespace std::string_literals;

But that’s okay. It’s not too onerous, and it doesn’t really hurt compilation times or code size. After all, you’re probably going to be using C++ standard strings anyway. They’re far safer than C-style strings, and we’re after safety, right?

That’s key, because one of the benefits of Modern C++ over C is its increased safety. Type safety and the prevention of common coding errors are exactly what we want at the lowest level. And if we can get them while decreasing code complexity? Of course we’ll take that.

Using the compiler

Unlike more “dynamic” languages, C++ uses a compiler. That does add a step in the build cycle, but we can use this step to take care of some things that would otherwise slow us down at run-time. Languages like JavaScript don’t get this benefit, so you’re stuck using minifiers and JIT and other tricks to squeeze every ounce of performance out of the system. With the intermediate step of compilation, however, C++ allows us to do a lot of work before our final product is even created.

In past versions, that meant one thing and one thing only: templates. And templates are great, until you get into some of the hairier kinds of metaprogramming. (Look at the source code to Boost…if you dare.) Templates enable us to do generic programming, but they’re easy to abuse and misuse.

With Modern C++, we’ve got something even better. First off, compilers are smarter now. They have to be, in order to handle the complexities of the language and standard library. (People complain that C++ is too complex, but modern versions have hidden a lot of that in the underbelly of the compiler.)

More important, though, is the notion of constant expressions. Every compiler worth the name can take an expression like 2 + 2 and reduce it to its known value of 4, but Modern C++ takes it to a whole new level. Now, compilers can take some amazing leaps in calculating and reducing expressions. (See that same video I mentioned in Part 1 if you don’t believe me.)

And it only gets better, because we have constexpr to let us make our own functions that the compiler can treat as constant. The obvious ways to use this capability are the old standbys (factorials, translation tables, etc.), but almost anything can fit into a constexpr function, as long as it doesn’t depend on data unavailable at the time of compilation. C++14 and 17 relax the rules further, doing away with the requirement of writing everything as a single return statement.

With const and constexpr, plus the power of templates and even metaprogramming libraries like Boost’s Hana, Modern C++ has a powerful tool that most other programming environments can’t match. Best of all, it comes with essentially no additional run-time cost, whether space or speed. For low levels, where both of those are at a premium, that’s exactly what we want. And the syntax has been cleaned up greatly since the old days. (Error messages are still a bit iffy, but they’re working on that.)

Plenty of other recent changes have eased the workload for low-level coding, even as the high levels have become ever simpler. I haven’t even mentioned some of the best parts of C++17, for instance, like constexpr-if and if initializers. But that’s okay. C++ is a big language. It’s got something for everybody. Later, we’ll actually start looking at ways it helps make our code safer and smarter.

Modern C++ for systems, part 1

In today’s world, there are two main types of programming languages. Well, there are a lot of main types, depending on how you want to look at it, but you can definitely see a distinction between those languages intended for, say, web applications and operating systems. It’s that distinction that I want to look at here.

For web apps (and most desktop ones), we have plenty of options. JavaScript is becoming more and more common on the desktop, and it’s really the only choice for the web. Then we’ve got the “transpilers” bolted on top of JS, like TypeScript, but those are basically the same thing. And we can also write our user applications in Python, C#, or just about anything else.

On the other side—and this is where it gets interesting, in my opinion—the “systems” side of the equation doesn’t look so rosy. C remains the king, and it’s a king being assaulted from all sides. We constantly see articles and blog posts denigrating the old standby, and wouldn’t you know it? Here’s a new, hip language that can fix all those nasty buffer overflows and input-sanitization problems C has. A while back, it was Go. Now, it’s Rust. Tomorrow, it might be something entirely different, because that’s how fads work. Here today, gone tomorrow, as they say.

A new (or old) challenger

C has its problems, to be sure. Its standard library is lacking, the type system is pretty much a joke, and the whole language requires a discipline in coding that really doesn’t mesh with the fast and loose attitudes of today’s coders. It’s also mature, which hipsters like to translate as “obsolete”, but here that age does come with a cost. Because C has been around so long, because it’s used in so many places, backwards compatibility is a must. And that limits what can be done to improve the language. Variable-length arrays, for instance, are 18 years old at this point, and they still aren’t that widely used.

On the other hand, “user-level” languages such as Python or Ruby tend to be slow. Too slow for the inner workings of a computer, such as on the OS level or in embedded hardware. Even mobile apps might need more power. And while the higher-level languages are more expressive and generally easier to work with, they often lack necessary support for lower-level operations.

Go and Rust are attempts at bridging this gap. By making languages that allow the wide range of operations needed at the lowest levels of a system, while preventing the kinds of simple programming errors a C compiler wouldn’t even notice, they’re supposed to be the new wave in system-level code. But Go doesn’t have four decades of evolution behind it. Rust can’t compile a binary for an 8-bit AVR microcontroller. And neither has the staying power of the venerable C. As soon as something else comes along, Rust coders will chase that next fad. It happened with Ruby, so there’s no reason it can’t happen again.

But what if I told you there’s a language out there that has most of C’s maturity, more modern features, better platform support[1], and the expressive power of a high-level language? Well, if you didn’t read the title of this post, you might be surprised. Honestly, even if you did, you might still be surprised, because who seriously uses C++ nowadays?

Modern world

C++ is about as old as I am. (I’m 33 for the next week or so, and the original Cfront compiler was released in 1985, so not too much difference.) That’s not quite C levels of endurance, but it’s pretty close, and the 32-bit clocks will overflow before most of today’s crop of lower-level languages reach that mark. But that doesn’t mean everything about C++ is three decades out of date.

No, C++ has evolved, and never so much as in 2011. With the release of that version of the standard, so much changed that experienced programmers consider “Modern” C++ to be a whole new language. And I have to agree with them on that. Compared to what we have now, pre-2011 C++ looks and feels ancient, clunky, baroque. Whatever adjectives your favorite hater used in 2003, they were probably accurate. But no more. Today, the language is modern, it’s fresh, and it is much improved. So much improved, in my opinion, that it should be your first choice when you need a programming language that is fast, safe, and expressive. Nowhere else can you get all three.

Why not Java/C#?

C++ often gets compared to Java and C#, especially when talk turns to desktop applications. But on the lower levels, there’s no contest. For one, both Java and C# require a lot of infrastructure. They’re…not exactly lightweight. Look at Android development if you don’t believe me on the Java side. C# is a little bit better, but still nowhere near as efficient in terms of space.

Second, you’re limited in platform support. You can’t very well run Java apps on an iPhone, for instance, and while Microsoft has opened up on .NET in the past few years, it’s still very much a Windows-first ecosystem. And good luck getting either of them on an embedded architecture. (Or an archaic one.)

That’s not to say these aren’t good languages. They have their uses, and each has its own niche. Neither fits well for our purposes, though.

Why not Go/Rust?

Go, Rust, and whatever other hot new ideas are kicking around out there don’t suffer from the problem of being unfit for low levels. After all, they’re made for that purpose, and it’d be silly to make a programming language that didn’t fill its intended role.

However, these lack the maturity of C++, or even Java or C#. They don’t have the massive weight of history, the enormous library of documentation and third-party code and support. Worse, they aren’t really standardized, so you’re basically at the mercy of their developers. Tech companies have a reputation for short attention spans, which means you can’t be sure they won’t stop development tomorrow because they think they’ve come up with something even better. (Ask me about Angular.)

Why C++?

Even if you think some other language works better for your system, I’d still argue that you should give Modern C++ a shot. It can do essentially everything the others can, and that’s not just an argument about what’s Turing-complete. With the additions of C++11, 14, and 17, we’re not talking about the “old” style anymore. You won’t be seeing screen-long type declarations and dizzying levels of bracket nesting. Things are different now, and C++ even has a few features lacking from other popular languages. So give it a shot.

I know I will. Later on, I hope to show you how C++ can replace both the old, dangerous C and the new, limiting languages like Rust. Stay tuned for that.


[1] Strictly speaking, Microsoft doesn’t have a C compiler, only a C++ one. MSVC basically doesn’t track new versions of the C standard unless they come for free with C++ support.

The JavaScript package problem

One of the first lessons every budding programmer learns is a very simple, very logical one: don’t reinvent the wheel. Chances are, most of the internal work of whatever it is you’re coding has already been done before, most likely by someone better at it than you. So, instead of writing a complex math function like, say, an FFT, you should probably hunt down a library that’s already made, and use that. Indeed, that was one of the first advances in computer science, the notion of reusable code.

Of course, there are good reasons to reinvent the wheel. One, it’s a learning experience. You’ll never truly understand the steps of an algorithm until you implement them yourself. That doesn’t mean you should go and use your own string library instead of the standard one, but creating one for fun and education can be very rewarding.

Another reason is that you really might be able to improve on the “standard”. A custom version of a standard function or algorithm might be a better fit for the data you’ll be working with. Or, to take this in another direction, the existing libraries might all have some fatal flaw. Maybe they use the wrong license, or they’re too hard to integrate, or you’d have to write a wrapper that’s no less complicated than the library itself.

Last of all, there might not be a wheel already invented, at least not in the language you’re using. And that brings us to JavaScript.

Bare bones

JavaScript has three main incarnations these days. First, we have the “classic” JS in the browser, where it’s now used for pretty much everything. Next is the server-side style exemplified by Node, which is basically the same thing, but without the DOM. (Whether that’s a good or bad thing is debatable.) Finally, there’s embedded JavaScript, which uses something like Google’s V8 to make JS work as a general scripting language.

Each of these has its own quirks, but they all have one thing in common: JavaScript doesn’t have much of a standard library. It really doesn’t. I mean, we’re talking about a language that, in its standardized form, has no general I/O. (console.log isn’t required to exist, while alert and document.write only work in the browser.) It’s not like Python, where you get builtin functions for everything from creating ZIP files to parsing XML to sending email. No, JS is bare-bones.

Well, that’s not necessarily a problem. Every Perl coder knows about CPAN, a vast collection of modules that contains everything you want, most things you don’t, and a lot that make you question the sanity of their creators. (Question no longer. They’re Perl programmers. They’ve long since lost their sanity.) Other languages have created similar constructs, such as Python’s PyPI (or whatever they’re using these days), Ruby’s gems, the TeX CTAN collection, and so on. Whatever you use, chances are you’ve got a pretty good set of not-quite-standard libraries, modules, and the like just waiting to be used.

So what about JavaScript? What does it have? That would be npm, which quite transparently started out as the Node Package Manager. Thanks to the increase in JS tooling in recent years, it’s grown to become a kind of general JavaScript repository manager, and the site backing it contains far more than just Node packages today. It’s a lot more…democratic than some other languages’ offerings, and JavaScript’s status as the hipster language du jour means the quality on offer can sometimes seem a bit questionable, but there’s no denying that it covers almost everything a JS programmer could need. And therein lies the problem.

The little things

The UNIX philosophy is often stated as, “Do one thing, and do it well.” JavaScript programmers have taken that to heart, and then to the extreme, and that has caused a serious problem. See, because JS has such a small standard library, there are a lot of little utility functions, functions that pop up in almost any sizable codebase, that aren’t going to be there.

For most languages, this would be no trouble. With C++, for instance, you’d link in Boost, and the compiler would only add in the parts you actually use. Java or C#? If you don’t import it, it won’t go in. And so on down the line, with one glaring exception.

Because JavaScript was originally made for the browser—because it was never really intended for application development—it has no capability for importing or even basic encapsulation. Node and recent versions of ECMAScript are working on this, but support is far from universal at this point. Worse, since JavaScript comes as plain text, rather than some intermediate or native binary format, even unused code wastes space and bandwidth. There’s no compilation step between server and client, and there’s no way to take only the parts of a library that you need, so evolutionary pressure has caused the JavaScript ecosystem to create a somewhat surprising solution.

That is the npm solution: lots of tiny packages with myriad interdependencies, and a package manager that integrates with the build system to put everything together in an optimized bundle. JavaScript, of course, has no end of build systems, which come in and out of style like seasonal fashions. I haven’t really looked into this space in about eight months, and my knowledge is already obsolete! (Who uses Bower anymore? It’s all Webpack…I think. Unless something else has replaced it by the time this post goes up.)

This is a prime example of the UNIX philosophy in action, and it can work. Linux package managers do it all the time: for reference, my desktop runs Debian, and it has about 2000 packages installed, most of which are simple C or C++ libraries, or else “data” packages used by actual applications. But I’m not so sure it works for JavaScript.

Picking up the pieces

From that one design decision—JavaScript sent as plain text—comes the necessity of small packages, but some developers have taken that a bit too far. In what other language would you need to import a new module to test if a number is positive? Or to pad the left side of a string? The JS standard library provides neither function, so coders have created npm packages for both, and those are only two of the most egregious examples. (Some might even be jokes, like the one package that does nothing but return the number 5, but it’s often hard to tell what’s serious and what isn’t. Think Poe’s Law for programmers.)

These wouldn’t be so bad, but they’re one more thing to remember to import, one more step to add into the build system. And the JavaScript philosophy, along with the bandwidth requirements its design enforces, combine to make “utility” libraries a nonstarter. Almost nobody uses bigger libraries like Underscore or Lodash these days; why bother adding in all that extra code you don’t need? People have to download that! The same even goes for old standbys like jQuery.

The push, then, is for ever more tiny libraries, each with only one use, one function, one export. Which wouldn’t be so bad, except that larger packages—you know, applications—can depend on all these little pieces. Or they can depend on each other. Or both, with the end result a spaghetti tangle of interdependent parts. And what happens if one of those parts disappears?

You might think that’s crazy, but it did happen. That “left pad” function I mentioned earlier? That one actually did vanish, thanks to a rogue developer, and it broke everything. So many applications and app libraries depended on little old leftpad, often indirectly and without even noticing, that its disappearance left them unable to update, unable to even install. For a few brief moments, half the JavaScript world was paralyzed because a package providing a one-line function, something that, in a language with simple string concatenation, comes essentially for free, was removed from the main code-sharing repository.

Solutions?

Is there a middle ground? Can we balance the need for small, space-optimized codebases with the robustness necessary for building serious applications? Or is NPM destined to be nothing more than a pale imitation of CPAN crossed with an enthusiast’s idea of the perfect Linux distro? I wish I knew, because then I’d be rich. But I’ll give a few thoughts on the matter.

First off, the space requirement isn’t going away anytime soon. As long as we have mobile and home data caps, bandwidth will remain important, and wasting it on superfluous code is literally too expensive. In backwards Third World countries without net neutrality, like perhaps the US by the time this post goes up, it’ll be even worse. Bite-size packages work better for the Internet we have.

On the other hand, a lot of the more ridiculous JS packages wouldn’t be necessary if the language had a halfway decent standard library. I know the standards crew is giving it their best shot on this one, but compatibility is going to remain an issue for the foreseeable future. Yes, we can use polyfills, but then we’re back to our first problem, because that’s just more code that has to be sent down the wire, code that might not be needed in the first place.

The way the DOM is set up doesn’t really help us here, but there might be a solution hiding in there: speculative loading, where a small shim comes first, checking for the existence of the needed functions. If they’re found, then the rest of the app can come along. Otherwise, send out a request for the right polyfills first. That would take some pretty heavy event hacking, but it might be possible to make it work. (The question is, can we make it work better than what we’ve got?)

As for the general problem of ever-multiplying dependencies, there might not be a good fix. But maybe we don’t always need to heed the old maxim about reinventing wheels. Do we really need to import a whole new package to put padding at the beginning of a string? Yes, JS has a wacky type system, but " " + string is what the package would do anyway. (Probably with a lot of extra type checking, but you’re going to do that, too.) If you only need it once, why bother going to all the trouble of importing, adding in dependencies, and all that?
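For the record, here’s roughly what such a one-off helper boils down to; a sketch of the idea, not the actual left-pad package:

```javascript
// A minimal left-pad: prepend a fill character until the string is long enough.
// (The real npm package adds more type checking, but the core is this loop.)
function leftPad(str, len, ch) {
  str = String(str);
  ch = ch || ' ';
  while (str.length < len) {
    str = ch + str;
  }
  return str;
}

console.log(leftPad('5', 3, '0')); // "005"
console.log(leftPad('abc', 5));    // "  abc"
```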

Ultimately, what has happened is that, because JavaScript lacks even the most basic systems for code reuse, its developers have reinvented them, but poorly. Because it has a stunted standard library, third parties have had to fill the gaps. Again, they have done so poorly, at least in comparison to more mature languages. That’s not to say it doesn’t work, because everything we use on the Internet today is proof that it does. But it could be better. There has to be a better way, so let’s find it.

Lettrine and other packages

TeX and its descendants (LaTeX, et al.) have a vast array of add-on packages for an author to use. Most of these are so specific that they’d probably only be useful to a handful of people, but some are almost universal. Memoir, of course, is one of them, though I’ve already spoken about it. This time, I’d like to look at a few others that I use.

Lettrine

The lettrine package is what I use to make drop caps and raised initials, as you’ll recall from the debacle that is my Pandoc filters. For paperback books, especially fiction, these are a nice typographic touch, the kind of thing that, I feel, makes a book look more professional. Personally, I prefer raised initials rather than the dropped capitals, but lettrine works for both.

It’s geared towards European languages, and the examples are actually only in French and German, not English. The documentation, however, is perfectly readable.

Using lettrine isn’t that hard. Unless you need some serious customization, you can get by with just putting the first letter (the one you want to raise or drop) in one set of braces, then anything you’d like in small caps in another: \lettrine{L}{ike this}.

By default, that gives you a two-line dropped capital, but you can change that with options that you place in square brackets before the text. So, to get my preferred raising, I would do: \lettrine[lines=1]{T}{his}. The manual has more options you can use, mostly for tweaking problem letters like J and Q, typesetting opening quotation marks in normal size, and even adding images for something like a medieval manuscript.
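Putting the two forms together, a minimal document might look like this (a sketch; the class and the surrounding text are just placeholders):

```latex
\documentclass{memoir}
\usepackage{lettrine}

\begin{document}

% The default: a two-line dropped capital, with "ike this" in small caps
\lettrine{L}{ike this} is how a chapter might open.

% My preference: a raised initial, one line tall
\lettrine[lines=1]{T}{his} is the raised form.

\end{document}
```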

Microtype

The second package, microtype, is one of the more complicated ones. Fortunately, there’s not a lot you have to do to use it. Just including it in your document already gets you something subtly better.

What microtype actually does is hard to explain without delving deep into typography. Basically, it gives you a way to change aspects of a font such as kerning and have those changes affect the entire document. I’ll freely admit that I don’t understand everything it does, nor how it works. And the manual is over 240 pages long, so that won’t change anytime soon. Still, I can’t deny its usefulness.

selnolig

Finally, we have selnolig. This one is a bit obscure, compared to the other two, but it turned out to be exactly what I needed for one very specific scenario. Thus, I thought it made a good illustration of the breadth of TeX packages.

If you look closely at a (good) printed book, you’ll notice that the letters aren’t always distinct. In printing, we use a number of ligatures to join letters, which helps them “flow” together. Letter pairs and triplets like “fl”, “ffi”, and “ft” are often combined like this, though there are cases where it’s recommended that you don’t.

The selnolig package handles all that for you, breaking up the automatic ligatures TeX likes to add in the words where they don’t necessarily belong. It also activates some “historic” ligatures, if you want them, so your book can look like it was written in the 1700s.

Far more important, however, is the ability to selectively disable ligatures that gives the package its name. The font I used in Before I Wake (which I’ll probably continue to use in future books) has a very annoying “Th” ligature. Personally, I just don’t like the way it looks; it makes that combination of letters look too…thin. So I went looking for a way to get rid of it, and I found selnolig. Ridding myself of this pesky addition was a single line of code: \nolig{Th}{T|h}. That tells selnolig to always break “Th” into a separate “T” and “h”, with an invisible space in between. This space stops the ligature from forming, which is exactly what I wanted.
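In context, the whole fix is just a line in the preamble. A sketch (note that selnolig is Lua-based, so you’ll need to compile with LuaLaTeX; the font setup is omitted here):

```latex
% Compile with LuaLaTeX; selnolig relies on Lua internally
\documentclass{memoir}
\usepackage{fontspec}
\usepackage{selnolig}

% Always break "Th" into a separate "T" and "h" with an invisible space
\nolig{Th}{T|h}

\begin{document}
The Thing by the Thicket
\end{document}
```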

Everything else

I haven’t even touched on the myriad other TeX packages out there. There’s not enough time in my life to go through them. Of course, there are quite a few that I couldn’t live without: geometry, fontspec, graphicx, etc. For my aborted attempt at a popular book on mathematics, I used tikz to draw diagrams. I tried to write a book about conlangs almost ten years ago, and I used tipa for that one. Whatever you’re looking for, you’ll probably find it over at CTAN.

And that concludes this little series. Now, it’s back to writing books, rather than writing about writing them. Look for the latest fruit borne from my work with Pandoc, LaTeX, Memoir, and all the rest coming in July.