Let’s end threads

Today’s computers almost all have multiple cores. I don’t even think you can buy a single-core processor anymore, at least not one meant for a desktop. More cores means more things can run at once, as we know, and there are two ways we can take advantage of that. One is easy: run more programs. In a multi-core system, you can have one core running the program you’re using right now, another running a background task, and two more ready in case you need them. And that’s great, although you do have the problem of only one set of memory, one hard drive, etc. You can’t really parallelize those.

The second option uses the additional cores to run different parts of a single application. Threads are the usual way of doing this, though they’ve been around far longer than multi-core processors. They’re more of a general concurrency framework. Put a long-running section of code in its own thread, and the OS will make sure it runs. It won’t block the rest of the program. And that’s great, too. Anything that lets us fully utilize this amazing hardware we have is a good thing, right?

Well, no. Threads are undoubtedly useful. We really couldn’t make modern software without them. But I would argue that we, as higher-level programmers, don’t need them. For an application developer, the very existence of threads should be an implementation detail in the same vein as small string optimizations or reference counting. Here’s why I think this.

Threads are low-level

The thread is a low-level construct. That cannot be denied. It’s closer to the operating system layer than the application layer. If you’re working at those lower levels, then that’s what you want, but most developers aren’t doing that. In a general desktop program or mobile app, threads aren’t abstract enough.

To put it another way: C++ bears the brunt of coders' ire for its "manual" memory management. Unlike their C# or Java counterparts, C++ programmers do need to understand the lifecycle of their data: when it is constructed and destructed, and what happens when it changes hands. Yet few complain about having to keep track of a thread's lifecycle, which is essentially the same problem.
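
To make the parallel concrete, here is a minimal C++ sketch of that manual lifecycle: a raw std::thread has to be explicitly joined (or detached) before its handle is destroyed, much as raw memory has to be explicitly freed.

```cpp
#include <iostream>
#include <thread>

int main() {
    // Creating a thread is easy; forgetting about it is not allowed.
    std::thread worker([] {
        std::cout << "doing background work\n";
    });

    // If `worker` is destroyed while still joinable -- that is, we never
    // call join() or detach() -- the runtime calls std::terminate().
    // Managing this part of the lifecycle is entirely up to us.
    worker.join();
}
```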

Manual threading is error-prone

This comes from threads being low-level because, as any programmer will tell you, the lower you are in the "stack", the more likely you are to unwittingly create bugs. And there may be no bigger source of bugs in multithreaded applications than the threading itself.

This is especially true in C++, but by no means unheard of in higher-level languages: threading leads to all sorts of undefined or ill-defined behavior, race conditions, and seemingly random bugginess. Because threads are scheduled by the OS, they're out of your control. You don't know what's executing when. End a thread too early, for example, and your main program could try reading data that's no longer there. And that can be awfully hard to detect with a debugger, since the very act of running something in debug mode changes the timing and the scheduling.
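
As an illustration, here is a small (and deliberately broken) C++ sketch of the classic race condition: two threads increment the same unsynchronized counter, and the result varies from run to run.

```cpp
#include <iostream>
#include <thread>

int counter = 0;  // shared, unsynchronized state

void bump() {
    for (int i = 0; i < 100000; ++i)
        ++counter;  // DATA RACE: read-modify-write with no synchronization
}

int main() {
    std::thread a(bump);
    std::thread b(bump);
    a.join();
    b.join();

    // Undefined behavior in theory; in practice this usually prints
    // something less than 200000, and the value changes between runs.
    std::cout << counter << '\n';
}
```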

Sharing state sucks

In an ideal situation, one that the functional-programming (FP) types tell us we should all strive for, one section of code won't depend on any state from any other. If that's the case, then you'll never have a problem with memory accesses between threads, because there won't be any.

We code in the real world, however, and the pure-functional approach simply does not work everywhere. But the alternative—accessing data living in one thread from another—is a minefield full of semaphores and mutexes and the like. It’s so bad that processors have implemented “atomic” memory access instructions just for this purpose, but they’re no magic bullet.
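
To show what that minefield looks like in practice, here is a minimal C++ sketch of the two usual escapes, a mutex and an atomic. Note that correctness depends entirely on the programmer remembering to use them at every single access.

```cpp
#include <atomic>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
int guarded_counter = 0;             // protected by m, by convention only
std::atomic<int> atomic_counter{0};  // protected by the type itself

void bump() {
    for (int i = 0; i < 100000; ++i) {
        {
            std::lock_guard<std::mutex> lock(m);  // forget this and it's a race
            ++guarded_counter;
        }
        ++atomic_counter;  // safe, but only for simple operations like this
    }
}

int main() {
    std::thread a(bump), b(bump);
    a.join();
    b.join();
    std::cout << guarded_counter << ' ' << atomic_counter << '\n';  // 200000 200000
}
```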

Once again, this is a function of threads being “primitive”. They’re manually operated, with all the baggage that entails. In fact, just about every problem with threads boils down to that same thing. So then the question becomes: can we fix it?

A better way

Absolutely. Some programming languages are already doing this, offering a full set of async utilities. Generally, these are higher-level functions, objects, and libraries that hide the workhorse threads behind abstractions. That is, of course, a very good thing for those of us working at that higher level, who don't want to be bogged down in the minutiae of threading.

The details differ between languages, but the usual idea is that a program will want to run some sort of a “task” in the background, possibly providing data to it at initialization, or perhaps later, and receiving other data as a result. In other words, an async task is little more than a function that just happens to run on a different thread, but we don’t have to care about that last part. And that is the key. We don’t want to worry about how a thread is run, when it returns, and so on. All we care about is that it does what we ask it to, or else it lets us know that it can’t.
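
Here is a minimal sketch of that style in C++, using std::async and std::future from the standard library: the caller asks for a result and never touches a thread directly.

```cpp
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> data(1000000, 1);

    // Ask for the work to be done; whether a new thread is spawned,
    // pooled, or reused is the library's business, not ours.
    std::future<long long> sum = std::async(std::launch::async,
        [&data] { return std::accumulate(data.begin(), data.end(), 0LL); });

    // ...do other work here while the task runs...

    // get() waits if necessary, hands back the result, and rethrows any
    // exception the task raised: it either does what we asked or it
    // lets us know that it couldn't.
    std::cout << sum.get() << '\n';
}
```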

This async style can cover most of the other thread-based problems, too. Async tasks only run what they’re asked, they end when they’re done, and they give us ways (e.g., futures) to wait for their results before we try to use them. They take care of the entire lifecycle of threading, which we can then treat as a black box. Sharing memory is a bit harder, as we still need to guard against race conditions, but it can mostly be automated with atomic access controls.

A message-passing system like the ones Scala and Erlang use can go even further, effectively isolating the different tasks to an extent resembling that of the pure-FP style. But even in, say, C++, we can get rid of most direct uses of threads, just as C++11 removed most of the need for raw pointers.
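
C++ has no standard channel type, so the Channel class below is purely illustrative, but the idea fits in a few dozen lines: the locking lives inside the abstraction, and the tasks on either side only ever exchange values instead of sharing state.

```cpp
#include <condition_variable>
#include <future>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <utility>

// A bare-bones, hypothetical "channel": tasks communicate by sending
// values, never by touching each other's state directly.
template <typename T>
class Channel {
public:
    void send(T value) {
        {
            std::lock_guard<std::mutex> lock(m_);
            queue_.push(std::move(value));
        }
        cv_.notify_one();
    }

    T receive() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        T value = std::move(queue_.front());
        queue_.pop();
        return value;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> queue_;
};

int main() {
    Channel<std::string> ch;

    // The producer task never shares memory with the consumer; it only
    // sends a message through the channel.
    auto producer = std::async(std::launch::async,
                               [&ch] { ch.send("hello from the other task"); });

    std::cout << ch.receive() << '\n';
    producer.get();
}
```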

On the application level, in 2016, there’s no reason a programmer should have to worry about manual memory allocation, so why should we worry about manual thread allocation? There are better ways for both. Let’s use them.
