Tools and appliances

I was trying to sleep late last night when I had something of an epiphany. I’ve long lamented the dumbing-down of the world, and particularly the tech world. Even in programming, a field that should rely on high intelligence, common sense, and reasoning ability, you can’t get away from it. We’ve reached the point where I can only assume malice, rather than mere incompetence, is behind the push for what I’m calling the Modern Dark Age.

The revelation I had was that, at least for computers, there’s a simple way of looking at the problem and its most obvious solution. To do that, however, we need to stop and think a little more.

The old days

I’m not a millennial. I’m in that weird boundary zone between that generation and the Generation X that preceded it. In terms of attitude and worldview, that puts me in an odd place, and I really don’t "get" either side of the divide. But I grew up with computers. I was one of the first to do so from an early age. I learned BASIC at 8, taught myself C++ at 13, and so on to the dozen or so languages I’m proficient in now at 40. I’ve written web apps, shell scripts, maintenance tools, and games.

In my opinion, that’s largely because I had the chance to experience the world of 90s tech. Yes, my intelligence and boundless curiosity made me want to explore computers in ever-deeper detail, but the time period involved allowed me an exploratory freedom that is just unknown to younger people today.

The web was born in 1992, less than a decade after me. At the time, I was getting an Apple IIe to print my name in an infinite loop, then waiting for the after-recess period when I could play Oregon Trail. Yet the internet as a whole, and the technologies which still provide its underpinnings today, were already mature. When AOL, Compuserve, and Prodigy (among others) brought them to the masses, people in the know had been there for fifteen years or more. (They resented the influx of new, inexperienced rabble so much that "Eternal September" is still a phrase you’ll see thrown about on occasion, a full 30 years after it happened!)

This was very much a Wild West. I first managed to convince my mom that an internet subscription was worth it in 1996, not long before my 13th birthday. At the time, there were no unlimited plans; the services charged a few cents a minute, and I quickly racked up a bill that ran over a hundred dollars a month. But it was worth it.

Nowadays, Google gives people the illusion of all the answers. Back in the day, it wasn’t that simple. Oh, there were search engines: Altavista, Lycos, Excite, Infoseek, and a hundred other names lost to the mists of time. None of these indexed more than a small fraction of the web even in that early era, though. (Some companies tried to capitalize on that by making meta-engines that would search from as many sites as possible.)

Finding things was harder on the 90s web, but that wasn’t the only place to look. Before the dotcom bubble, the internet had multiple, independent communities, many of which were still vibrant. Yes, you had websites. In the days before CSS and a standardized DOM, they were rarely pretty, but the technical know-how necessary to create one—as well as the limited space available—meant that they tended to be more informative. When you found a new site about a topic, it often provided hours of reading.

That screeching modem gave you other options, though. Your ISP might offer a proprietary chat service; this eventually spawned AIM and MSN. AOL especially went all-in on what we now call the "walled garden": chat, news, online trivia games, and basically everything a proto-social network would have needed. On top of that, everyone had email, even if some places (Compuserve is the one I remember best) actually charged for it.

Best of all, in my rose-colored opinion, were the other protocols. These days, everything is HTTP. It’s so prevalent that even local apps are running servers for communication, because it’s all people know anymore. But the 90s had much more diversity. Usenet newsgroups served a similar purpose to what Facebook groups do now, except they did it so much better. Organized into a hierarchy of topics, with no distractions in the form of shared videos or memes, you could have long, deep discussions with total strangers. Were there spammers and other bad actors? Sure there were. But social pressure kept them in line; when it didn’t, you, the user, had the power to block them from your feed. And if you didn’t want to go to the trouble, there were always moderated groups instead.

Beyond that, FTP "sites" were a thing, and they were some of the best places to get…certain files. Gopher was already on its way out when I joined the internet community, but I vaguely remember dipping into it on a few occasions. And while I don’t even know if my area had a local BBS, the dialer software I got with my modem had a few national ones that I checked out. (That was even worse than the AOL per-minute fees, because you were calling long-distance!)

My point here is that the internet of 30 years ago was a diverse and frankly eye-opening place. Ideas were everywhere. Most of them didn’t pan out, but not for lack of trying. Experimentation was everywhere. Once you found the right places, you could meet like-minded people and learn entirely new ways of looking at the world. I’m not even kidding about that. People talk about getting lost in Wikipedia, but the mid 90s could see a young man going from sports trivia to assembly tutorials to astral projection to…something not at all appropriate for a 13-year-old, and all within the span of a few hours. Yes, I’m speaking from personal experience.

Back again

In 2024, we’ve come a long way, and I’m not afraid to state that most of that way was downhill. Today’s internet is much like today’s malls: a hollowed-out, dying husk kept alive by a couple of big corporations selling their overpriced goods, and a smattering of hobbyists trying to make a living in their shadow. Compared even to 20 years ago, let alone 30, it’s an awful place. Sure, we have access to an unprecedented amount of information. It’s faster than ever. It’s properly indexed and tagged for easy searching. What we’ve lost, though, is its utility.

A computer in the 90s was still a tool. Tools are wonderful things. They let us fix, repair, build, create. Look at a wrench or a drill, a nail gun or a chainsaw. These are useful objects. In many cases, they may have a learning curve, but learning unlocks their true potential. The same was true for computers then. Oh, you might have to fiddle with DIP switches and IRQs to get that modem working, but look at what it unlocks. Tweaking your autoexec.bat file so you can get a big Doom WAD running? I did that. Did I learn a useful skill? Probably not. Did it give me a sense of accomplishment when I got it working? Absolutely.

Tools are just like that. They provide us with the means to do things, and the things we can do with them only increase as we gain proficiency. With the right tools, you can become a craftsman, an artisan. You can attain a level of mastery. Computers, back then, gave us that opportunity.

Now, however, computers have become appliances. Appliances are still useful things, of course. Dishwashers and microwaves are undeniably good to have. Yet two aspects set them apart from tools. First, an appliance is, at its heart, a provider of convenience. Microwaves let us cook faster. TVs are entertainment sources. That dryer in the laundry room is a great substitute for a clothesline.

Second, and more important for the distinction I’m drawing here, is that an appliance’s utility is bounded. They have no learning curve—except figuring out what the buttons do—and a fixed set of functions. That dryer is never going to be useful for anything other than drying clothes. There’s no mastery needed, because there’s nothing a mastery of an appliance would offer. (Seriously, how many people even use all those extra cooking options on their microwave?)

Modern computers are the same way. There is no indication that mastery is desirable or useful. Instead, we’re encouraged and sometimes forced into suboptimal solutions because we aren’t given the tools to do better. Even in this century, for example, it was possible to create a decent webpage with nothing more than a text editor. You can’t do that now, though, because browsers won’t even let you access local files from a script. The barrier to entry is thus raised by the need to install a server.
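If you still want to tinker with a local page today, the usual workaround is to spin up a tiny local server. Here’s a minimal sketch, assuming only that Python 3 is installed; the port number is arbitrary:

    # Serve the current directory over HTTP so the browser stops complaining
    # about local files. Much like running "python3 -m http.server" from the
    # folder that holds your page.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    HTTPServer(("localhost", 8000), SimpleHTTPRequestHandler).serve_forever()

It’s a workaround, not a fix, but at least it brings the barrier back down to one command.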

It only gets worse from there. Apple has become famous for the total lockdown of its software and hardware. They had to be dragged into opening up to independent app stores, and they’ve done so in the most obtuse way possible, as a form of protest. Google is no better, and is probably far worse; they’re responsible for the browser restriction I mentioned, as well as killing off FTP in the browser, restricting mobile notifications to only use their paid service, and so on. Microsoft? They’re busy installing an AI keylogger into Windows.

We’ve fallen. There’s no other way to put it. The purpose of a computer has narrowed into nothing more than a way to access a curated set of services. Steam games, Facebook friends, tweets and Tiktoks and all the rest. That’s the internet of 2024. There’s very little information there, and it’s so spread out that it’s practically useless. There’s almost no way to participate in its creation, either.

What’s the solution? I wish I knew. To be honest, I think the best thing to do would be a clean break. Create a new internet for those who want the retro feel. Cut it off from the rest, maybe using Tor or something as the only access point. Let it be free of the corrupting influence of corporate greed, while also making it secure against the evils of progressivism. NNTP, SMTP, FTP…these all worked. Bring them back, or use them as the basis for new protocols, new applications, new tools that help us communicate and grow, instead of being ever further restrained.

Free means free

The news everyone has been talking about this past week was Elon Musk’s acquisition of Twitter. People on the left are apoplectic, people on the right overjoyed, and both of them are utterly wrong. No one, it seems, even remembers what free speech actually means, much less why it’s worth defending. So let’s back up just for a moment and set the record straight.

First, we have the First Amendment:

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

That’s pretty self-explanatory even if you aren’t steeped in the culture of 18th-century America. But a lot of commenters get hung up on those first five words. “Congress shall make no law.” Well, that just means that free speech is only a government thing, and that private companies can do whatever they want, right?

Wrong.

The First Amendment, like the rest of the Bill of Rights, was meant to limit the powers of the federal government. It did not grant us the right to free speech, because a granted right is no right at all. What was given may be taken away at any time, as we saw in Canada a few months ago.

The American Bill of Rights instead recognizes that these rights are inherent in being human. They’re inalienable. Only a tyrant can even try to take them from us.

Thus, we already have the right to free speech, government or no government. Since the First Amendment is part of a social contract between We The People and those we have chosen to represent us, it makes sense that its text would specify who shall make no law. That doesn’t make it the last word on the matter. On the contrary, it’s just the first and most important.


We are born with the absolute, natural right to speak our minds. That was one of the key ideas of the Enlightenment. Modern so-called liberalism, however, believes that rights are contingent upon others’ feelings. You can’t speak your mind, they argue, because it might upset someone you’ve never met. And that has led us down the rabbit hole of being forced to deny basic scientific facts (there are two biological sexes, natural immunity protects us from viruses, etc.) if we want to participate in public discussion.

But isn’t Twitter a private company that can moderate however they want? That’s the next argument from the thought police, and it is indeed correct from a legal standpoint. That doesn’t make it right from a moral one, however. And it is indeed morally wrong.

Twitter, Facebook, Youtube, and similar socially-oriented sites have, in effect, become public spaces. Their sheer size and their cartel-like hold over the internet cause them to attract meaningful discussion and debate, even as their business and engagement models try their best to prevent it. By advertising themselves as open to all, then gaining an audience comprising the majority of American adults, these sites have lost any claim to being “private” in the social sense.

Somewhere like Twitter, in other words, is—rather, should be—the modern-day equivalent of the public square. Because we can’t very well gather a hundred million Americans into a national park, we need a place where all of us can use our inalienable rights, and social media sites should be honored to take on that role. Instead of crowding out rational or traditionalist voices, they should embrace them while providing a place for honest debate.

As for the “company” part of the Left’s objections, remember that every website is owned by someone. With few exceptions outside the .gov space, that someone is a private entity, whether a person, group, or corporation. On top of that, almost all American ISPs are private companies. The backbone routers are privately owned. The domain registrars are private. The root DNS servers are mostly private. How far down the stack do you allow censorship to go? (Interestingly, net neutrality was a liberal cause a few years ago, yet few of them ever made the leap from keeping ISPs content-neutral to doing the same for platforms.)

Finally, while Twitter and others are legally allowed to fight against free speech, the obligations to society they have gained by their position as market controllers have, in effect, made them governments of their own. They represent communities, after all. They have the power to imprison, banish, or execute. Should they not, then, have the same responsibilities as any other government, up to and including a respect for the natural rights of their citizens?


To end, I’d also like to remark on the limits of free speech, because those are much, much farther away than most on either side of the political spectrum care to admit.

The First Amendment is traditionally taken as having only a few limits. Notably, “true threats” are not protected; though the legal definition is, like all legal definitions, hopelessly opaque, the gist is that a true threat is one for which a reasonable person would assume that there is a definite risk. It’s understandable why that isn’t protected. The rights of life, liberty, and the pursuit of happiness are just as inalienable, so direct threats to them can’t be allowed in a just society.

Likewise, direct calls to break a law or to defraud violate the principles of the social contract. The first is a little too strict, in my opinion, as it can also prevent the “petition for a redress of grievances” mentioned at the end of the First Amendment. But at least it makes logical sense.

Last of the major exceptions to free speech protection is the very nebulous category of obscenity. This, of course, should be done away with entirely. While there are some ideas so obscene they should not be spoken aloud, there are none which are so obscene that they should not be allowed to be spoken. The category is too broad, too ill-defined, to justify its continued inclusion.

Note, however, what isn’t on this short list, what does still have protection as free speech. “Hate speech” isn’t forbidden. Doxxing isn’t forbidden. Referring to a man as a man, rather than whatever he believes himself to be, does not meet the criteria to lose protection under the First Amendment. Neither does pointing out that a cold virus doesn’t justify a total lockdown, or that an election’s results were fraudulent, or any of the hundreds of other things for which Twitter’s thought police issued suspensions.

A social platform that, whether intentionally or not, has taken on the role of a public marketplace of ideas must allow those ideas to be traded. The First Amendment is a statement that government may not take away something we were born possessing, but we need the same protection for our speech when these pseudo-governmental corporations are the ones controlling every access point.

(To head off any potential objections, “just start your own Twitter” is not a valid argument. Gab and Parler tried that and found themselves barred from cloud providers, hosting providers, payment processors, and more. The fediverse operates mostly under the radar, but its larger instances still have to worry about being cut off economically.)

I don’t have a Twitter account. I don’t have a Facebook account. I’ve never uploaded a video to Youtube or even visited Snapchat. Why? Because I value my freedom more than the convenience these sites offer. If I can’t speak my mind, then I am not free. I’m a slave to those who control my words. If Elon Musk understands that, and he makes the appropriate changes so that his new acquisition respects the natural rights of its citizens above and beyond the extent required as a private company, then I’ll consider supporting him. But it’ll take more than words to get to that point.

Retirement

I’m writing this from my new laptop.

That alone is something I haven’t said in a very long time. But time marches on, and so does technology. What once was more than enough power has since become anemic, to the point where even simple web browsing was mostly impossible on the old machine. For the last five years, I’d used it less and less, until it became nothing more than a second screen for taking notes. Even that was only for work and my Otherworld series, the only one whose vast body of notes I hadn’t moved.

But it was a good little machine. I’ll give it that. When I bought it in 2007, I didn’t expect it to still be kicking 15 years later. In fact, it almost didn’t. At the time, Ubuntu had a major bug in the way it handled laptop hard drives. Rather than bore you with technical details that are completely pointless in this day of SSDs, I’ll just say this: it was killing them by default. Fortunately, I found out before too much damage had been done. The “load cycle count” rating on the drive capped at 200,000; at that point, it was even money whether it would fail. After 4 months, mine sat at almost 160,000. Today, it’s around 167,000, so maybe there’s a little life left.

The specs, on the other hand, mean that life wouldn’t be a good one. A 100 GB hard drive, “only” 1 GB of memory, and no actual video card to speak of? That’s not much to go on these days, especially when the new system is sitting at 1.7 GB used for just the desktop, a file manager, and a single browser window. I’m into retro, though, so maybe I’ll try to do something with it.

The only constant is change. I hate to see the old machine go, and I know upgrades aren’t always for the best, but I hope this one will allow me to become more mobile again.

After the after

Many, many people believe the world as we know it will come to an end soon. Some of those people happen to be in positions to make such a dire prediction come true. So let’s talk about the apocalypse for a moment, why don’t we?

The cause doesn’t really matter for our purposes. Suffice to say, some catastrophe causes a severe drop in the world’s population. How far? Well, we’re close to 8 billion now, so there’s a long way to fall. Obviously, an I Am Legend scenario of the last remaining man is pretty pointless to consider: humanity ends when he does. For similar reasons, a very small remaining population (up to a few hundred) is essentially extinction-level.

The last time humans numbered only a thousand was about 74,000 years ago, at the genetic bottleneck caused by the eruption of the Toba supervolcano in Indonesia. (By the way, climate catastrophists have been unsuccessfully trying to debunk this theory for years, because the idea that a volcano can cause a drop in global temperatures up to 15°C is awfully hard to reconcile with the idea that people are the sole cause for all the climate’s ills.)

Since that fateful day, we have progressed in an almost monotonic fashion. The only major setbacks in recorded history were the Black Death of the 14th century and the lesser-known plague, volcanic winter, and famine years of the 6th century. But our growth as a species really started getting exponential within the last 200 years: the Industrial Revolution. Around the start of that glorious era, humanity numbered less than a billion.

Let’s assume, then, that our apocalypse knocks us under that threshold and, from there, halfway to our doom. In other words, a population of around 500 million, which is just what the “population control” (i.e., genocide) believers want. This mass slaughter can come from a bioweapon or its supposed “cure”, a nuclear exchange, an asteroid impact, or some combination of factors, but we can assume it happens with no last-minute heroics to stop it.

One day, we wake up to find 7.5 billion human lives have been extinguished. Now what?

The first stage

The survivors will need to, well, survive. We’ve all seen that in television (The Walking Dead); literature (The Decameron, not to mention Genesis!); movies (way too many to name); video games (Fallout, The Last of Us, 7 Days to Die); and novels (my own The Linear Cycle, for the shameless plug). Those who survive the calamity band together, scavenge what they can, and fend off the hordes of aliens or zombies or mutants while trying to rebuild society.

While that makes for great drama, cinema or otherwise, it’s been done to death. No pun intended.

As a fan of worldbuilding, I’m more interested in what comes next. What happens after the post-apocalypse? That is, in a sense, literal worldbuilding, don’t you think?

So I’ve been thinking about that a lot lately. I don’t have time to start yet another story (I already have three that are basically stalled because of my new job!), but it’s still a fun topic to contemplate. What would the next iteration of civilization look like, especially if it retained some continuity?

At present, we’re seeing the beginning of a slide into a kind of neo-feudalism. Take away 93% of the population in one fell swoop, and two things could happen. Either the powers that be consolidate that power, or the hollowing-out of society causes a complete collapse that leads to revolution. The latter has precedent: it’s basically how the first feudal period in Europe came to an end after the Black Death. So many people died (one out of every three, in some places) that labor became scarce, and peasants could essentially name their price. They gained leverage over the nobility, pushing them into irrelevance in a gradual process that took about four centuries.

The modern-day nobles, the men and women who claim the right to rule our lives, don’t call themselves lords or bishops or anything of the sort. And they probably won’t even after the vast majority of humans have fallen victim to whatever disaster awaits. No, they’ll keep calling themselves businessmen, politicians, and celebrities even after capitalism, democracy, and mass media are destroyed.

But feudalism requires a certain population density to be worthwhile. So does industry, as a matter of fact, and our figure of 500 million is actually below that, by all accounts. Our apocalypse will have the side effect (or possibly intended effect) of reversing the Industrial Revolution. Maybe even the Enlightenment before it. The medieval era before that? It’s possible. And we should hope so.

Where to go from here

As I said, I’ve been thinking about this one, so…let’s make a new post series. I haven’t done that in a while. This one won’t be anything like “Let’s Make a Language” or “Magic & Tech” in size. Well, it shouldn’t be, but you know me.

The goal for the posts will be to sketch out one plausible post-post-apocalyptic scenario. I’m not saying that’s what will happen once the Omega Variant kills 90% of the world, and The Climate Crisis (capitals to emphasize how stupid the notion is) does for half the rest. No, this is just a possibility.

Again, my focus isn’t on the immediate aftermath of the disaster. It’s the part that comes after, the true rebuilding of civilization. So you won’t hear me talk about killing zombies or building sunshades or whatever. Let’s say that the disaster itself is in the past. What then? That’s the question I want to ask and answer.

This one’s going to be a little different, though. Or that’s how the idea looks in my head. On top of the posts, which I anticipate to come out once a month or so, I want to do something I’ve never done before: make videos.

Yeah, I know. We’ve all seen Bear Grylls and Les Stroud with their camera crews and helicopters. That’s not what I’m about. No, my goal is to build, not survive. To do that, we need technology. We need to create. And that is what I want to do in these videos. I want to talk about technology, its history, its re-creation. Using the materials you might have in the rebuilding era, what can you make? What will have to change?

Assuming I get that far, I’ll post these on a few platforms. Not Youtube, because I don’t believe a series of informational, scientific videos belongs on a platform as hostile to knowledge and free speech as Google’s video silo. Instead, you’ll (hopefully!) find them on places like Odysee, LBRY.to, and Rumble.

But that’s for the future. Until then, dream with me, and let’s hope that we never have to use the wisdom I’ll be giving.

Introducing Agena

I’ve been sick this past week. Sinus infections are always bad news, but this one has left me so out of sorts that I did something crazy. Okay, crazier than usual for me. Therefore, I give to you Agena.

What is it?

Agena is a server for the Gemini protocol, written in pure Python with no external dependencies. It supports static and dynamic routing, server-side scripting, virtual hosts, and wildcard SSL certificates while being light on resources and relatively easy to configure. It’s named after the Agena target vehicle, the unmanned rendezvous partner of the Gemini space missions, which was itself named after the star Agena, also known as Beta Centauri.

No, seriously, what is it?

Right. Let’s back up a step or two. First, Gemini. As you know, alt-tech is all the rage right now. If it isn’t where you live, it should be. Now that Google, Facebook, Twitter, and the other big players have shown themselves to be in opposition to basic human rights such as free speech and fair elections, while also exercising dictatorial control of their platforms by banning anyone whose ideology doesn’t perfectly align with that of the global elite, we need a change.

That change has already begun. Parler and Gab are two popular sites that have been attacked ruthlessly by Big Tech and the media for the crime of allowing free expression, while the superior alternative of the fediverse (note the link on this page) offers a truly decentralized option for social media.

But evading censorship isn’t the only reason to look at alt-tech. Some people like it because it’s new, because it’s a wide open space for experimentation, the way the internet was until it became overcommercialized in the last generation. (Wow. The internet has been around for generations now. I feel so old.)

Gemini, then, is one of a number of projects that aim to bring back some of that old-school feel. Some of us still remember the glory days of Gopher, Usenet, and FTP sites, days when you didn’t need to download six megabytes of Javascript just to load a web page. Sure, those old platforms were limited, but that was by necessity; Gemini is limited intentionally, replacing the HTTP protocol that underlies what we think of as the Web with a bare-bones alternative focused on content. There’s no CSS, client-side scripting, or even inline hyperlinks! In return, you get blazing speed and austere simplicity.

You get, in other words, something a decent programmer can write in a weekend.
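To show just how small the protocol is, here’s a toy sketch of the request/response cycle in Python. This is not Agena’s code, just an illustration of the basics; it assumes you’ve already generated a cert.pem and key.pem pair, since Gemini requires TLS:

    import socket
    import ssl

    # Gemini in a nutshell: the client opens a TLS connection to port 1965,
    # sends a single URL terminated by CRLF, and gets back "<status> <meta>\r\n"
    # followed by the body.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain("cert.pem", "key.pem")

    with socket.create_server(("localhost", 1965)) as server:
        with context.wrap_socket(server, server_side=True) as tls:
            while True:
                conn, _addr = tls.accept()
                with conn:
                    # Requests are capped at 1024 bytes plus the CRLF.
                    request = conn.recv(1026).decode("utf-8").strip()
                    header = "20 text/gemini\r\n"   # 20 = success
                    body = "# Hello, Geminaut\nYou asked for: " + request + "\n"
                    conn.sendall((header + body).encode("utf-8"))

A real server needs routing, error statuses, and far more care with malformed requests, but that really is most of the wire format.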

The weekend project

Now, I’m not sure I’d be considered a decent programmer, but I did exactly that. To be fair, I needed a little longer, but that’s due to my own failings. I started on Wednesday, then slept. A lot. I haven’t been awake too much in the past few days, and most of my waking hours have been in the dead of night. The headaches and occasional dizziness make it hard to think straight sometimes.

Altogether, it took me about 8 hours of coding over 4 days to get a fully functional server. I could have finished it in a weekend, if I’d been physically capable. Sunday and Monday were for adding features: virtual hosts and server-side scripting, respectively.

No software project is ever complete. There are always bugs to be fixed, features to be added, and refactors to be, uh, refactored. Agena is no exception. I consider it beta quality (I’ve put it at version 0.4.2), and you probably shouldn’t use it for anything serious yet. That said, I’d like to keep working on it when I have the chance.

If you want to check it out, head to the Gitlab repo, where you can download a copy of the source, read the installation instructions, and all the usual Git goodness. It’s not often that I actually release something on the code side of things. It’s even rarer that it’s something I’m proud of.

But this is that time. I really feel a sense of accomplishment. Considering how down I’ve been the past few days, that’s saying something.

The merchants of despair

I am a humanist.

I’ve said that before, but it bears repeating. Now, most people who call themselves humanists do so out of a kind of rebellious nature. They’re agnostics or atheists who disapprove of such labels for whatever reason. Worse, too many tend to be the “militant” sort of atheist, holding their lack of belief with the same dogmatic zeal as the most fundamentalist Christian or Muslim.

I’m not like that at all. Instead, I see humanism as a celebration of humanity and its accomplishments, as well as a belief in its capability for good. We can achieve great things. We have. History is full of human milestones. We’re the only species on Earth (and, as far as we know, in the universe) to domesticate plants and animals, use spoken and written language, harness the power of fire, work metals, build cities, travel to the moon, cure diseases, split the atom, and a thousand other things. Above all, however, we introspect. We philosophize. We are aware of ourselves in a way no other creature has the capability of being.

That’s beautiful, in my opinion. The creations of man, whether mental, physical, or indeed spiritual, are beautiful. While we have made some awful mistakes and inventions, progress is, on the whole, a good thing for everyone involved. The rapid explosion of progress since the two most pivotal eras in history, the Enlightenment and the Industrial Revolution, has given us much to be thankful for. We live longer, healthier lives than our ancestors. We have more material wealth. We understand the world far better than they could have hoped to.

Some people don’t like that, and I honestly can’t understand why. Why are they so dead set on keeping us poor, sick, ignorant, and isolated? A thirst for power explains a lot of irrational behavior, yes, but naked displays of dominance aren’t usually so…insidious. In 2020 alone, we have seen countless examples of human beings arguing for their own extinction, a position not only evolutionarily suspect but morally bankrupt. Yet this position finds backing in the media, on campus, and even in scientific papers. Why? Is there some kind of secret death cult out there?

Until a couple of weeks ago, I would have dismissed that notion as a conspiracy theory on the same level as the Illuminati and Pizzagate. But then I read a book that made everything click.

Humanity’s enemy

Robert Zubrin is best known for his advocacy, often to the point of mania, of manned Mars missions. For over 30 years, he has led the charge in fighting for a permanent human presence on the Red Planet as soon as possible. Growing up, I heard his name on numerous space documentaries, and I still see interviews he has given on the subject. (The series Mars is one example.)

He has other writings, though. In 2011, he published Merchants of Despair, in which he describes an “antihuman” movement that, according to his theory, has been operating for nearly two centuries with the express goal of controlling population by subverting progress.

Numerous examples show the antihumanists in action. Most are concerned with eugenics, the hateful policy of forced sterilization, abortion, and contraception for a specific set of undesirables: blacks, Jews, Indians, Uighurs, the mentally disabled, etc. The targets change depending on who’s doing the extermination, but the principle remains the same. If we don’t stop “those people” from reproducing, eugenicists claim, they’ll overrun us good and pure folk and drag us down to their level. Obviously, any sensible, rational person would reject such notions, but most people are neither rational nor sensible. Thus, population control movements have grown over the past 200 years.

It began with Malthus, who argued incorrectly that the Earth was running out of land for food, and that severe measures to curb population growth had to be implemented right away in order to save our race from extinction. His theory was so wildly inaccurate that it couldn’t even predict past resource use, but he had friends and believers in high places. Malthusian principles created the Irish Potato Famine in the 1840s, then racked up an even greater death toll in 1870s India. In both cases, the country in question was a net exporter of food at the time, yet the British government forced residents to starve in order to prevent some mythical calamity.

Fast forward to the 1930s, and we know what happened. The Nazis were the gold standard for eugenics, raising genocidal population control to an art form. Following the same principles as Malthus, Hitler argued that Germany would eventually be too crowded to feed itself. But now there was an added wrinkle, because science could “prove” that some races were more degenerate than others. And wouldn’t you know it, but Hitler’s enemies just happened to number among them!

Before the true horrors of the Holocaust were revealed—or even started, for that matter—many Americans were wholeheartedly in favor. Herbert Hoover attended the Second International Congress of Eugenics in 1921, seven years before he would be elected President of the United States and plunge our country into the Great Depression. J. P. Morgan was there, too. Representing the British (45 years after the India debacle, mind you) was Charles Darwin’s own son.


That was before World War II. With the end of the war, the opening of the death camps, and the subsequent Nuremberg trials, the whole world got to see what eugenics really looked like. So you’d think that would be the end of it, right?

Wrong.

Now, instead of open calls for extermination, those advocating population control became more subtle in their efforts. The best way to stop overpopulation, they decided, wasn’t to kill people who were already here, but to stop them from being born in the first place. Thanks to some politicking from such notables as Robert McNamara, forced sterilization became a condition of US foreign aid to Third World countries. Doing it at home (mostly for criminals and mental patients) was legal until the 1970s. The entire Vietnam War can be seen as a eugenics experiment, as those in power took the slogan “Better Dead Than Red” literally.

Abortion as a political and population-control tool also saw its birth in this era. Planned Parenthood formed out of the eugenics movement, and its original goal of choice carefully neglected the possibility of choosing to have children. Around the same time, one Communist Party official in China read up on these efforts and got the great idea of limiting all families in his country to one child each. Never mind the disastrous consequences for the fabric of society. Isn’t running out of food worse?

Yet the biggest crime to lay at the feet of the antihumanists is, in my opinion, environmentalism. In the past decade, and especially in the past four years, we’ve seen more radical forms of the Green movement grow like a cancer in our society, but they were there from the start. The Sierra Club has deep ties to eugenics, for instance.

Hatred

Here’s where it gets interesting. And evil, in my opinion.

We’ve all seen it this year. “Nature is healing,” they say, as they show weeds growing through cracks in concrete or wild animals overrunning a city street. “We are the virus,” they claim, often adding that the Wuhan coronavirus (most likely created in a Chinese lab, so not natural at all) is some kind of divine wrath for our excesses. How a virus with a fatality rate of around 0.1% is supposed to be apocalyptic is beyond me, but you can’t expect logical consistency from some people.

Such extreme environmentalism has been around for over half a century, and Zubrin argues that it shows a more modern form of antihumanism. Instead of calling for deaths or preventing births, green eugenicists want to use economic and government pressure to make having children financially unbearable. To do this, they have blocked the progress of technologies, inventions, and medicines that save lives. We must not help people, they argue, because then those people will breed. Better if they die sick and miserable than be fruitful and multiply.

DDT was the first casualty, according to Zubrin. The endless campaigning against nuclear power is another front in this fight. Though he was writing with incomplete information, he even targets global warming, and here is where the last piece fell into place for me.

We know that the fears of global warming are overrated. Even top climate activists such as Michael Shellenberger (Apocalypse Never) admit this. Current climate trends are well within the limits of human civilization. Sea levels aren’t rising rapidly; the Maldives archipelago, to take one example, was supposed to be completely underwater by 2018, but they’ve now announced that they’re building new airports in anticipation of heavier tourism. Add in the work done by sleuths such as Tony Heller, who illustrate how temperature records are being manipulated to claim accelerated warming, and you get the feeling that somebody somewhere isn’t telling the whole truth.

Earth isn’t going to become a second Venus because we drive too much. In fact, as Zubrin illustrated nine years ago, the slight overall warming predicted through the 21st century is actually beneficial. It increases arable land, and actual climate shifts may open up even more. We’re seeing that today, with record crop yields all over North America.

Those who fail to learn from history will find that it repeats itself. 2020 America is in real danger of turning into a mirror of 1845 Ireland. We have plenty of food. We have plenty of jobs. We have plenty of toilet paper. Yet government control and overblown fears are preventing us from using these resources properly. They’re just saying it’s because of a virus instead of overpopulation by “inferior” races. That’s all.

But the result is the same. Lives are being lost. Not to starvation, as then, but to other preventable factors. Suicidal depression, of course, is one I’m intimately familiar with. Yet we also need to look at the back side of population control. How many children weren’t born because of lockdown restrictions? How many couples didn’t get a chance to meet because they were under effective house arrest? How many relationships ended (or are on the verge of ending, or never really got going in the first place) due to the loss of a job or the failure to find one?

Whatever that number is, it’s not zero. I know for a fact.

Humanity’s hope

That’s why I’m a humanist. I see these problems in the world, and I realize how many of them are of our own making. Worse yet, they’re easily fixed. We have the means to give food to everyone on Earth. We have ways of making power literally too cheap to meter. There is more than enough wealth to go around.

We shouldn’t have to force women into tubal ligation surgery out of some fear that they’ll have too many kids. We shouldn’t distribute condoms as business cards or demand IUD implants as conditions for government aid.

We shouldn’t claim that a one-degree change in temperature is going to wipe out all life on Earth. We shouldn’t argue that the cleanest, safest form of energy production we have is actually nothing more than a way to make bombs. We shouldn’t pack millions of people into unsanitary cities, then deny them treatment for the diseases that inevitably occur.

We can be better, but only if we embrace progress. Not progressivism, but progress itself, the liberal ideals of the Enlightenment which state that, as man is the only animal with the capability for reason, it falls to us to use that reason to shape the world, and society, in a positive way.

To do otherwise is to advocate for death on an unimaginable scale. Earth’s population is roughly 7.7 billion at present. With our current technology, we can easily feed, house, and care for at least twice that. But the goals of the environmentalists, the globalists, and others who, I now see, have been aligned with the idea of eugenics all this time, are to reduce our numbers to pre-Industrial levels. The problem with that is simple to recognize: technology allows our carrying capacity to increase. By banning those advances which produce more food or lead to longer, healthier lives, that capacity drops precipitously.

They would kill not the six million of Nazi fame, but over six billion. Some claim the goal is inscribed on the monument known as the Georgia Guidestones: a population not to exceed 500 million. Think about that. To reach that figure, we would first have to let over 90% of the world die. Then, those who survive would be forcibly limited to replacement-level reproduction. How many children would never be born in such a world? How many artists, statesmen, inventors, scientists, friends, and lovers would never take their first breath?

These are our enemies. They must be, for those who value life must always stand against those who preach only death.

Now I understand the cult-like behavior I see so often in the world. It really is a cult. It’s a cult of despair, destruction, and death. Looked at in that light, the lockdowns, the Great Reset, Chinese propaganda, Antifa, global warming fearmongers, and so many other things make sense. They all share one thing in common: they’re antihuman.

Future past: computers

Today, computers are ubiquitous. They’re so common that many people simply can’t function without them, and they’ve been around long enough that most can’t remember a time when they didn’t have them. (I straddle the boundary on this one. I can remember my early childhood, when I didn’t know about computers—except for game consoles, which don’t really count—but those days are very hazy.)

If the steam engine was the invention that began the Industrial Revolution, then the programmable, multi-purpose device I’m using to write this post started the Information Revolution. Because that’s really what it is. That’s the era we’re living in.

But did it have to turn out that way? Is there a way to have computers (of any sort) before the 1940s? Did we have to wait for Turing and the like? Or is there a way for an author to build a plausible timeline that gives us the defining invention of our day in a day long past? Let’s see what we can see.

Intro

Defining exactly what we mean by “computer” is a difficult task fraught with peril, so I’ll keep it simple. For the purposes of this post, a computer is an automated, programmable machine that can calculate, tabulate, or otherwise process arbitrary data. It doesn’t have to have a keyboard, a CPU, or an operating system. You just have to be able to tell it what to do and know that it will indeed do what you ask.

By that definition, of course, the first true computers came about around World War II. At first, they were mostly used for military and government purposes, later filtering down into education, commerce, and the public. Now, after a lifetime, we have them everywhere, to the point where some people think they have too much influence over our daily lives. That’s evolution, but the invention of the first computers was a revolution.

Theory

We think of computers as electronic, digital, binary. In a more abstract sense, though, a computer is nothing more than a machine. A very, very complex machine, to be sure, but a machine nonetheless. Its purpose is to execute a series of steps, in the manner of a mathematical algorithm, on a set of input data. The result is then output to the user, but the exact means is not important. Today, it’s 3D graphics and cutesy animations. Twenty years ago, it was more likely to be a string of text in a terminal window, while the generation before that might have settled for a printout or paper tape. In all these cases, the end result is the same: the computer operates on your input to give you output. That’s all there is to it.

The key to making computers, well, compute is their programmability. Without a way to give the machine a new set of instructions to follow, you have a single-purpose device. Those are nice, and they can be quite useful (think of, for example, an ASIC cryptocurrency miner: it can’t do anything else, but its one function can more than pay for itself), but they lack the necessary ingredient to take computing to the next level. They can’t expand to fill new roles, new niches.

How a computer gets its programs, how they’re created, and what operations are available are all implementation details, as they say. Old code might be written in Fortran, stored on ancient reel-to-reel tape. The newest JavaScript framework might exist only as bits stored in the nebulous “cloud”. But they, as well as everything in between, have one thing in common: they’re Turing complete. They can all perform a specific set of actions proven to be the universal building blocks of computing. (You can find simulated computers that have only a single available instruction, but that instruction can construct anything you can think of.)

Basically, the minimum requirements for Turing completeness are changing values in memory and branching. Obviously, these imply actually having memory (or other storage) and a means of diverting the flow of execution. Again, implementation details. As long as you can do those, you can do just about anything.
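To see how little that really takes, here’s a sketch of a one-instruction machine in Python, the kind of single-instruction computer mentioned above. Its only operation subtracts one memory cell from another and branches when the result drops to zero or below, which covers both requirements at once. The program shown is a made-up example, not anything historical:

    def subleq(mem, pc=0):
        """Run a SUBLEQ (subtract and branch if <= 0) program until the
        program counter goes negative, then return the final memory."""
        while pc >= 0:
            a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
            mem[b] -= mem[a]                      # the only way to change memory
            pc = c if mem[b] <= 0 else pc + 3     # the only way to branch
        return mem

    # One instruction: subtract cell 6 from itself (zeroing it), then jump
    # to address -1, which halts the machine.
    print(subleq([6, 6, -1, 0, 0, 0, 42]))

Everything else, from loops to multiplication, can be built out of chains of that single instruction. It’s tedious, but it’s possible, and that’s the whole point of Turing completeness.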

Practice

It should come as no surprise that Alan Turing was the one who worked all that out. Quite a few others made their mark on computing, as well. George Boole (1815-64) gave us the fundamentals of computer logic (which is why we refer to true/false values as boolean). Charles Babbage (1791-1871) designed the precursors to programmable computers, while Ada Lovelace (1815-52) used those designs to create what is considered to be the first program. The Jacquard loom, named after Joseph Marie Jacquard (1752-1834), was a practical display of programming that influenced the first computers. And the list goes on.

Earlier precursors aren’t hard to find. Jacquard’s loom was a refinement of older machines that attempted to automate weaving by feeding a pattern into the loom that would allow it to move the threads in a predetermined way. Pascal and Leibniz worked on calculators. Napier and Oughtred made what might be termed analog computing devices. The oldest object that we can call a computer by even the loosest definition, however, dates back much farther, all the way to classical Greece: the Antikythera mechanism.

So computers aren’t necessarily a product of the modern age. Maybe digital electronics are, because transistors and integrated circuits require serious precision and fine tooling. But you don’t need an ENIAC to change the world, much less a Mac. Something on the level of Babbage’s machines (had he ever finished them, which he never quite got around to doing) could trigger an earlier Information Age. Even nothing more than a fast way to multiply, divide, and find square roots—the kind of thing a pocket calculator can do instantly—would advance mathematics, and thus most of the sciences.

But can it be done? Well, maybe. Programmable automatons date back about a thousand years. True computing machines probably need at least Renaissance-era tech, mostly for gearing and the like. To put it simply: if you can make a clock that keeps good time, you’ve got all you need to make a rudimentary computer. On the other hand, something like a “hydraulic” computer (using water instead of electricity or mechanical power) might be doable even earlier, assuming you can find a way to program it.

For something Turing complete, rather than a custom-built analog solver like the Antikythera mechanism, things get a bit harder. Not impossible, mind you, but very difficult. A linear set of steps is fairly easy, but when you start adding in branches and loops (a loop is nothing more than a branch that goes back to an earlier location), you need to add in memory, not to mention all the infrastructure for it, like an instruction pointer.

If you want digital computers, or anything that does any sort of work in parallel, then you’ll probably also need a clock source for synchronization. Thus, you may have another hard “gate” on the timeline, because water clocks and hourglasses probably won’t cut it. Again, gears are the bare minimum.

Output may be able to go on the same medium as input. If it can, great! You can do a lot more that way, since you’d be able to feed the result of one program into another, a bit like what functional programmers call composition. That’s also the way to bring about compilers and other programs whose results are their own set of instructions. Of course, this requires a medium that can be both read and written with relative ease by machines. Punched cards and paper tape are the historical early choices there, with disks, memory, and magnetic tape all coming much later.
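That composition idea is easy to picture if you think of each “program” as a function from text to text; the little helpers below are invented purely for illustration:

    # Two toy "programs" and a composition helper: the output of one becomes
    # the input of the next, the way one machine's punched output could be
    # fed straight into another machine's reader.
    def compose(f, g):
        return lambda data: f(g(data))

    shout = lambda text: text.upper()
    clip = lambda text: text[:10]

    pipeline = compose(clip, shout)
    print(pipeline("census returns, 1890"))   # prints "CENSUS RET"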

Thus, creating the tools looks to be the hardest part about bringing computation into the past. And it really is. The leaps of logic that Turing and Boole made were not special, not miraculous. There’s nothing saying an earlier mathematician couldn’t discover the same foundations of computer science. They’d have to have the need, that’s all. Well, the need and the framework. Algebra is a necessity, for instance, and you’d also want number theory, set theory, and a few others.

All in all, computers are a modern invention, but they’re a modern invention with enough precursors that we could plausibly shift their creation back in time a couple of centuries without stretching believability. You won’t get an iPhone in the Enlightenment, but the most basic tasks of computation are just barely possible in 1800. Or, for that matter, 1400. Even if using a computer for fun takes until our day, the more serious efforts it speeds up might be worth the comparatively massive cost in engineering and research.

But only if they had a reason to make the things in the first place. We had World War II. An alt-history could do the same with, say, the Thirty Years’ War or the American Revolution. Necessity is the mother of invention, so it’s said, so what could make someone need a computer? That’s a question best left to the creator of a setting, which is you.

Future past: steam

Let’s talk about steam. I don’t mean the malware installed on most gamers’ computers, but the real thing: hot, evaporated water. You may see it as just something given off by boiling stew or dying cars, but it’s so much more than that. For steam was the fluid that carried us into the Industrial Revolution.

And whenever we talk of the Industrial Revolution, it’s only natural to think about its timing. Did steam power really have to wait until the 18th century? Is there a way to push back its development by a hundred, or even a thousand, years? We can’t know for sure, but maybe we can make an educated guess or two.

Intro

Obviously, knowledge of steam itself dates back to the first time anybody ever cooked a pot of stew or boiled their day’s catch. Probably earlier than that, if you consider natural hot springs. However you take it, they didn’t have to wait around for a Renaissance and an Enlightenment. Steam itself is embarrassingly easy to make.

Steam is a gas; it’s the gaseous form of water, in the same way that ice is its solid form. Now, ice forms naturally if the temperature gets below 0°C (32°F), so quite a lot of places on Earth can find some way of getting to it. Steam, on the other hand, requires us to take water to its boiling point of 100°C (212°F) at sea level, slightly lower at altitude. Even the hottest parts of the world never reach temperatures that high, so steam is, with a few exceptions like that hot spring I mentioned, purely artificial.

Cooking is the main way we come into contact with steam, now and in ages past. Modern times have added others, like radiators, but the general principle holds: steam is what we get when we boil water. Liquid turns to gas, and that’s where the fun begins.

Theory

The ideal gas law tells us how an ideal gas behaves. Now, that’s not entirely appropriate for gases in the real world, but it’s a good enough approximation most of the time. In algebraic form, it’s PV = nRT, and it’s the key to seeing why steam is so useful, so world-changing. Ignore R, because it’s a constant that doesn’t concern us here; the other four variables are where we get our interesting effects. In order: P is the pressure of a gas, V is its volume, n is how much of it there is (in moles), and T is its temperature.

You don’t need to know how to measure moles to see what happens. When we turn water into steam, we do so by raising its temperature. By the ideal gas law, increasing T must be balanced out by a proportional increase on the other side of the equation. We’ve got two choices there, and you’ve no doubt seen them both in action.

First, gases have a natural tendency to expand to fill their containers. That’s why smoke dissipates outdoors, and it’s why that steam rising from the pot gets everywhere. Thus, increasing V is the first choice in reaction to higher temperatures. But what if that’s not possible? What if the gas is trapped inside a solid vessel, one that won’t let it expand? Then it’s the backup option: pressure.

A trapped gas that is heated increases in pressure, and that is the power of steam. Think of a pressure cooker or a kettle, either of them placed on a hot stove. With nowhere to go, the steam builds and builds, until it finds relief one way or another. (With some gases, this can come in the more dramatic form of a rupture, but household appliances rarely get that far.)

As pressure is force per unit of area, and there’s not a lot of area in the spout of a teapot, the rising temperatures can cause a lot of force. Enough to scald, enough to push. Enough to…move?
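For a rough sense of the numbers, here’s a back-of-the-envelope sketch assuming a fixed amount of steam sealed in a rigid vessel, so that only the pressure can respond to the temperature; the figures are illustrative, not measurements:

    # With n and V held constant, the ideal gas law reduces to P1/T1 = P2/T2
    # (temperatures in Kelvin).
    def pressure_after_heating(p1_atm, t1_c, t2_c):
        t1_k = t1_c + 273.15
        t2_k = t2_c + 273.15
        return p1_atm * t2_k / t1_k

    # Steam sealed at 100°C and 1 atm, then heated to 300°C:
    print(round(pressure_after_heating(1.0, 100, 300), 2))   # about 1.54 atm

A real boiler climbs far faster than that, because heating the water also evaporates more of it (raising n in the equation), but the direction of the effect is the same: trap the gas, add heat, and the pressure has nowhere to go but up.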

Practice

That is the basis for steam power and, by extension, many of the methods of power generation we still use today. A lot of steam funneled through a small area produces a great amount of force. That force is then able to run a pump, a turbine, or whatever is needed, from boats to trains. (And even cars: some of the first automobiles were steam-powered.)

Steam made the Industrial Revolution possible. It made most of what came after possible, as well. And it gave birth to the retro fad of steampunk, because many people find the elaborate contraptions needed to haul superheated water vapor around to be aesthetically pleasing. Yet there is a problem. We’ve found steam-powered automata (e.g., toys, “magic” temple doors) from the Roman era, so what happened? Why did we need over 1,500 years to get from bot to Watt?

Unlike electricity, where there’s no obvious technological roadblock standing between Antiquity and advancement, steam power might legitimately be beyond classical civilizations. Generation of steam is easy—as I’ve said, that was done with the first cooking pot at the latest. And you don’t need an ideal gas law to observe the steam in your teapot shooting a cork out of the spout. From there, it’s not too far a leap to see how else that rather violent power can be utilized.

No, generating small amounts of steam is easy, and it’s clear that the Romans (and probably the Greeks, Chinese, and others) could do it. They could even use it, as the toys and temples show. So why didn’t they take that next giant leap?

The answer here may be a combination of factors. First is fuel. Large steam installations require metaphorical and literal tons of fuel. The Victorian era thrived on coal, as we know, but coal mining on that scale is a comparatively recent development; the Romans barely touched the stuff. They could get by with charcoal, but you need a lot of that, and they had much better uses for it. It wouldn’t do to cut down a few acres of forest just to run a chariot down to Ravenna, even for an emperor. Nowadays, we can make steam by many different methods, including renewable variations like solar boilers, but that wasn’t an option back then. Without a massive fuel source, steam—pardon the pun—couldn’t get off the ground.

Second, and equally important, is the quality of the materials that were available. A boiler, in addition to eating fuel at a frantic pace, also has some pretty exacting specifications. It has to be built strong enough to withstand the intense pressures that steam can create (remember our ideal gas law); ruptures were a deadly fixture of the 19th century, and that was with steel. Imagine trying to do it all with brass, bronze, and iron! On top of that, all your valves, tubes, and other machinery must be built to the same high standard. A leak doesn’t just let steam escape; it bleeds away efficiency.

The ancients couldn’t pull that off. Not for lack of trying, mind you, but they weren’t really equipped for the rigors of steam power. Steel was unknown, except in a few special cases. Rubber was an ocean away, on a continent they didn’t know existed. Welding (a requirement for sealing two metal pipes together so air can’t escape) probably wasn’t happening.

Thus, steam power may be too far into the future to plausibly fit into a distant “retro-tech” setting. It really needs improvements in a lot of different areas. That’s not to say that steam itself can’t fit—we know it can—but you’re not getting Roman railroads. On a small scale, using steam is entirely possible, but you can’t build a classical civilization around it. Probably not even a medieval one, at that.

No, it seems that steam as a major power source must wait until the rest of technology catches up. You need a fuel source, whether coal or something else. You absolutely must have ways of creating airtight seals. And you’ll need a way to create strong pressure vessels, which implies some more advanced metallurgy. On the other hand, the science isn’t entirely necessary; if your people don’t know the ideal gas law yet, they’ll probably figure it out pretty soon after the first steam engine starts up. And as for finding uses, well, they’d get to that part without much help, because that’s just what we do.

Future past: Electricity

Electricity is vital to our modern world. Without it, I couldn’t write this post, and you couldn’t read it. That alone should show you just how important it is, but if not, then how about anything from this list: air conditioning, TVs, computers, phones, music players. And that’s just what I can see in the room around me! So electricity seems like a good start for this series. It’s something we can’t live without, but its discovery was relatively recent, as eras go.

Intro

The knowledge of electricity, in some form, goes back thousands of years. The phenomenon itself, of course, began in the first second of the universe, but humans didn’t really get to looking into it until they started looking into just about everything else.

First came static electricity. That’s the kind we’re most familiar with, at least when it comes to directly feeling it. It gives you a shock in the wintertime, it makes your clothes stick together when you pull them out of the dryer, and it’s what causes lightning. At its source, static electricity is nothing more than an imbalance of electrons righting itself. Sometimes, that’s visible, whether as a spark or a bolt, and it certainly doesn’t take modern convenience to produce such a thing.

The root electro-, source of electricity and probably a thousand derivatives, originally comes from Greek. There, it referred to amber, that familiar resin that occasionally has bugs embedded in it. Beyond that curiosity, amber also has a knack for picking up a static charge, much like wool and rubber. It doesn’t take Ben Franklin to figure that much out.

Static electricity, however, is one-and-done. Once the charge imbalance is fixed, it’s over. That can’t really power a modern machine, much less an era, so the other half of the equation is electric current. That’s the kind that runs the world today, and it’s where we have volts and ohms and all those other terms. It’s what runs through the wires in your house, your computer, your everything.

Theory

The study of current, unlike static electricity, came about comparatively late (or maybe it didn’t; see below). It wasn’t until the 18th century that it really got going, and most of the biggest discoveries had to wait until the 19th. The voltaic pile—which later evolved into the battery—electric generators, and so many more pieces that make up the whole of this electronic age, all of them were invented within the last 250 years. But did they have to be? We’ll see in a moment, but let’s take a look at the real world first.

Although static electricity is indeed interesting, and not just for demonstrations, current makes electricity useful, and there are two ways to get it: make it yourself, or extract it from existing materials. The latter is far easier, as you might expect. Most metals are good conductors of electricity, and there are a number of chemical reactions which can cause a bit of voltage. That’s the essence of the battery: two different metals, immersed in an acidic solution, will react in different ways, creating a potential. Volta figured this much out, so we measure the potential in volts. (Ohm worked out how voltage and current are related by resistance, so resistance is measured in ohms. And so on, through essentially every scientist of that age.)
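As a quick sketch of the relationship Ohm pinned down (the figures here are arbitrary examples of mine, not historical measurements): current is just voltage divided by resistance.

```python
# Ohm's law: V = I * R, so I = V / R. Power dissipated is P = V * I.
# (Numbers are arbitrary, just to show the shape of the relationship.)

def current_amps(voltage_v: float, resistance_ohms: float) -> float:
    return voltage_v / resistance_ohms

volts = 3.0        # a small voltaic pile, say
load_ohms = 10.0   # some resistive load
amps = current_amps(volts, load_ohms)
print(amps)           # 0.3 A
print(volts * amps)   # 0.9 W dissipated in the load
```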

Using wires, we can even take this cell and connect it to another, increasing the amount of voltage and power available at any one time. Making the cells themselves larger (greater cross-section, more solution) creates a greater reserve of electricity. Put the two together, and you’ve got a way to store as much as you want, then extract it however you need.
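In rough arithmetic, that scaling looks like the sketch below. It’s a simplification that ignores internal resistance and other real-world losses, and the numbers are invented for illustration: cells in series add their voltages, while bigger cells (or parallel strings of them) add to the reserve.

```python
# Rough battery-bank arithmetic, ignoring internal resistance and losses.

def bank_voltage(cell_voltage: float, cells_in_series: int) -> float:
    return cell_voltage * cells_in_series

def bank_capacity(string_capacity_ah: float, strings_in_parallel: int) -> float:
    return string_capacity_ah * strings_in_parallel

# Ten 0.8 V cells in series, with three such strings wired in parallel:
print(bank_voltage(0.8, 10))     # 8.0 V
print(bank_capacity(1.5, 3))     # 4.5 Ah, if each string holds 1.5 Ah
```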

But batteries eventually run dry. What the modern age needed was a generator. To make that, you need to understand that electricity is but one part of a greater force: electromagnetism. The other half, as you might expect, is magnetism, and that’s the key to generating power. A moving or changing magnetic field induces an electrical potential, which in turn drives a current. And one of the easiest ways to arrange that is by spinning a magnet inside a coil of wire. (As an experiment, I’ve seen this done with one of those hand-cranked pencil sharpeners, so it can’t be that hard to construct.)

One problem is that the electricity this sort of generator makes isn’t constant. Its potential, assuming you’ve got a circular setup, follows a sine-wave pattern from positive to negative. (Because you can have negative volts, remember.) That’s alternating current, or AC, while batteries give you direct current, DC. The difference between the two can be very important, and it was at the heart of one of science’s greatest feuds—Edison and Tesla—but it doesn’t mean too much for our purposes here. Both are electric.
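If you’d like to see that swing rather than take my word for it, here’s a small sketch (peak voltage and frequency are arbitrary placeholders of mine) of the sine-wave output a rotating generator produces, as opposed to the flat output of a battery.

```python
import math

# A rotating-loop generator produces a sinusoidal voltage:
#   v(t) = V_peak * sin(2 * pi * f * t)
# where V_peak depends on field strength, coil area, turns, and rotation speed.
# (Values below are arbitrary, for illustration only.)

def ac_voltage(t_s: float, v_peak: float = 10.0, freq_hz: float = 50.0) -> float:
    return v_peak * math.sin(2 * math.pi * freq_hz * t_s)

# Sample one 50 Hz cycle: the voltage swings positive, back through zero,
# then negative -- unlike a battery, which holds a steady (DC) value.
for i in range(5):
    t = i * 0.005                                   # every 5 ms
    print(f"{t * 1000:4.0f} ms: {ac_voltage(t):6.2f} V")
```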

Practice

What does it take to create electricity? Is there anything special about it that had to wait until 1800 or so?

As a matter of fact, not only was it possible to have something electrical before the Enlightenment, but it may have been done…depending on who you ask. The Baghdad battery is one of those curious artifacts that has multiple plausible explanations. Either it’s a common container for wine, vinegar, or something of that sort, or it’s a 2000-year-old voltaic cell. The simple fact that this second hypothesis isn’t immediately discarded answers one question: no, nothing about electricity requires advanced technology.

Building a rudimentary battery is so easy that it almost has to have been done before. Two coins (of different metals) stuck into a lemon can give you enough voltage to feel, especially if you touch the wires to your tongue, like some people do with a 9-volt. Potatoes work almost as well, but any fruit or vegetable whose interior is acidic can provide the necessary solution for the electrochemical reactions to take place. From there, it’s not too big a step to a small jar of vinegar. Metals known in ancient times can get you a volt or two from a single cell, and connecting them in series nets you even larger potentials. It won’t be pretty, but there’s absolutely nothing insurmountable about making a battery using only technology known to the Romans, Greeks, or even Egyptians.
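To hang a hedged number on that claim, here’s a crude estimate built from modern standard electrode potentials. These are textbook values; a real lemon or vinegar cell delivers somewhat less voltage and only a trickle of current, but the order of magnitude holds.

```python
# Standard electrode potentials in volts (modern textbook values).
# A real fruit or vinegar cell gives less than this ideal figure, and only
# a tiny current, but it shows the scale of what ancient metals could do.
STANDARD_POTENTIALS = {
    "copper": +0.34,   # Cu2+/Cu
    "iron":   -0.44,   # Fe2+/Fe
    "tin":    -0.14,   # Sn2+/Sn
}

def ideal_cell_voltage(cathode: str, anode: str) -> float:
    return STANDARD_POTENTIALS[cathode] - STANDARD_POTENTIALS[anode]

# Copper and iron -- metals the Romans certainly had:
v_cell = ideal_cell_voltage("copper", "iron")
print(round(v_cell, 2))       # ~0.78 V per cell
print(round(v_cell * 6, 2))   # ~4.7 V from six such cells in series
```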

Generators are a bit harder. First off, you need magnets. Lodestones work; they’re naturally magnetized, possibly by lightning, and their curious properties were first noticed as early as 2500 years ago. But they’re rare and hard to work with, as well as probably being full of impurities. Still, it doesn’t take a genius (or an advanced civilization) to figure out that these can be used to turn other pieces of metal (specifically iron) into magnets of their own.

Really, then, creation of magnets needs iron working, so generators are beyond the Bronze Age by definition. But they aren’t beyond the Iron Age, so Roman-era AC power isn’t impossible. They may not understand how it works, but they have the means to make it. The pieces are there.

The hardest part after that would be wire, because you need it to shuttle current from place to place. Copper is a nice balance of cost and conductivity, which is why we use it so much today; gold is far more ductile, and silver is a slightly better conductor, but both are too expensive to use for much of anything. The latter two, however, have been seen in wire form since ancient times, which means that ages past knew the methods. (Drawn wire didn’t come about until the Middle Ages, but it’s not the only way to do it.) So, assuming that our distant ancestors could figure out why they needed copper wire, they could probably come up with a way to produce it. It might not have rubber or plastic insulation, but they’d find something.
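For a sense of why copper hits that sweet spot, here’s a quick comparison. The resistivity figures are modern handbook values, and the wire geometry is an arbitrary example of mine: resistance is resistivity times length divided by cross-sectional area.

```python
import math

# Resistance of a round wire: R = rho * L / A.
# Resistivities in ohm-metres (modern handbook values, around 20 degC).
RESISTIVITY = {
    "silver": 1.59e-8,
    "copper": 1.68e-8,
    "gold":   2.44e-8,
    "iron":   9.7e-8,
}

def wire_resistance_ohms(metal: str, length_m: float, diameter_mm: float) -> float:
    area_m2 = math.pi * (diameter_mm / 1000 / 2) ** 2
    return RESISTIVITY[metal] * length_m / area_m2

# Ten metres of 1 mm wire in each metal:
for metal in RESISTIVITY:
    print(f"{metal:6s}: {wire_resistance_ohms(metal, 10.0, 1.0):.3f} ohm")
```

Silver edges out copper, gold lags behind, and iron is far worse; ancient copper would have been less pure than today’s, so the real numbers would be a bit higher across the board, but the ranking wouldn’t change.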

In conclusion, then, even if the Baghdad battery is nothing but a jar with some leftover vinegar inside, that doesn’t mean electricity couldn’t be used by ancient peoples. Technology-wise, nothing at all prevents batteries from being created in the Bronze Age. Power generation might have to wait until the Iron Age, but you can do a lot with just a few cells. And all the pieces were certainly in place in medieval times. The biggest problem after making the things would be finding a use for them, but humans are ingenious creatures. They’d work something out.

Future past: Introduction

With the “Magic and Tech” series on hiatus right now (mostly because I can’t think of anything else to write in it), I had the idea of taking a look at a different type of “retro” technological development. In this case, I want to look at different technologies that we associate with our modern world, and see just how much—or how little—advancement they truly require. In other words, let’s see just what could be made by the ancients, or by medieval cultures, or in the Renaissance.

I’ve been fascinated by this subject for many years, ever since I read the excellent book Lost Discoveries. And it’s very much a worldbuilding pursuit, especially if you’re building a non-Earth human culture or an alternate history. (Or both, in the case of my Otherworld series.) As I’ve looked into this particular topic, I’ve found a few surprises, so this is my chance to share them with you, along with my thoughts on the matter.

The way it works

Like “Magic and Tech”, this series (“Future Past”; you get no points for guessing the reference) will consist of an open-ended set of posts, mostly coming out whenever I decide to write them. Each post will be centered on a specific invention, concept, or discovery, rather than the much broader subjects of “Magic and Tech”. For example, the first will be that favorite of alt-historians: electricity. Others will include the steam engine, various types of power generation, and so on. Maybe you can’t get computers in the Bronze Age—assuming you don’t count the Antikythera mechanism—but you won’t believe what you can get.

Every post in the series will be divided into three main parts. First will come an introduction, where I lay out the boundaries of the topic and throw in a few notes about what’s to come. Next is a “theory” section: a brief description of the technology as we know it. Last and longest is the “practice” part, where we’ll look at just how far we can turn back the clock on the invention in question.

Hopefully, this will be as fun to read as it is to write. And I will get back to “Magic and Tech” at some point, probably early next year, but that will have to wait until I’m more inspired on that front. For now, let’s forget the fantasy magic and turn our eyes to the magic of invention.