Nemesis Among the Machines

At LessWrong, Virgil Kurkjian has a fascinating post, “Leto among the Machines”, analyzing Frank Herbert’s “Dune” chronicles from a “rationalist” perspective, specifically the worldbuilding point of the Butlerian Jihad (the elimination of computers from human civilization). In particular, he shows how the Butlerian Jihad makes perfect sense once one considers the logic of bureaucracy in our real life, and Goodhart’s Law in general (when a measure becomes a target, it ceases to be a good measure). Honestly, go and read it. Having digested it for some time, I at last feel up to offering some thoughts on the topic.

Kurkjian recommends against “automation”, by which he means something closer to what I and most other people would call “bureaucracy”. He believes it would be a better use of intellectually gifted human beings’ time to make these decisions themselves than to devote endless man-hours to generating rule sets for other men (or machines) to use. Which is a fair point.

The dark Theory of Bureaucracy

However, I think he’s off the scent by focusing on what the intellectually gifted should do with themselves. For the problem he describes is not automation, not legible signals, not guidelines as such or even bureaucracies. No, the problem is that the bureaucratic mentality is that of an automaton, unwilling to exercise any independent human judgment that we’d expect from a reasonable person. As Scott Aaronson once wrote:

In my own experience struggling against bureaucracies that made life hellish for no reason, I’d say that about 2/3 of the time my quest for answers really did terminate at an identifiable “empty skull”: i.e., a single individual who could unilaterally solve the problem at no cost to anyone, but chose not to. It simply wasn’t the case, I don’t think, that I would’ve been equally obstinate in the bureaucrat’s place, or that any of my friends or colleagues would’ve been. I simply had to accept that I was now face-to-face with an alien sub-intelligence—i.e., with a mind that fetishized rules made up by not-very-thoughtful humans over demonstrable realities of the external world.

“The target of the Jihad was a machine-attitude as much as the machines” indeed. As long as this machine-attitude persists it won’t really matter whether geniuses spend their time automating systems, directly running them, or whatever else you can think of. While problems could always use more intellectual firepower directed at them, a lack of intellect per se isn’t the main issue; the root cause is the bureaucratic mentality.

We’re told bureaucracy is a system of rules and guidelines that, when applied by well-intentioned functionaries, still causes suffering anyway, but that it’s not really their fault because they did their best. In more cynical readings the bureaucrats look after their own careers instead, but the causes are still conceived to be innocent. This is one variant of “mistake theory”. But there’s just one problem: as Scott Aaronson implies in the quote above, it’s not really true.

The cynical reading isn’t nearly cynical enough. Why does the bureaucrat choose not to help you? The short answer is that the bureaucrat tends to be a man of morally defective character, someone as petty as he is sadistic. He chooses not to help you for the simplest of reasons: he doesn’t want to help you, he takes pleasure in making you suffer. So why should he help you?

Sometimes the simplest Solutions are the Best

A simple reason would be that he’d be punished if he didn’t. In a realm of personal authority, where everyone knows the decision is his and his alone, that just might happen; but what about in a realm of bureaucratic authority? There nobody really knows whose decision anything is; even those nominally in charge can cite, or even outright fabricate, some rule or guideline that allegedly ties their hands. In this way they avoid accountability, offloading the blame for their sadism into the æther. Bureaucracy didn’t invent the excuse of “I was just following orders”, but only in bureaucracy can it be used by those who gave the orders!

So the solution is obvious: take that chance away from them. Restore within our institutions a hierarchy of superiors and subordinates who are entrusted to exercise independent human judgment; make the lines of responsibility clear, overt, and, yes, legible, so everyone knows who is responsible for what. Every decision should have someone to credit if it goes right, someone to blame if it goes wrong, someone with the authority to make it go right or wrong. Show zero tolerance for the practice of offloading choices into the æther; whenever a choice seems to be offloaded, it is still the emanation of a human mind, merely cloaked in so many layers of obfuscation that not even a genius can pierce them.

The Computer: a Penumbra of the human Soul

Indeed, software, machine learning, and artificial intelligence itself are all emanations of human minds; these devices ultimately make no decisions on their own, for they are programmed by human beings. Frank Herbert seemed more aware of this fundamental fact and its implications than today’s scholars are! In his Dune books, Reverend Mother Gaius Helen Mohiam says of the universe before the Butlerian Jihad:

Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.

Think of this anytime somebody tells you we should have an “unbiased” artificial intelligence arrest, prosecute, or judge people, among many other functions currently performed by human beings (albeit ones that are very machine-like now!). Behind the computer there is the man, better hidden in his bureaucratic labyrinth than ever before.

That doesn’t mean we should reject automation, reject computers, or reject machines; no. Just as in the hierarchy we place subordinates under the supervision of a flesh-and-blood man, so machines are placed under such supervision. Machines that use legible signals to make decisions are only a problem when they are divorced from the human part of the equation; when machines make mistakes they need humans to correct them, humans to take into account all the illegible qualities of the world. Machines, and the rules and guidelines that function as invisible machines, cannot deliver the ultimate ends we humans prize in our endeavors, for those are matters of the spirit that cannot be reduced even to mathematical formulae, let alone legible signals.

Man need not make every decision using the full faculties of his mind; that is boredom, that is tedium, and he perceives it to be a waste because that’s the truth: it is a waste of his time, a waste of his potential. But he does need to be there, with untrammeled power to make independent decisions whenever it’s expedient to the spirit (not the letter, the spirit) of the effort to do so. In particular, to assume his proper role in life, man must be cultivated to think that exercising his own judgment is a good thing.

Out with Bureaucratic Man, in with Heroic Man

In our bureaucratic societies we don’t even do that for our so-called leaders anymore, let alone the general population. This has hollowed out both our most able people’s capacity to lead and the opportunities we give any able man to lead, a squeeze from two sides that has left societies the world over bereft of human initiative, of creativity, of the capacity to experiment with anything genuinely new or different from what came before.

Sure, bureaucratic man can, albeit with ever-greater inefficiency, develop incremental improvements on what came before, even repurpose whole swaths of society on a good day, but true creation is beyond him. That is the preserve of the divine, as embodied in heroic man.

The Virtue of Randomness

For the machines heroic man will oversee in the course of his fully-automated, luxury-capitalist lifestyle of aristocratic leisure, I furthermore propose that we use not only legible signals but also randomness, a trait largely missing from the programs, rules, and guidelines we employ today. If a signal is too legible and too easily gamed, much of the time we don’t even have to resort to a human being making the decision directly; we just have to make the decision more random.

Vary at random the criteria for accepting the signal; don’t have a fixed threshold. Introduce random selection, like my own proposal for college admission lotteries. You could even introduce an element of randomness into human decision-making: if you’re hiring people, select those who meet some minimal level of interestingness to you, and then put them into a lottery, as in the sketch below.
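As a minimal sketch of the idea (in Python, with hypothetical applicant names, scores, threshold, and jitter values that are mine for illustration, not anything from Kurkjian’s post): each candidate faces a slightly different cutoff, and the final pick among those who clear it is a pure lottery.

```python
import random

def lottery_select(candidates, base_threshold=0.6, jitter=0.1, n_slots=3, seed=None):
    """Pick winners from candidates who clear a randomly varied threshold.

    `candidates` maps name -> legible score in [0, 1]. Each applicant faces
    a slightly different cutoff, so gaming the signal to land exactly at a
    known fixed bar no longer pays off; the final choice among those who
    qualify is a pure lottery.
    """
    rng = random.Random(seed)

    # Everyone who clears their own randomly jittered threshold enters the pool.
    pool = [
        name
        for name, score in candidates.items()
        if score >= base_threshold + rng.uniform(-jitter, jitter)
    ]

    # Draw the winners by lot from the qualified pool.
    return rng.sample(pool, k=min(n_slots, len(pool)))


if __name__ == "__main__":
    # Hypothetical applicants and scores, purely for illustration.
    applicants = {"Ada": 0.82, "Ben": 0.61, "Cho": 0.59, "Dee": 0.74, "Eli": 0.55}
    print(lottery_select(applicants, seed=42))
```

The particular numbers don’t matter; the design choice does. With a jittered threshold there is no single, exactly known bar to game, and the lottery at the end takes the final choice out of any one functionary’s hands.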

Randomness isn’t just an aid to machines. Modern scholars have argued that the great virtue of oracles, divination, and the like was not revealing the will of the gods, but rather introducing an independent and quasi-random element in the decision-making process, preventing bias from overwhelming judgment as thoroughly as it would otherwise. Of course, a mischievous pagan would insist that randomness itself is the medium through which the divine will is revealed.

So powerful is randomness that picking names out of a hat beats every other method of selecting government officials, as the inarguably great results achieved by ancient Athens and the other classical democracies under that system, called sortition, demonstrate. This is likely because representative panels of ordinary citizens represent ordinary people’s interests better than any other kind of body does; this principle could, and I’d argue should, be employed anywhere we could use a voice to represent the masses, which is a lot of places throughout society.

Confucianism? Blech.

Kurkjian says we could stand to be more Confucian, but the fact that bureaucracy, with all its attendant evils, metastasized in China under Confucianism faster than in any other civilization militates against the idea that Confucianism would do us any good. If any Chinese philosophy would do us good it would be Taoism, though even then I’d leaven it with very heavy doses of our very own Nietzscheanism and Objectivism. Let a thousand flowers bloom, let’s experiment, let’s work with the flow of human nature instead of against it, let’s peer into the unknowns of existence and uncover the secrets of the universe, let’s achieve new and great things. Let’s exalt mountaineering, spaceflight, exploring, going off into the woods to make art or to think over a problem really hard. Cast off the shackles of life-denial and embrace life-affirmation.

Our Butlerian Future?

Which will ultimately be necessary for the continued progress of mankind, no matter what else happens. As this fascinating comment on “Leto among the Machines” points out, in Frank Herbert’s “Dune” universe it seems that even after ten thousand years of development, computers failed to achieve true volition; man ultimately failed to create a reasoning machine. At some point computers became as sophisticated, as “smart”, as they were ever going to get, presenting man with a dead end: no more progress based on the computer was possible. So man rejected the path of death by comfortable stagnation and instead chose the path of life by uncomfortable growth, removing the machines from the equation and starting over from scratch by cultivating new abilities within the human mind, abilities which could replicate what the machines provided, but with the soul offering potential for growth without limits.

This was the Butlerian Jihad, the painful removal of dependency upon a single source of prosperity, i.e. the computer, just as the Atreides Jihad ten thousand years later painfully removed dependency on spice. Virtuous, optionality is; cultivating it, no matter how uncomfortable it may seem at first, avoids putting a ceiling on your progress and averts the world of pain that is the hydraulic despot cutting off your water supply (or spice supply, as the case may be 😉). This is one of the more underrated lessons of “Dune”. Notice also that “Dune” itself doesn’t take a stance against computers per se; the Butlerian prohibition relaxes substantially over the course of Frank Herbert’s original six novels. The jihad only became necessary, tragically necessary perhaps, when computers became the sole source of man’s growth; once he grew enough to have other options, their use wasn’t so dangerous anymore.

All the talk of the Singularity twenty years back and artificial superintelligences now might make such a future seem like a boutique vision of a man who didn’t want to write a far-future story about computers, and perhaps that supposition is correct. I suspect, however, that in the fullness of time Frank Herbert will be proven right: that computers will make excellent tools, but tools is all they’re ever going to be. Intelligence, reason, true volition will forever elude them. In the end the machine, the bureaucracy, the system, is only a simulacrum; the individual human mind, powered by the human soul, is the true superintelligence that awaits us.
