Rex Nihilo series: Are Governors on Autonomy and Bounded Rationality a Scam?

Robert Kroese’s Rex Nihilo series is an over-the-top comedy featuring a dim-witted galactic con man, Rex Nihilo, and his faithful, put-upon robot Sasha as they go from one misadventure to another. The story is told from the point of view of Sasha, the Self-Arresting near-Sentient Heuristic Android- yes, that acronym should spell Sansha, but her name is just one of Sasha’s curses in life. She is super-intelligent, so the robot designers inserted a governor that causes her to reboot if she has an original thought- hence the “self-arresting” and “near-sentient” in her name. She hilariously faints anytime she has a thought that could save the day from whatever trouble Rex has gotten them into.

Which raises the question- would you design robots with governors? Or is the concept of governors on autonomy just a fast-talking con man’s scam designed to allay fears about the robot uprising? A governor on intelligence is a popular trope in science fiction: the idea that an artificial general intelligence (AGI) robot has to be inhibited- enslaved, essentially- so we can get useful work from it, or to keep it from taking over the world, killing humans, etc. An example is William Ledbetter’s book Level 5, where a Morgan Stanley-type financial house creates an AI to do stock market predictions that is so powerful it has to be kept on a single computer with an air gap from the internet- yeah, we know how THAT goes. From a scientific perspective, “maybe,” but it would be like designing a Ferrari, then installing a speed controller that never lets it go over 75 mph. Kind of a waste.

Robots, like people, act with what Herb Simon, a Nobel laureate and one of the founders of the field of AI, called bounded rationality. Bounded rationality often gets confused with a governor, since it sounds like we are externally placing bounds on rationality. Simon used the term to mean that agents- humans, software agents, robots, any entity that can sense, plan, and act- are as rational as their resources allow them to be. As an economist, he argued that people can’t be modeled as perfect decision makers because they don’t have all the information, may not have time to think through complex situations, and have varying degrees of education and intelligence. People, and presumably robots, do the best they can with what they have: they do act rationally, but because of these natural bounds, they may not be optimally rational in some larger sense of what an economist thinks is optimal.
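Simon’s classic illustration of bounded rationality is satisficing: instead of exhaustively searching for the optimal choice, a resource-limited agent accepts the first option that is good enough. Here is a minimal sketch of that idea- the function names, the landing-site example, and the specific numbers are all hypothetical, purely for illustration:

```python
def satisfice(options, score, good_enough, budget):
    """Return the first option whose score meets the aspiration level,
    examining at most `budget` options (the agent's resource bound).
    Falls back to the best option seen if nothing satisfices."""
    best = None
    best_score = float("-inf")
    for option in options[:budget]:     # bounded search, not exhaustive
        s = score(option)
        if s >= good_enough:            # "good enough" -> stop deliberating
            return option
        if s > best_score:
            best, best_score = option, s
    return best                         # do the best you can with what you have

# Hypothetical example: a robot picking a landing site by estimated safety.
sites = [("crater", 0.4), ("plain", 0.8), ("ridge", 0.95)]
choice = satisfice(sites, score=lambda s: s[1], good_enough=0.75, budget=2)
print(choice)  # ('plain', 0.8) - satisfices before ever evaluating "ridge"
```

Note that the agent is perfectly rational within its budget; the "ridge" option isn’t forbidden by a governor, it’s simply never reached given the resources at hand.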

Individuals have an upper bound on their intelligence- often measured by their IQ- and can educate themselves and be better informed, but they aren’t changing their innate IQ, no matter how hard they try. Similarly, robots are built with an IQ, and they aren’t going to suddenly become smarter, not even with machine learning, unless there is some major change in their programming and they gain access to additional computational power and other resources. Just like Flowers for Algernon, they aren’t going to become super-intelligent by accident- otherwise some Homo sapiens in the past 200,000 years would have stumbled across it.

And they aren’t going to be accidentally super-intelligent in every aspect of intelligence. Think of robots from the multiple intelligences perspective- a robot can have an extraordinarily high logical-mathematical IQ, but be very low in all the other IQ modalities.

Robots aren’t going to take over the world unless a) they are programmed to and b) someone gives them access to near-infinite resources. They don’t need a governor added to them; one is already in there by virtue of how the robot is designed.
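In other words, the bound is a design parameter, not a retrofit. A toy sketch of the difference- all class, method, and parameter names here are hypothetical, just to make the point concrete:

```python
class Robot:
    """A robot whose deliberation is bounded by construction:
    it can evaluate at most `compute_budget` plans per decision."""

    def __init__(self, compute_budget):
        self.compute_budget = compute_budget  # fixed at design time

    def decide(self, plans, utility):
        # The robot considers only as many plans as its budget allows.
        # No bolted-on governor needed: the bound IS the design.
        considered = plans[:self.compute_budget]
        return max(considered, key=utility)

rover = Robot(compute_budget=3)
plans = ["wait", "scan", "drive", "world_domination"]
# With a budget of 3, the fourth plan is never even evaluated.
print(rover.decide(plans, utility=len))  # drive
```

A governor, by contrast, would be an extra layer that inspects and vetoes an unbounded planner’s output- exactly the kind of after-the-fact patch the Sasha joke skewers.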

Back to Rex and Sasha. It’s not critical to read the books in order of publication (as opposed to their internal numbering), since the first couple of books are just OK before the series really clicks into first gear, but reading in order is nice for continuity.

The first one is Starship Grifters, with its play on Heinlein’s Starship Troopers to remind us how much Rex is NOT like the all-American good guys in Heinlein’s future history universe. Would Heinlein have enjoyed Starship Grifters? Maybe, though judging from Alec Nevala-Lee’s Astounding: John W. Campbell, Isaac Asimov, Robert A. Heinlein, L. Ron Hubbard, and the Golden Age of Science Fiction, a sense of humor didn’t seem to be his strong point. That’s a great book- check out my interview with him.

Aye, Robot- as in the pirate’s “Aye, Captain,” argh argh argh- doesn’t have much to do with Asimov’s I, Robot, but it does feature Rex and Sasha as pirates, with Rex about as much a pirate as Pirate Steve in Dodgeball. Arrggg, arrggg, arrggg! It has a delightful Spaceballs feel to it.

Kroese really hits his stride with the last two books. The Wrath of Cons manages to send up Jasper Fforde’s Thursday Next literary series and, of course, sneaks in Star Trek: The Wrath of Khan. Well played, sir! And if you haven’t read the Thursday Next series, you’re missing a treat. Thursday is a literary detective- hard to explain, but it works. It’s a great way to put your memories of the classics- Jane Eyre, Shakespeare, Dickens, Poe, pretty much everyone- to work.

But I digress! Out of the Soylent Planet is pretty fun too, with everyone worried that the slop they are eating is people processed by the large corporation trying to rule the galaxy. Nope, the source of the food stock is even more disturbing. We’re in a comedy mash-up of Philip K. Dick meets The Day of the Triffids. Now that’s a scary thought- scarier than a robot uprising!

Putting governors on robot intelligence or autonomy sounds like a great solution to problems that don’t exist. Robots aren’t going to stage a robot uprising if they aren’t programmed to have that capacity. Take 2001: A Space Odyssey: both the movie and the book state that HAL’s neurosis was predictable; it’s just that no one thought of that particular situation. Plus, Mission Control needed HAL to be that smart to run the mission in case the humans died. HAL was as smart as he needed to be. Sure, he turned deadly, but that was Mission Control’s fault for giving him conflicting directives. Building HAL and then trying to cobble together a system that would prevent HAL from acting differently would be about as successful as Asimov’s Three Laws of Robotics- which, most people forget, never work as intended. The odds of us thinking of a governor that would prevent a situation we didn’t think of are about the same as the odds of us thinking of that situation: nil. We’re not good at that. If you don’t believe that we’re bad at imagining scenarios, read the book Sources of Power: How People Make Decisions by noted psychologist Gary Klein. It summarizes his field studies in an easily digestible form. And creating a robot slave that is so intelligent we have to put a geas on it, as in the Alchemy Wars, is the epitome of bad programming, regardless of ethics.

Just design the robot to do what you want it to do!

But I honestly believe that what YOU want to do is to read the Rex Nihilo series and enjoy Sasha rebooting her way through screwball adventures with Rex.

Books discussed:

The Rex Nihilo series by Robert Kroese: Starship Grifters, Aye, Robot, The Wrath of Cons, and Out of the Soylent Planet; Level 5 by William Ledbetter; Astounding by Alec Nevala-Lee; the Thursday Next series by Jasper Fforde; and Sources of Power by Gary Klein.