Balancing Accounts (short story): A Robot Space Ship Shows How Autonomy and Initiative Work

April 1, 2019

 

Balancing Accounts is a solid short story, sort of a combination of the hard science of Vernor Vinge, the economic capitalism of Charles Stross' Saturn's Children, and the robot POV of Martha Wells' Murderbot Diaries (though there is no snarky humor- this is more like ART's POV). The story follows Annie, an autonomous space tug, as she deals with a mysterious cargo. Besides being a good read, the story is a great illustration of the concepts of autonomy and initiative.

 

A major misperception about robots is that full autonomy means robots can decide to take over the world. That's because one definition of autonomy, the common one, comes from social science, where it means the free will of the individual, the right and ability to politically govern one's self or one's citizens, manifest destiny, that sort of thing.

 

There's another definition. Engineers use the word "autonomy" to mean "look Ma, no hands"- we use it to mean automatic processes. The profession started using the words "autonomy" and "self-governing" back in 1788, when James Watt used a centrifugal governor to automatically stop steam production before a steam engine exploded. A centrifugal governor is one that uses spinning flyballs: the more steam being produced, the faster the output shaft turns, and the faster the flyball mechanism spins. The faster the mechanism spins, the higher centrifugal force raises the arms holding the metal balls. When the arms get high enough, they trip a switch, hopefully set to go off before an explosion. So while this is called "self-governing" and "regulating", it is not self-governing in the political meaning of the word, nor does it imply that the mechanism or robot has any initiative.
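The governor's feedback loop is simple enough to sketch in a few lines of code. This is a toy model of the mechanism described above- the specific numbers (steam rate, cutoff angle) are invented for illustration and have nothing to do with Watt's actual design:

```python
# Toy model of a flyball governor's feedback loop. All constants are
# made up for illustration; this is not Watt's actual design.

CUTOFF_ANGLE = 60.0  # arm angle (degrees) that trips the steam cutoff

def simulate(ticks=20):
    steam = 0.0
    angles = []
    for _ in range(ticks):
        steam += 1.0                        # boiler keeps raising steam output
        shaft_speed = steam * 5.0           # more steam -> shaft spins faster
        arm_angle = min(90.0, shaft_speed)  # faster spin -> arms rise higher
        if arm_angle >= CUTOFF_ANGLE:       # switch trips before an "explosion"
            steam = 0.0                     # steam production is cut off
        angles.append(arm_angle)
    return angles

angles = simulate()
assert max(angles) <= CUTOFF_ANGLE  # bounded: the pressure never runs away
```

The point the sketch makes is the same as the text: the mechanism is "self-regulating" only in the sense that a fixed feedback rule keeps a variable within a designed bound. Nothing in the loop chooses its own objective.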

 

But, as with the robots in Balancing Accounts, you might want a robot to have some initiative. In the story, the robot transport ships and service robots are working around Saturn, far out of effective communication range to constantly ask a human supervisor what to do. Annie has to have some flexibility in order to adapt to new situations, make quick decisions, and, in this case, maximize the money she makes for the investors who built her- which is the reason she exists.

 

One of the earliest explorations of robot initiative was for the RoboCup domain, the domain of robot soccer. Just as with a human team, you'd like the individual robots to be able to opportunistically cover for a damaged or malfunctioning robot or to adapt to a never-before-seen play. This means a robot may have to jump out of a standard script and into a different one, or to react in ways that do not require explicitly enumerating every possibility in advance- because no one has thought of that specific situation. That ability to change or go off script (within limits) is initiative. People tend to associate initiative with intelligence and use those terms interchangeably, which can add to the confusion over robots and general artificial intelligence.

 

Colman and Han proposed five types, or levels, of initiative. The lowest level is no autonomy, where robots strictly play their role on the soccer team and follow the script for reaching their objective. The problem with no autonomy is that the programmer has to have thought of every possible situation. The next two levels are process autonomy and systems-state autonomy, where the robots are allowed to determine how to reach the objective, for example, sometimes going to the left instead of the right rather than following a script by rote. In intentional autonomy, the individual robots are allowed to go outside of their roles in order to better meet the objectives of the team. For example, a robot might expand its area of coverage for defense because another defender is damaged. The highest level of initiative is constraint autonomy, where the robot can discard, or relax, constraints on the plan for reaching the individual or team goal. For example, "My script says I should cover robot X and robot Y and that I should chase the ball if it becomes available, but I can't do all three well at once. I'm going to chase the ball because that has the highest value for the team objective." If a robot has constraint autonomy, it can take over the world, right? No. Fortunately, constraint autonomy is still subject to the robot's bounds: we're discarding constraints, not objectives, so unless the original objective was to take over the world, it wouldn't.
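The soccer example above can be sketched as a few lines of code. This is a hypothetical illustration, not Colman and Han's formalism: the tasks, their values, and the capacity limit are all invented. The key property it shows is that the robot relaxes constraints on its plan while the team objective itself stays fixed:

```python
# Hypothetical sketch of constraint autonomy: the robot may drop
# constraints on its plan, but never the objective itself. Tasks,
# values, and the capacity limit are invented for illustration.

OBJECTIVE = "maximize value to the team"  # fixed: never discarded

constraints = [            # (task, value toward the team objective)
    ("cover robot X", 2),
    ("cover robot Y", 3),
    ("chase the ball", 5),
]

CAPACITY = 1  # the robot can only do one task well at once

def relax(constraints, capacity):
    """Keep the highest-value tasks; discard (relax) the rest."""
    ranked = sorted(constraints, key=lambda c: c[1], reverse=True)
    return [task for task, _ in ranked[:capacity]]

chosen = relax(constraints, CAPACITY)
assert chosen == ["chase the ball"]  # highest value for the objective
```

Notice that nothing in `relax` can touch `OBJECTIVE`- which is the sketch's version of the point in the text: discarding constraints is not the same as discarding objectives.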

 

Annie has constraint autonomy. She exhibits the highest level of initiative, and what pretty much everyone would agree is really intelligent behavior, but she is bounded by the task or mission of making a return on investment for her sponsors. So unless we explicitly give robots the mission of taking over the world, we aren't going to have robots take the initiative to stage an uprising. Or to become overly protective of humans as in Jack Williamson's With Folded Hands.

 

Balancing Accounts is in the We, Robots anthology edited by Allan Kaster. If you want to learn more about initiative, there's a chapter in Robotics Through Science Fiction: Artificial Intelligence Explained Through Six Classic Robot Short Stories that covers this; it uses Asimov's short story Catch That Rabbit to discuss levels of initiative. You also might want to read Vernor Vinge's short story Long Shot, which is in RTSF:AIETSCRSS too- Ilse the robot spaceship has a lot in common with Annie.

 

- Robin
