Shining Armor (short story): Are Some People Better Than Others at Controlling Robots?
Shining Armor is a fun little story set in the near future in a town in a sort of Third World-ish country. The town is special because it has a guardian robot: a giant piloted mecha similar to a Jaeger from Pacific Rim. The robot has been parked for decades. Corporate bad guys want to acquire the local land and displace the townies, either by buying them out or by good old-fashioned violence.
Cue the robot!
The twist of the story is that access is linked to genetics: you have to be in the right family to control a particular robot, because controlling the robot requires special skills. We discover this along with the young boy who is tagging along after his grandfather.
Is it so farfetched that our genes might determine if we can operate sophisticated robots? The answer now is “yes, that’s farfetched, and by applying good human-robot interaction principles any reasonably qualified person who is trained should be able to handle a robot.” But the answer in the 1970s through the 1990s, before the field of human-robot interaction was established in 2001, was “no, some people may just be better than others, and we should find those people and create a Robot Academy to refine their skills.”
Here’s what was happening in the 70s, 80s, and 90s. Roboticists were toiling away making advances in hardware and software: a new actuator here, a new software behavior there. Each of these advances, and this is still true today, requires specialized knowledge in that particular field of hardware or software. But robots are complex systems, and so many robot products represent a couple of great advances while the rest of the system is hacked together. So the robot system was overly complicated and jury-rigged to highlight the new advances, not really designed like a system. To make matters worse, roboticists made, and continue to make, the assumption that a user interface is something you tack on at the end of the design process. A great example of this is the DARPA Robotics Challenge; you can read the analysis that I performed for DARPA here. You had humanoid robots requiring teams of 28 engineers, plus a couple of trained operators, just to get the robot through one of the many tasks. If you build robots like that, you in fact need a Ph.D. in robotics plus a high aptitude for 3D visualization and spatial reasoning just to keep up with what the robot is doing and could do at any moment.
Essentially, we thought robot operators needed to be Ender Wiggin. Or, barring that, a group of innately talented and highly trained people like astronauts.
Or that we roboticists, since we were the only ones who could run the robots we built, were actually versions of astronauts and Ender. Maybe not as physically adept, but very special and heroic and elite. The world just needed more of us.
To quote Dr. Evil, “yeah, right.”
In 2001, robotics caught up to the rest of engineering and acknowledged that robotics really required two things that it had heretofore been lacking: systems engineering (i.e., design the system, don’t just design a cool part and then tack on other components and call it a system) and decent user interfaces. User interface research, part of the very large field of human-computer interaction (HCI), think Apple, had already helped overcome problems with complexity, displaying too much information, poor representations of 3D layouts, and so on in airplanes, nuclear power and chemical plants, and even MP3 players. It was clear roboticists hadn’t taken those courses or looked at the textbooks, but we couldn’t put it off any longer.
But, to be fair, robots needed more than HCI. The challenge is that robots require more than a typical user interface on a desktop or a laptop. Your laptop is not a physically situated agent that directly moves and manipulates the environment (we’re not counting ordering things from Amazon). Plus a robot generally provides less information to the operator than a high-end first-person shooter video game, yet the stakes are much more real. As a result, working through a robot increases cognitive workload and fatigue. These are all problems for an operator working behind the robot. Now, what about someone in front of the robot or working side by side with it, say a person trying to follow a robot tour guide or a victim being found by a robot? That’s a very different set of interactions than a user interface and touches on social interactions. Social interactions include things like how people react negatively to shiny black, badass-looking robots, no matter how cool the roboticists think they look. If you want people to play nice with a robot, you need the robot to look cuddly and non-threatening. The Robocop look is great for a police robot, not so good for a healthcare robot for grandma.
The point is that, in 2001, we all began to realize that it was time for robotics to catch up with human-computer interaction, but that robotics was significantly different from HCI. And thus the field of human-robot interaction was created at a workshop sponsored by DARPA and the National Science Foundation. I was a co-organizer, and that workshop remains one of the pinnacles of my career.
So by 2010 the Robot Academy idea had faded away. User interfaces hadn’t necessarily improved, as roboticists still put them last on the To Do list, but we all know that with good human-robot interaction design (even though we might believe it is someone else’s problem), a mundane person should be able to use or engage with a robot.
And that mundane someone might be a mischievous grandfather. Or not. You need to read the story to find out! You can find Shining Armor in the We, Robots anthology edited by Allan Kaster. It’s a quick fun read.