Would You Trust An Intelligent Robot?
Providence delivers a science fiction primer on why assured autonomy and explainable AI are critical to the success of real world robots.
Why would you trust an intelligent robot to make the right decision? That’s the existential question in Providence, a 2020 science fiction novel by Max Barry, where the answer is delightfully ambiguous. Providence has jumped onto my best-books list: it nails the science, it is thought-provokingly dark, and it is hard to put down because you don’t know how it will end. It is about a ship with AI (2001: A Space Odyssey) fighting alien swarms (Ender’s Game or Starship Troopers). The ship is either brilliant and playing a long game that the human crew cannot understand, or it is making a disastrous mistake that will get everyone killed, as an earlier version of the AI did.
What do roboticists know about people trusting robots? The Science Robotics article describes how Providence gets it right: trust would depend on
· transparency, or explainable AI: does the AI system let a person see how it is working and why it is making decisions, or is it a totally opaque black box? Deep learning and neural networks are notoriously not transparent.
· the testing and evaluation procedures used for assured autonomy; but what if the manufacturer says those procedures are intellectual property? And Gary Klein, in Sources of Power, has shown that simulation tends to miss black swan events.
· the human’s previous experiences with AI (passive trust calibration), their interactions with that particular AI (active trust calibration), and the overall human-robot teaming relationship (relationship equity), following the longitudinal trust model by de Visser, Peeters, and Jung (2020)
Using the longitudinal trust model, I created a graphic that translates the model into a spectrum and shows where each of the four crew members falls.
If you’d like to learn more about trust in robotics, I have a
· section on trust in Introduction to AI Robotics, second edition