
Artificial Condition (2018): Follow-up to Martha Wells's Hugo Award winner!

Robots: service ‘bots, drones, humanoids

Recommendation: Get it! The good news is that there is no sophomore slump: volume 2 of The MurderBot Diaries is as fun as the first. So reading it is not murdering time, it's a stay-cation!

MurderBot is back in time for summer reading! Artificial Condition is the second novella in the witty MurderBot Diaries series and answers questions like …

  • What is the origin story of MurderBot?

  • How will he escape detection?

  • Will he continue to show a soft spot for clueless humans?

  • Can an intelligent agent learn good character from reruns of Sanctuary Moon?

and the most important question of all:

  • Is Artificial Condition as good as All Systems Red?

And the answer to that last question is "yes, absolutely; it is just as fun as All Systems Red." It starts out a bit pensive but then rapidly accelerates to the mayhem we know and love.

Once again the plot is driven by malware and software hacks; see the teachable moments for software engineering in my review of All Systems Red. From a general science perspective, bad cybersecurity is almost as good a plot generator as Asimov's Three Laws, and it is more likely to occur. Asimov had to work hard to sculpt his Three Laws so that they sounded reasonable yet contained subtle ambiguities, creating plots where unexpected robot behaviors occurred. In the MurderBot Diaries we have a distant future where programming is so slack that even the malware has bugs and lazy humans unthinkingly accept updates.

Oh, is that the future? Or is that now? I get confused...

No matter, the point is that bad cybersecurity is the gift that keeps on giving us MurderBot!

The book is an origin story, starting a few cycles after All Systems Red ended. MurderBot, it turns out, wasn't just walking away from a suffocating human guardian relationship; he was also running toward discovering his past. Along the way he meets up with a surprisingly intelligent robot transport called ART, finds even more clueless scientists to save, and dishes out quite a bit of snarky running commentary on his surroundings.

While not exactly a major plot point, the idea of intelligent agents, be they robot or human, learning valuable lessons from watching entertainment feeds like Sanctuary Moon (MurderBot's favorite) does come up. MurderBot repeatedly bemoans that humans are scared of security robots because in the serials security bots are shown either as merciless enforcers or as going rogue and killing everyone.

Certainly it is feasible that MurderBot and ART can learn about human-human interaction and teamwork from watching these programs; there’s a whole sub-discipline of robotics called “learning by demonstration.” But do people really learn and form opinions about robots based on dumb TV shows?

It is highly possible. It turns out there is evidence that people experience social influence about robots. Influence is a concept explored by social scientist Dr. Robert Cialdini long before we had terms like "social influencers." His book, Influence: The Psychology of Persuasion, summarizes his academic studies for the business mass market; it's readable and useful, and I highly recommend it. One aspect of persuasion is how social influence motivates people to do dumb things like get into bidding wars; I used the results of one of his studies to sell a car for a higher price than I would have gotten otherwise.

But another, more relevant aspect is that it explains what happens when people encounter something new or unexpected. Being tribe-oriented, humans look around to see how everyone else, presumably in their tribe, is reacting. Unfortunately, everyone else is also standing around looking at everyone else. This explains the Kitty Genovese case, where a woman was murdered in New York and no one came to the rescue or called the police, or accidents where everyone slows down but no one stops to help. The subconscious part of the human brain is asking "what is the herd doing?" When the event is new, the answer is usually "nothing." And, unfortunately, the subconscious portions of our brain tend to win out in new, emotionally threatening situations unless we put significant effort into overcoming that tendency. So nobody does anything except look at each other, confused. It's not callousness; it's a brain quirk.

I heard Bob say his research had saved his life. He was in a bad car accident and saw that people were just standing there, so he managed to stay conscious long enough to directly order the bystanders: "You, call 911. You, take care of her. You, deal with traffic. You, help me." At that point people snapped out of their mental trance and began busily doing all the right things. He then passed out, but woke up in a hospital, alive.

Social influence also appears to shape how people deal with newfangled robots, as seen in my research with Dr. Dylan Shell. We were part of a theater production of Shakespeare's A Midsummer Night's Dream, covered by WIRED, where the fairies all had miniature robot helicopters as their alter egos. Unfortunately the little helis were mechanically unreliable and difficult to teleoperate, especially with sudden changes in the air conditioning flow rate in the theater. Whoosh! As a result at least one robot crashed during each performance, often into the rather startled audience. If a robot crashed on the set during the play or deviated from what it was supposed to do, usually the robot's "mother" would pick it up and improvise, either scolding it or cooing reassuringly over it, like it was a baby or a pet. If it crashed during the opening dance routine, the actors had to pretend nothing had happened and an extra would try to signal the audience to pass it to the aisle to be repaired and readied for the next scene.

Here's the interesting part. If the first crash was during the play, with the actors treating the helicopter like a child, and then there was a crash into the audience, the audience would gently pick up the robot, fuss over it, and carefully hand it back to an actor or pass it to an operator. If the first crash was into the audience, before the audience had seen how the actors treated the robots, we got all sorts of bizarre responses. One guy literally threw the helicopter overhand, like a baseball, back onto the stage (that robot was retired due to the damage). Another kept trying to relaunch it, getting more and more annoyed (hint: if the propellers on a rotorcraft aren't spinning, it's not going to fly). The response to the crash was influenced by how theatergoers saw a group of student actors in tights and wigs, speaking in verse (about as far from real life as you can get), treat the robots! So fiction can influence reality.

MurderBot was probably right about how the entertainment feeds generated suspicion and fear of security robots. The first encounter with a robot, either in person or in a "virtual" world, sets the expectations for how to behave in future encounters. But what is really fun about social influence is that it adds a scientific justification for the neat way entertainment feeds influence MurderBot and ART in Artificial Condition.

- Robin

To buy Artificial Condition, click here!

For a video review of 'Artificial Condition', head over to the official RTSF YouTube channel.
