
I, Robot (2004): Much More Fun Than Being Run Over by the Trolley Problem


Robots: humanoid


Recommendation: Watch it with the family. It’s fast, fun, shallow, and totally unrealistic, but a good opportunity to talk about ethics and the Trolley Problem.



The I, Robot movie (2004) comes across as if some hack did a quick scan of Asimov’s classic collection of short stories and whipped up an entertaining smart-mouth Will Smith movie spiced with "blah-blah-blah Three Laws" inserted every now and then. Actually, I, Robot began as an original screenplay called Hardwired that the execs at Fox (yes, the studio that screwed up Firefly and Almost Human, go team!) ordered rewritten with an I, Robot veneer. While not exactly Asimov canon, the movie is, as one of my former grad students put it, “not as bad as we had feared.” As a sci-fi action movie, I, Robot holds up quite well and is definitely one of the better big-budget blockbusters of the 2000s. The special effects still look good, Will Smith is his usual amusing self, and a then largely unknown Alan Tudyk plays the robot Sonny.


The science in I, Robot is negligible, which is fortunate, as it gets most things wrong. The closest it comes to technical accuracy is that the internal motivation for Will Smith’s animus towards robots is the Trolley Problem. The Trolley Problem is a classic dilemma in ethics, sort of the philosophy majors’ version of the Kobayashi Maru no-win scenario in Star Trek. Ethicists have been debating it since Philippa Foot formalized it in 1967, with each generation getting to argue anew about what is right.


The problem goes something like this: a trolley is careening down the rails and can’t be stopped. It is heading for five people lying on the track who can’t get out of the way and will certainly be killed. But you- lucky you- happen to be standing by a lever and could- if you so choose- flip it to switch the trolley to a track where it will kill one worker instead. Do you switch the tracks? Great time to bring out the Spock “the needs of the many outweigh the needs of the one” sort of banter while sipping pretentious craft beer. More recent variations include: What if it wasn’t a worker but a child- how many people would you sacrifice one child for? What if the five people included a family? Your family? What if the five people lying incapacitated on the track were there because they were loser drug addicts? What if we added Bayesian probabilities to the mix (not that people actually use Bayesian reasoning, as Tversky and Kahneman pointed out)?
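

For the probability-flavored variants, the utilitarian arithmetic is trivial to write down, which is part of the joke. Here is a minimal Python sketch; every count and probability in it is invented for illustration:

```python
# Toy utilitarian calculus for a probabilistic Trolley Problem.
# All counts and probabilities are invented for illustration.

def expected_deaths(people, prob_death_each):
    """Expected fatalities if the trolley takes this track."""
    return people * prob_death_each

# Stay the course: five people, near-certain death.
stay = expected_deaths(people=5, prob_death_each=0.95)

# Pull the lever: one worker, who might hear the trolley and jump clear.
switch = expected_deaths(people=1, prob_death_each=0.60)

print(f"Expected deaths, do nothing:     {stay:.2f}")
print(f"Expected deaths, pull the lever: {switch:.2f}")
print("Utilitarian verdict:", "pull the lever" if switch < stay else "do nothing")

# The catch, and the entire point of the dilemma: many people reject this
# arithmetic, because acting to kill feels different from failing to save.
```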


In I, Robot, Will Smith is very angry and very anti-robot because he was in a multi-car wreck. The rescue robot could rescue only one person: him or a cute little girl in another car. The robot rescued Smith because his probability of survival was higher, based on age, condition, type of injuries, the nature of the accident, etc. One of my students watching the preview of I, Robot (we got comped tickets and I took my entire lab, in an event that probably could have doubled as an episode of The Big Bang Theory) hissed, “And how the f*** would the robot know that?”
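

Reduced to code, the robot’s decision rule is an argmax over estimated survival probabilities (the film, if I recall the line correctly, quotes roughly 45% for Smith’s character versus 11% for the girl). The sketch below is purely illustrative, and where a real robot would get those numbers from sensor data is exactly my student’s objection:

```python
# The movie's rescue-robot logic, reduced to its essentials: estimate
# each victim's survival probability, then save the argmax. The numbers
# are (roughly) the film's; how they could ever be computed on scene is
# the unanswered question.

victims = {
    "Del Spooner": 0.45,  # adult, survivable injuries (somehow known)
    "Sarah":       0.11,  # child, trapped deeper underwater (somehow known)
}

# Save whoever has the highest estimated chance of surviving the rescue.
save = max(victims, key=victims.get)
print(f"Rescue {save} (estimated survival probability {victims[save]:.0%})")
```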


And that brings us right up to modern times, where ethicists bring up the Trolley Problem as some sort of barrier to the adoption of autonomous cars. At conferences such as We Robot (full disclosure: I’m on the program committee), which brings together lawyers, roboticists, and ethicists, we get conversations on the Trolley Problem that go like this:



Ethicist: 'The Trolley Problem is the major barrier to fully autonomous cars.'


Roboticist: 'Why is this a problem for autonomous cars?'


Ethicist: 'Because lots of social science studies show people are uncomfortable with cars that might crash into their car instead of someone else.'


Roboticist: 'People are uncomfortable with dying in general, and yet they get into cars every day, put their kids in car seats the wrong way, text while driving, and do other crazy things. I, for example, get into cars with complete strangers who could kill or rape me with no consequences for Travis Kalanick- but Uber is so cheap and convenient that I don’t care.'


Ethicist: 'People will care. You’re an engineer, you don’t understand people.'


Roboticist: 'OK, you realize that the robot car would not have any way of knowing that one car had two kids and a dog in it and the other car had a Nobel Prize winner whose best years may still be ahead? Or at least not without violating privacy laws, and assuming there were even some way for a system to know who was in each car?'


Ethicist: 'Assume people will waive the privacy laws so all cars talk to each other and know the occupants.'


Roboticist: 'OK, we will roll with the insane assumption about privacy violations. You realize that robots that avoid obstacles aren’t actually reasoning about what the obstacle is? That they do what people and animals do- they react at the spinal cord “oh crap!” level to not hit whatever is in the way?'


Ethicist: 'Well, robots *should* reason.'


Roboticist: 'Recognition and reasoning take more compute time than is available. That’s why people and animals react at the spinal cord level, and why we roboticists started building reactive, behavior-based architectures in the late 1980s.' (A sketch of that reflex style of control follows this exchange.)


Ethicist: 'Moore’s Law will do away with any concerns about latency.'


Roboticist: (to self) Where is Gordon Moore when we need him? (to Ethicist) 'You realize that the robot doesn’t have any sensors that could classify the car’s interior contents- that its sensors are mostly detecting things like “biggest opening is that way” and “road is this way”?'


Ethicist: 'You’re missing the point, it won’t be long before robots can do this.'


Roboticist: 'With what sensors and what faster-than-light network?! Is there some magic fairy dust we’ve been missing? Wouldn’t I, as a scientist with a Ph.D. in robotics, actually know if this were possible?'


Ethicist: 'As a Ph.D. scientist, you are blinded to disruptive technologies and have been inculcated into the status quo corporate culture. We are thinking outside of the box. Plus I talked to an undergraduate from MIT once and he said this was all possible.' (BTW, I actually had a philosophy professor justify one of their arguments with the line about the undergraduate from MIT.)
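

To make the roboticist’s point concrete, here is roughly what that “spinal cord” reflex looks like in code: a reactive obstacle-avoidance behavior in the spirit of late-1980s potential-fields control, sketched in Python with an invented sensor layout and invented gains. Note what is not in it: no object recognition, no occupant database, no reasoning about who or what the obstacle is.

```python
import math

# A toy reactive obstacle-avoidance behavior, potential-fields style.
# The sensor layout, gains, and readings are invented for illustration.

def avoid(ranges_by_angle, max_range=3.0):
    """Map raw range readings {angle_rad: meters} straight to a steering angle.

    Each reading inside max_range contributes a push directly away from the
    obstacle, stronger the closer it is. Nothing here knows whether the
    obstacle is a trash can, a dog, or a Nobel laureate.
    """
    push_x, push_y = 0.0, 0.0
    for angle, dist in ranges_by_angle.items():
        if dist < max_range:
            strength = (max_range - dist) / max_range
            push_x -= strength * math.cos(angle)
            push_y -= strength * math.sin(angle)
    # Blend the repulsion with a unit "keep going forward" attraction.
    return math.atan2(push_y, push_x + 1.0)

# Obstacle 1 m away, slightly to the left (positive angle): the reflex
# steers right (negative angle), with zero deliberation along the way.
readings = {-0.4: 2.8, 0.0: 2.5, 0.4: 1.0}
print(f"steer {math.degrees(avoid(readings)):+.1f} degrees")
```

The whole “decision” is a couple of multiply-adds per sensor reading, which is why it runs fast enough to matter; inserting recognition and deliberation into that loop is precisely where the latency goes.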



But no matter; group discussions of the Trolley Problem are pretty life-affirming, because each group walks away convinced that the others are total idiots. It’s a good feeling, similar to the unreasonable glow you get from watching I, Robot and its bursts of high-energy special effects.



- Robin




