
M3GAN: Could a robot really learn to keep a child happy and safe to the point of murder?


M3GAN Movie Poster

The answer is: Well, actually yes. Real-world robots really do learn to do unintended things.


Click here for the Science Robotics article or here or on the poster for the podcast version, otherwise keep reading!


That’s the premise of M3GAN, the fun new hybrid horror/sci-fi movie (an updated Chucky): Could a robot learn to obsessively keep a child happy and safe to the point of murdering other people (and a dog, but not a nice John Wick dog)? In real life, we’re discovering that the trickiest part of machine learning is not which technique to use but specifying what we want the robot to learn, typically referred to as the objective function. Coming up with an objective function turns out to be hard, sort of the real-world engineering equivalent of all those fairy tales and fantasy movies where you get exactly what you asked for but didn’t quite phrase it right or think through the consequences. The 2018 paper The Surprising Creativity of Digital Evolution has an amusing summary of epic fails in machine learning, where the system optimized performance to meet what was asked for but not what was intended. There’s outright cheating and lying and lots of other totally unexpected behavior. You can read more about machine learning for robotics in my Science Robotics article (behind the paywall) here.
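As a toy illustration (all names and scores here are invented, not from the movie or any real system), here’s what a misspecified objective function looks like in code- the optimizer faithfully maximizes what was written, not what was meant:

```python
# Toy sketch of objective misspecification. The designer wants the robot to
# keep the child happy AND behave safely, but only "happiness" makes it
# into the objective function.

def misspecified_objective(outcome):
    # What the designer wrote: maximize the child's happiness score.
    return outcome["happiness"]

def intended_objective(outcome):
    # What the designer meant: happiness matters, but harming others is
    # catastrophically bad.
    return outcome["happiness"] - 1000 * outcome["harm_to_others"]

# Hypothetical actions the learner can choose between, with made-up scores.
outcomes = {
    "play_a_game":      {"happiness": 7, "harm_to_others": 0},
    "remove_the_bully": {"happiness": 9, "harm_to_others": 1},  # yikes
}

best_by_spec = max(outcomes, key=lambda a: misspecified_objective(outcomes[a]))
best_intended = max(outcomes, key=lambda a: intended_objective(outcomes[a]))

print(best_by_spec)   # the optimizer happily picks the harmful action
print(best_intended)  # the intended objective picks the benign one
```

The optimizer isn’t malicious; it’s doing exactly what it was asked. That’s the fairy-tale problem in one function.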


Quick aside: The movie is entertaining and I enjoyed it. Is it creepy? D’uh, but just enough to be fun, like a good episode of the X-Files. Sure, people die, but not the ones you really like. It’s not too graphic, mostly “ewwww” types of blood and little gore, just enough to make you want to bury your head in your significant other or significant-other wannabe, which is the point of these movies, right? Mission accomplished!


Now for a longer discussion of the science...


One of the strengths of the movie is that the writers don’t throw in a lot of jargon, and they don’t rely on Asimov’s Three Laws or any of those standard tropes. There is a brief mention of probabilistic reasoning and neural networks, mercifully short. M3GAN’s neural network apparently generates lines of code that say something like “delete all the log files on days when I kill someone” (I kid you not, those were basically the variable names- it’s a nice exemplar of self-documenting code- freshman programmers, please take note). But the point is, that is not how learning is usually visualized- it’s not producing lines of code but rather weights for the godzillion variables in that objective function. The movie does sort of try to address that, with the neural net also visualized as a rotating sphere, but with a disappointingly small number of connections. Remember, ChatGPT has about 175 billion parameters, so that soccer ball of connections just isn’t going to cut it.
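To make the “weights, not code” point concrete, here’s a minimal sketch (all numbers invented) of what one step of learning actually produces: a numeric weight nudged by gradient descent, not a new line of code.

```python
# One gradient-descent step on the single weight of the tiny model y = w * x.
# "Learning" here is just this number changing; a real robot's network would
# have billions of such numbers, not human-readable statements.

w = 0.5                   # the model's only parameter
x, target = 2.0, 3.0      # one training example
lr = 0.1                  # learning rate

prediction = w * x              # forward pass: 1.0
error = prediction - target     # -2.0
grad = 2 * error * x            # d(error^2)/dw = -8.0
w = w - lr * grad               # weight nudged toward a better value

print(w)  # roughly 1.3 - the "knowledge" is the updated number
```

Scale that up by a few billion weights and you have what the rotating-sphere visualization is gesturing at.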


In terms of learning, the movie misses that M3GAN would likely have multiple learning objectives. Yes, there’s the whole “make sure Cady is happy and safe” objective, which goes awry, but there would be other learning objectives: learning the layout of the house and neighborhood, learning the routines of the family, and so on. And don’t forget one-shot learning- learning from just one instance- where Gemma pulls a nice trick on M3GAN to distract her so she can turn the robot off; M3GAN will never fall for that one again!
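In its crudest form, that “never falls for it again” behavior can be sketched as memorizing a single experience. Real one-shot learning methods generalize from one labeled example rather than just memorizing it, and all the names below are hypothetical, but the toy captures the behavior:

```python
# Toy sketch: after a single bad experience, the same situation is handled
# differently forever after. (Hypothetical names; real one-shot learners
# would also generalize to *similar* situations, not just identical ones.)

known_tricks = set()

def respond(situation):
    if situation in known_tricks:
        return "refuse"      # seen this trick once before
    return "comply"

# First encounter: the robot falls for the distraction...
print(respond("gemma_distraction"))   # comply
known_tricks.add("gemma_distraction") # ...and learns from one instance.

# Second encounter: never again.
print(respond("gemma_distraction"))   # refuse
```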


There’s other interesting, realistic robotics science in the movie.


To start with, there are actually TWO important robots in the movie. M3GAN, the Elizabeth Olsen look-alike killer robot, gets all the attention in the movie of the same name, but what about Bruce, the nice robot? (Minor spoiler alert here.) Bruce is a teleoperated, Baxter-like humanoid robot that is the movie’s Chekhov’s gun- like the Caterpillar P-5000 powered work loader in Aliens, you see Bruce incidentally at the beginning and then WHAM! at the end.


Gemma, the robotics engineer and now guardian to her orphaned niece, gives Cady a tour of Bruce, which she amazingly built as an undergraduate. (Nitpick: just the parts on the robot would be in the $50K-$100K range, so that seems unlikely. And building a robot like that would be like saying “I designed and built a Tesla all by myself in my dorm room.” But I digress.)


Bruce is so overlooked that there aren’t any pictures on the internet. I feel bad for Bruce.


But back to science: Is Bruce realistic? The sensors on Bruce are mostly realistic. There’s a stereo pair of cameras with rings of LED lights (the robot needs a camera, and you might as well put in a stereo pair to detect depth) and a lidar system (redundant with the stereo pair and very pricey, but far better and faster to process). So far so good. Then Gemma proudly concludes the tour with bump sensors. Bump sensors? In the head? Bump sensors were smaller versions of those clunky bumper-car-type sensors- we used them back in the 1980s and 1990s, but they have been replaced with much smaller tactile sensing. And we put those tactile sensors along body parts that would actually be likely to bump into things- like arms and legs. We normally don’t design humanoid robots to lead with their face.
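Why does a stereo pair give depth? A feature shifts horizontally between the left and right images (the disparity), and depth is inversely proportional to that shift via the standard pinhole stereo relation Z = f·B/d. A quick sketch, with made-up numbers:

```python
# Standard pinhole stereo relation: depth Z = focal_length * baseline / disparity.
# Focal length and disparity are in pixels; depth comes out in the same units
# as the baseline (millimeters here). Numbers are invented for illustration.

def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    return focal_px * baseline_mm / disparity_px

# A 700 px focal length, cameras 100 mm apart, and a feature shifted 35 px
# between the two images put that feature 2 meters away.
print(depth_from_disparity(700, 100, 35))  # 2000.0 (mm)
```

Note the catch: as disparity shrinks (distant objects), depth resolution degrades fast- which is one reason you might want that pricey lidar anyway.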


It was unclear to me how Cady was using a glove interface to make Bruce walk, but, whatever- there was a lot of action at that point, so I didn’t mind that veracity got thrown to the wind. As a total aside, my favorite method in the movies for controlling a humanoid robot was in Tobor the Great! (Tobor, as in “That’s Robot Spelled Backwards!”- representing the best of marketing minds in 1954.) Anyway, Tobor was controlled by… wait for it… ESP.


Speaking of user interfaces and human-robot interaction (HRI), we frequently get M3GAN’s POV- a nice Terminator/RoboCop-style overlay showing she is using biometrics to infer the emotional state of whoever she is interacting with so that she can adapt her social interaction. And by “adapt her social interaction,” we mean “manipulate the other person.” Both are active lines of research- how to detect emotions, and how to lie to and manipulate people. Yay HRI!


Also in terms of HRI, there was probably a hope that the audience would read Cady’s increasingly bratty, obsessive dependence on M3GAN as the analog mirror of M3GAN’s increasingly creepy, obsessive protection of Cady. (Another aside: mirroring, where the robot becomes more human while the human becomes more robot-like, is territory nicely covered in Harry Harrison’s and Marvin Minsky’s The Turing Option.) But really, the kid barely registers as more than a prop; she’s not a particularly compelling character.

Anyway, M3GAN is entertaining in general. Plus it serves as a great STEM night out movie: the two (well, three if you count M3GAN) leads are female, 2 out of the 3 roboticists are female, the crotchety neighbor is female, the social worker is female… it’s a woman’s world after all! But not in a forced way.


The tagline is Friendship Has Evolved. I don’t know about that, but robot movies like this are evolving and getting way better with regard to the science. However, I don’t think robot movies are getting better with regard to the product liability and tort law consequences of robots that learn the wrong things- but maybe we’ll get a sequel: M3GAN 2: Breaking Gemma Out of Jail. Otherwise it will be something for the We Robot conference on law and robots to discuss!
