
A.I. Artificial Intelligence (2001): the Strong AI hypothesis in a Weak AI methods movie

Robots: Humanoid


What it gets right about robotics: nothing, really; it does set up the strong AI hypothesis, but it gets the methods wrong.


Recommended watching: Watch it. It's Spielberg, after all. And then think about how you would have edited it.



Philosophers, starting with Searle back in the 1980s, have always been fascinated by the strong AI hypothesis: that roboticists (or roboticists with the help of a short circuit or some unforeseen emergent property of cloud computing or quantum physics) could build a robot that really, truly was thinking. The inverse of the strong AI hypothesis, the weak AI hypothesis, is that roboticists can build a robot that seems to think but really doesn't. There's a tendency for the public to favor the weak AI hypothesis; after all, the very name "artificial intelligence" implies it is faking intelligence. That's unfortunate, because when the term was coined at the 1956 Dartmouth Workshop, it was meant simply to distinguish machine intelligence from biological intelligence, not to say that artificial intelligence would never be equivalent or even superior (especially not if the government was going to pump money into it).


There's no better movie than Spielberg's A.I. Artificial Intelligence to illustrate the difference between the weak and the strong AI hypotheses. We, the audience, buy into the strong AI hypothesis; we know that the robot David, the blue-eyed Haley Joel Osment, is really intelligent and has real emotions. The dramatic tension is that his adoptive family literally buys into the weak AI hypothesis; they are certain that the David they purchased is just a really clever fake.


Unfortunately, the strong versus weak AI hypothesis is about the only vaguely accurate aspect of the movie's treatment of AI. I typically give my undergraduate AI class a take-home final that basically says: watch the movie and tell me at least 10 ways in which it is wrong.


Most AI researchers that I know don’t really think about the strong AI hypothesis. After all, in most cases, we can’t determine if our elected officials are actually thinking or just going through the motions. But we do think about strong and weak methods.


It turns out that while Searle was debating the strong AI hypothesis, Allen Newell, a founder of the fields of AI and computer science, was starting to label research as using either weak or strong AI methods to achieve some aspect of intelligence. Weak methods are methods that are universal or general purpose; they do not take advantage of any domain knowledge. Weak methods are the Holy Grail of AI because once you had a weak method for planning, you could just apply it to any planning problem (after all, Newell and Simon tried to create a General Problem Solver) and, presto, get great results, even though there are significant differences between planning, scheduling, resource allocation, managing constraints, etc. In practice, AI has typically been successful with strong methods that do exploit knowledge about the domain or application. Expert systems were the first money-making applications of AI, and they are synonymous with exploiting knowledge of a specific domain (the expertise).
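If you want a concrete feel for the difference, here is a toy Python sketch of my own (a hypothetical example, not anything from Newell or the movie): both functions try to explain the same set of car-trouble symptoms. The weak method blindly searches every combination of faults and would run unchanged on any fault table you hand it; the strong method is a handful of expert-system-style rules that only work because they encode what a mechanic already knows.

# Toy illustration of weak vs. strong methods (hypothetical example).
from itertools import combinations

# Shared toy knowledge: which faults produce which symptoms.
FAULT_SYMPTOMS = {
    "dead_battery": {"no_lights", "engine_wont_crank"},
    "empty_tank":   {"engine_cranks", "engine_wont_start"},
    "blown_fuse":   {"no_lights"},
}

def weak_method(observed):
    # Weak (general-purpose) method: exhaustively try every combination
    # of faults until one covers the observed symptoms. Nothing here is
    # specific to car repair; swap in any fault table and it still runs.
    for size in range(1, len(FAULT_SYMPTOMS) + 1):
        for candidate in combinations(FAULT_SYMPTOMS, size):
            covered = set().union(*(FAULT_SYMPTOMS[f] for f in candidate))
            if observed <= covered:
                return set(candidate)
    return None

def strong_method(observed):
    # Strong (domain-specific) method: expert-system-style rules that
    # encode what a mechanic checks first. Useless outside this domain,
    # but fast and reliable inside it.
    if {"no_lights", "engine_wont_crank"} <= observed:
        return {"dead_battery"}
    if {"engine_cranks", "engine_wont_start"} <= observed:
        return {"empty_tank"}
    if "no_lights" in observed:
        return {"blown_fuse"}
    return None

symptoms = {"no_lights", "engine_wont_crank"}
print("weak method:  ", weak_method(symptoms))
print("strong method:", strong_method(symptoms))

Both calls arrive at the same answer here, but notice the trade: the weak method scales badly and knows nothing about cars, while the strong method is only as good as the expertise someone bothered to write down.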


What is really confusing is that believers in the strong AI hypothesis typically assume that a strong AI will use weak (general-purpose) methods. Meanwhile, robots that aren't fully intelligent (the weak AI hypothesis) but actually work are usually built with strong methods.


Fortunately, A.I. Artificial Intelligence can serve once again as an example of this conundrum, this time to help illustrate strong versus weak methods. One of the reasons why the movie is so bad is that it was built with the generic blockbuster formula:

  • spend a lot on special effects,

  • load it with stars, such as Robin Williams, Jude Law, William Hurt, and

  • throw in some technobabble, such as “neuronal networks” as an all-encompassing explanation for intelligence.


The above formula is a weak method for filmmaking; it is a general-purpose method that doesn't require any understanding of, or even respect for, the source material (in this case, Brian Aldiss' brilliant short story Supertoys Last All Summer Long) or the genre. On the other hand, a strong method for moviemaking would take into consideration that the movie was about robots and try to make the portrayal of robotics as accurate as possible, the way 2001: A Space Odyssey approached space exploration.


A.I. Artificial Intelligence tests the hypothesis of whether a strong movie about robots can be made with weak movie making methods.


And it fails the test.


But there's always hope for a strong, really intelligent movie about robots. And if nothing else, maybe people will be inspired to read the brilliant short story Supertoys Last All Summer Long that was the basis for the movie.


- Robin
