The Strong AI hypothesis holds that humans can create a software agent or robot that genuinely thinks and is self-aware. If the Strong AI hypothesis proves true, it raises the issue of robot rights. However, every AI system to date has been weak AI, and the related concept of bounded rationality suggests that achieving Strong AI will be much harder than the popular press believes. AI researchers also use the terms "strong" and "weak" to describe actual methods (algorithms), which can confuse these discussions.

April 2, 2018

Robots: Humanoid

What it gets right about robotics: Nothing; it does set up the Strong AI hypothesis, but it gets the methods wrong.

Recommended watching: Watch it; it's Spielberg, after all. Then think about how you would have edited it.

Philosophers starting with Searle...