The Strong AI hypothesis holds that humans can create a software agent or robot that genuinely thinks and is self-aware. If the hypothesis proves true, it raises the question of robot rights. To date, however, all AI systems have been weak AI, and the related concept of bounded rationality suggests that achieving Strong AI will be far harder than the popular press assumes. Complicating matters, AI researchers also use "strong" and "weak" to describe actual algorithmic methods, a distinct usage that can confuse discussions.