
Autonomy

Learn about the difference between automation and autonomy, levels of initiative, shared autonomy, and bounded rationality

Autonomy in robotics means computational autonomy, but many people assume it means political autonomy. Computational autonomy refers to the capability to independently perform a task, whether the task is performed without any supervision, the robot is permitted initiative to modify the constraints on how to perform the task, or the robot is permitted to modify the task itself (also called levels of initiative). Political autonomy is the “you’re not the boss of me” connotation of autonomy, probably best exemplified by Gort in the original The Day The Earth Stood Still. The choice of the word autonomy for robots came about because autonomy has been used in engineering since the days of the fly ball governor, or centrifugal governor, which enabled the Industrial Revolution. The fly ball governor was autonomous in that it controlled steam engines without human intervention, but it was not politically autonomous.

In robotics, there is a difference between automation and autonomy because they represent different modeling assumptions; the difference is somewhat akin to choosing between Cartesian and spherical coordinates in calculus, or between model-based methods and neural networks in computer vision. While robots can perform factory automation tasks, such as welding, they rarely complete an entire process by themselves. For example, most self-driving cars expect the owner to step in from time to time, specify where to go, and so on. That is often called shared autonomy, where the overlap with teleoperation and human-robot interaction is critical. That also explains why studies such as the 2012 Defense Science Board study on autonomous systems recommended treating autonomy as an enabling capability: formulate the problem as “what functions do you want the robot to take over (e.g., navigation and flying, alerting the pilot so that the pilot can focus on target recognition and scene understanding)” rather than “I want an autonomous plane.”

While AI programs have produced unintended consequences, they have never exceeded their bounds (bounded rationality); the bounds were just poorly stated. So a robot would not take over the world or start an uprising unless it was programmed to, but a great deal of science fiction has been devoted to whether robots could have sufficient self-awareness to merit (or demand, or take) political autonomy. Arthur C. Clarke’s Rendezvous with Rama is an excellent starting point for exploring how a robot spaceship could be autonomous (behavioral intelligence) without deliberative-level or social intelligence and without being politically autonomous.
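To make the vocabulary concrete, here is a minimal Python sketch of levels of initiative and a single shared-autonomy control step. Everything in it (the Initiative enum, the ToyRobot stand-in, and the shared_autonomy_step function) is hypothetical and invented for this page; it is not code from Introduction to AI Robotics or from any real robot architecture.

from enum import Enum
from typing import Optional


class Initiative(Enum):
    """Levels of initiative a robot may be permitted (coarse illustration)."""
    NONE = 0         # performs the task exactly as given, without supervision
    CONSTRAINTS = 1  # may modify *how* the task is performed (e.g., replan a path)
    TASK = 2         # may modify *what* the task is (e.g., choose a new goal)


class ToyRobot:
    """A stand-in robot that reports canned conditions and commands."""
    def __init__(self):
        self.goal = "waypoint_A"

    def goal_unreachable(self):
        return False               # pretend the goal is still reachable

    def path_blocked(self):
        return False               # pretend the planned path is clear

    def choose_new_goal(self):
        self.goal = "waypoint_B"   # re-task itself (Initiative.TASK only)

    def replan(self):
        pass                       # recompute the route (Initiative.CONSTRAINTS or above)

    def next_command(self):
        return 0.1                 # e.g., a steering command from the current plan


def shared_autonomy_step(robot, level, human_cmd: Optional[float] = None):
    """One control step: a human command, when present, always preempts the robot."""
    if human_cmd is not None:
        return human_cmd                        # human override (overlap with teleoperation)
    if level is Initiative.TASK and robot.goal_unreachable():
        robot.choose_new_goal()                 # allowed to modify the task itself
    if level is not Initiative.NONE and robot.path_blocked():
        robot.replan()                          # allowed to modify the "how"
    return robot.next_command()                 # execute the (possibly revised) plan


robot = ToyRobot()
print(shared_autonomy_step(robot, Initiative.CONSTRAINTS))                  # robot drives: 0.1
print(shared_autonomy_step(robot, Initiative.CONSTRAINTS, human_cmd=-0.4))  # owner steps in: -0.4

The design point of the sketch is that the human command, when present, always preempts the robot, while the permitted level of initiative determines how much the robot may revise its plan, or its task, before acting.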


For further reading:

  • Chapter 3: Autonomy and Automation, Introduction to AI Robotics, 2nd Edition, R. Murphy, MIT Press, 2019 (available Aug)

  • Chapter 4: Software Organization of Autonomy, Introduction to AI Robotics, 2nd Edition, R. Murphy, MIT Press, 2019 (available Aug)

  • The Role of Autonomy in DoD Systems, Defense Science Board Task Force Report, 2012

 
