
The Scarecrow’s Boy (short story): Lifelong Learning and the New Term Problem


Recommendation: Read this, then get the audio version and play it as a cautionary tale for your rebellious pre-teens.


Robots: humanoid, autonomous car, networked robots


The Scarecrow’s Boy is a deeply moving short story about a boy rescued by an intelligent personal assistant robot that has been displaced from the house to serve as a scarecrow in the fields. The story explores what has happened to the robot over the years and how it has adapted. Swanwick is one of my favorite writers, especially for his Nebula Award-winning Stations of the Tide, which reads like a combination of Daphne du Maurier's Rebecca (as in Rebecca, we never learn the protagonist's name) and Cordwainer Smith’s Instrumentality of Mankind universe. Swanwick hasn’t written a large volume of work, but even his short stories, such as The Scarecrow’s Boy, are densely packed with interesting ideas and emotions.


From a robotics viewpoint, The Scarecrow’s Boy touches on two important topics of research in artificial intelligence: lifelong learning and semantic labeling. Lifelong learning in robots, sometimes called persistent robotics, is concerned with how robots continue to function correctly over time in a world where things change. There are at least three major ways the world changes for a robot. First, the physical environment can change; for example, the furniture in a household can be rearranged or new roads can be built. Second, the set of tasks can change, with a robot being given new tasks or having old tasks modified for new situations; for example, the grocery list of a family with children will change over time. Third, the robot itself can change, for example through physical degradation, and it must compensate. Imagine all the changes a personal assistant robot would encounter in 30 to 40 years of existence.


Lifelong learning is certainly hard, but semantic labeling is also one of the hardest problems in artificial intelligence. It addresses how a system converts a signal into a label with meaning. A computer vision system may be able to identify a coffee cup, but how does it recognize that it is my coffee cup, or my favorite coffee cup, or the one my daughter gave me? Semantics is about meaning, and meaning can be hard to figure out because we typically infer it; for example, observations over time will reveal that I use a particular coffee cup more often than the others, and thus that it is probably my favorite. In this way, semantic labeling is related to lifelong learning.
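To make the "favorite coffee cup" inference concrete, here is a minimal sketch in Python. The observation stream and cup names are entirely hypothetical; the point is only that the label "favorite" never appears in the data and must be inferred from usage frequency over time.

```python
from collections import Counter

# Hypothetical stream of cup-usage observations logged by a home robot.
observations = [
    "blue mug", "travel cup", "blue mug", "daughter's gift mug",
    "blue mug", "travel cup", "blue mug",
]

usage = Counter(observations)

# The semantic label "favorite" is never observed directly; it is
# inferred from how often each cup appears in the log.
favorite, count = usage.most_common(1)[0]
print(favorite)  # -> blue mug
```

A real system would of course need far richer evidence than raw counts (context, recency, who is using the cup), but the shape of the problem is the same: semantics emerges from accumulated observation, which is why it is tied to lifelong learning.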


Another important aspect of lifelong learning and semantic labeling is the "new term problem." When does a large coffee cup become a beer stein? If a computer system is trained on coffee cups, from espresso cups to big mugs, and then sees a beer stein, it would generally expand the range of what it calls “coffee cups” to include this larger, taller object. After all, they are physically similar and both can hold liquids. But there is a difference, especially semantically, so the beer stein represents a totally new category of cups. New categories are called “new terms” in AI research. But how would the computer learn this? Most pattern classification systems, including neural nets and deep learning, are given the number of terms to learn (e.g., there are 3 different objects in this dataset, now learn how to recognize each). If a designer doesn’t know the number, they set up the learning system to repeatedly increase the number of classes until it finds a number that statistically best matches the dataset. But what if we’ve learned that there are 10 classes or terms and then over time an 11th term begins to appear? Generally, the classifier would have no way of knowing that an 11th class exists; instead, it would try to force the new examples into one of the existing 10 classes. You can see the research challenges.
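One common research direction for the new term problem is open-set recognition: instead of always forcing a sample into the nearest known class, the classifier also checks how far the sample is from everything it knows, and flags it as a possible new term when that distance is too large. The sketch below uses made-up (height, volume) features and a made-up distance threshold purely for illustration; it is a toy nearest-centroid classifier, not any particular published method.

```python
import math

# Toy feature vectors: (height_cm, volume_ml) for known cup categories.
known_classes = {
    "espresso cup": [(6.0, 60.0), (6.5, 70.0)],
    "mug":          [(10.0, 300.0), (11.0, 350.0)],
}

def centroid(points):
    """Mean feature vector of a class's training examples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(sample, classes, threshold):
    """Assign to the nearest class centroid, unless the sample is so far
    from every known class that it may represent a new term."""
    best_label, best_dist = None, float("inf")
    for label, points in classes.items():
        d = math.dist(sample, centroid(points))
        if d < best_dist:
            best_label, best_dist = label, d
    if best_dist > threshold:
        return "possible new term", best_dist
    return best_label, best_dist

# A beer stein is taller and holds more than anything seen so far,
# so it lands far outside both known classes and gets flagged.
print(classify((15.0, 500.0), known_classes, threshold=100.0))
```

A closed-set classifier would have silently labeled the stein a "mug"; the threshold is what gives the system a way to say "I have not seen this before." Choosing that threshold well, and then deciding when accumulated flagged samples justify creating an 11th class, is exactly where the hard research lives.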


The title of the story, and its road trip to safety, is a subtle reference to The Wizard of Oz, whose Scarecrow feels he lacks a brain but does just fine as he travels the yellow brick road. In this story, the robot experiences no angst over its limited autonomy, but as in The Wizard of Oz, what intelligence it has is just enough. The real heart of the story, though, flips the question of how a robot can be smart enough to manage lifelong learning. Instead it asks how we humans manage lifelong learning in ourselves: would a robot recognize the child you were at age 10 in the person you grew up to be?


The Scarecrow’s Boy is a must-read. It’s available as part of the We, Robots anthology edited by Allan Kaster, and audible.com has a great recording, which you might want to listen to with your family as a cautionary tale for a rebellious pre-teen. If you want to learn more about lifelong learning and semantic labeling, there is no single best introduction, but the chapters on Social Robotics and Socially Assistive Robotics in the Springer Handbook of Robotics, second edition, cover most of the motivation and challenges.


- Robin


