Rendezvous with Rama (1973): Direct perception controls an intelligent alien generation ship

June 25, 2018

 

What it gets right about robots: Direct perception as the foundation for intelligent behavior. 

 

Recommendation: Read this lesser-known book about how an alien generation ship could plausibly be controlled using the same biological robot paradigm as a Roomba.

 

 

Rendezvous With Rama is Arthur C. Clarke’s OTHER book about a robot spaceship. It was written in 1973 but lives in the long shadow of 1968's 2001: A Space Odyssey (which was written to explain Kubrick’s movie, which was in turn based on a 1948 Clarke short story). Many scifi aficionados may not be aware of it, even though it won both the Hugo and Nebula awards. In this case, Rama is an alien spaceship, not a human-built one. But like 2001, the book offers a highly plausible examination of a way to reliably control a generation ship for millennia, though in a very different manner than HAL or Ilse in Long Shot.  

 

The backstory is that an asteroid or comet is detected moving toward the sun, possibly on a long elliptical orbit, possibly on a collision course with the Earth. It is named Rama because all the names from the Greek and Roman pantheons have been taken.

 

And then someone notices that it is a perfect cylinder. Alrighty then, it’s a first contact scenario. Game on! 

 

But the cylinder doesn’t respond to any type of signal. Are the aliens so different that we can’t make first contact, or are they out to take over? A space transport crew is re-tasked to intercept and make first contact directly. Still no response. The crew is able to get onboard and ride with the spaceship for a few days as it continues its approach to the sun. Inside, the humans witness a transformation, as the ship is one big biosphere (a bit like Niven’s Ringworld). The ecology transforms as the interior of the spaceship gets warmer nearer the sun and starts triggering course corrections. There are no Ramans onboard, but the ecology itself acts like a computer, making Rama a biologically controlled robot probe. Meanwhile the humans are either ignored as they gingerly explore the interior or treated as a biological infection. So much for first contact. Eventually the humans have to leave, and Rama slingshots out of the solar system toward the Magellanic Clouds. If Rama was exploring for life, it wasn’t looking for it in our solar system.  

 

By the way, that wasn’t a spoiler, because there really isn’t a plot with twists or character development to spoil. It is one of Sir Arthur’s hardest hard science novels, where the science narrative trumps the minimal fiction narrative. The fun and wonder of the book is imagining the world of the Ramans. Who cares about who is sleeping with whom or what political plotting is going on? Clarke knew that if we wanted that sort of thing, we’d be spending the summer re-reading Shakespeare’s plays. He knew that what we really want is the look and feel of a new world and an alien way of thinking and, boy, Rendezvous With Rama provides that in full. And, since Sir Arthur was the consummate scientist (he proposed the geosynchronous communications satellite, after all), this new world is technically very accurate and offers insights into artificial intelligence for robotics. 

 

Artificial intelligence used in most robots derives in some part from biological intelligence, particularly the concept of direct perception. Direct perception merits a bit of a roundabout explanation. 

 

A common assumption is that intelligent behavior is complex, that it requires a detailed representation of the situation or state of things (often called a world model), and that deliberative algorithms constantly examine and reason over that world model to produce an optimal action. That is certainly true for intelligent actions such as playing chess or solving mathematical puzzles. Other types of intelligent action don’t necessarily require a world model or reasoning. 

 

Consider a cockroach. If you have ever encountered a cockroach and then tried to squash it, you know it exhibits a remarkable amount of intelligence. Most species flee when the lights are turned on. This is called a phototropic behavior: the bright light repulses them, driving them away. So they have an urge to go to wherever they perceive the darkest place in their perceptual range to be. If they encounter an obstacle along the way, perhaps the surface of a chair leg, a wall, or a person’s shoe, they feel it on one side and just follow it on that side while trying to get to the darkest spot. If they encounter a set of surfaces along the way that they can feel on both sides, like a crack, they try to go in and get completely surrounded; this is called an idiothetic behavior. Then they hide for a while with an internal timer. When the internal timer goes off, the fleeing process begins again; but if the lights are still on and the cockroach is already in a nice crack, nothing visible happens beyond a twitch or two. 

 

The cockroach did not have to reason about the sudden change in light intensity or the location of the chairs, walls, cracks, etc. The concurrent urges from phototropism, surface following, and idiothetism are sufficient for action without having to understand what a chair is (or a person’s foot). Did the cockroach take the optimal path to the crack? No. Did it even know, in any meaningful sense of the word, that a crack existed or that it could squeeze behind books? No. Did it escape? Probably. Did it seem smarter than you? I’ll let your significant other answer that.  

 

The point is that everything the cockroach needed to do the right thing to escape was there in the environment. The cockroach didn’t have to model anything or learn a map; the world acted like a light switch. If there’s a sudden bright light, flee. If there’s a surface on your left, stay next to it and keep moving. If there’s a surface on your left and your right, crawl in.  
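Those three stimulus-response rules can be sketched as a toy behavior-based controller. This is my own illustration, not code from any robot: the sensor names, thresholds, and action strings are all made up, and real architectures (such as Brooks's subsumption architecture) arbitrate between behaviors in more sophisticated ways. But it shows how far a fixed priority list of reflexes gets you with no world model at all.

```python
# Toy behavior-based controller for the cockroach's reflexes.
# Percepts are raw sensor readings; no map or world model is built.

def hide_in_crack(percept):
    """Idiothetic urge: surfaces felt on both sides mean wedge in and wait."""
    if percept["touch_left"] and percept["touch_right"]:
        return "wedge_in_and_start_timer"
    return None

def follow_surface(percept):
    """A surface felt on exactly one side: hug it and keep moving."""
    if percept["touch_left"] != percept["touch_right"]:
        return "hug_surface_and_advance"
    return None

def flee_light(percept):
    """Photophobic reflex: a sudden bright light repels the agent."""
    if percept["light"] > 0.8:  # arbitrary brightness threshold
        return "move_toward_darkest"
    return None

# Fixed-priority arbitration: the most specific behavior wins.
BEHAVIORS = [hide_in_crack, follow_surface, flee_light]

def act(percept):
    for behavior in BEHAVIORS:
        action = behavior(percept)
        if action:
            return action
    return "idle"  # no stimulus, no action

# Lights snap on, nothing touching either antenna: flee.
print(act({"light": 0.9, "touch_left": False, "touch_right": False}))
```

Note that each behavior is a direct mapping from percept to action; the "intelligence" of the escape emerges from the environment triggering the right reflex at the right moment.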

 

In behavioral robotics, we say the cockroach is directly perceiving the world, that its behavior is an example of direct perception. The phenomenon of direct perception was first noticed and researched by psychologist J.J. Gibson during World War II. He led the team that discovered pilots did better at landing planes if there were visible texture cues on the runway (those stripes) rather than reading instruments. A pilot could quickly and reliably estimate the angle and speed of approach if there were visible stripes and lines of lights, but had much more trouble if they had to do a lot of math in their heads to interpret the instrument panel readings (e.g., is 23 degrees too high? Am I going too fast?).  

 

He called his work an ecological approach to psychology because he considered human cognition in terms of ecology: the human, what task they were trying to do, and what the environment afforded them in conducting that task.  

 

This idea of the environment providing direct affordances, rather than people sensing and then performing complex internal processes to recognize what was going on and what to do next, was very controversial because psychologists were heavily invested in the idea of people being very deliberative and complicated. Certainly people were “above” being influenced by their environment- that’s what makes us special, right? But study after study bore Gibson out and eventually one of the “classic” psychologists, Ulrich Neisser, essentially said “Yo! Gibson was right, there’s definitely a part of the human brain that is reacting and exploiting direct perception without explicit reasoning. Not all of the brain, but some, and while the direct perception track of the brain is older and primitive and doesn’t let us play chess or find our car like the new part, it’s pretty useful. So we need to get over it and invite Gibson to eat lunch at our table.”  

 

Let’s all be clear that I’m paraphrasing.  

 

Anyway, Neisser said something to that effect in his 1967 book Cognitive Psychology, which essentially created the field of cognitive psychology and is still a classic text. 

 

Meanwhile, over in engineering, the first mobile robot, Shakey, was breaking new ground in artificial intelligence. It focused on reasoning, and after an initial set of breakthroughs in 1967, robotics went nowhere for over twenty years. The field got very good at planning optimal paths over a world model, but not at actually moving along that path or building world models.  

 

At the same time, Michael Arbib, a mechanical engineer who had attended the Dartmouth workshop that created the field of artificial intelligence, began to explore animal intelligence and brain science. His rationale was “hey look, wouldn’t we be thrilled to create a robot as smart as a fish or a frog?” The answer was pretty similar to what Gibson got: a resounding “No, we would NOT be thrilled; leave us alone, you traitor to engineering, you’ve gone too biological.”  

 

But fortunately a new generation of graduate students, in particular Rodney Brooks, Ronald Arkin, and David Payton, read Gibson and Arbib and others and said, “Wow, this seems like it would work great for some classes of robots. Like a vacuum cleaner (yes, the Roomba code was designed in the 1980s; it just took another 20 years for the mechanical components to get cheap enough to market).” And thus began the breakthroughs of the late 1980s and early 1990s that got the field moving again. 

 

But even now most people, especially engineers, expect robots to need complex world models and algorithms that require high performance computing and, well, just more work. There is an assumption by aerospace, mechanical, and electrical engineers trained in control theory that intelligence should be hard. Most engineers are unaware of Gibson and Arbib and, if they are aware, tend to dismiss them saying it’d never work for anything beyond the most trivial application. 

 

Enter Rendezvous With Rama. It illustrates how a generation ship, which seems pretty darned complicated and requires precise navigation, could be controlled with direct perception. The ship gets near the sun; the sun starts warming the ship outside and in; ice in the biosphere melts into a sea; lifeforms start waking up or sprouting and giving off gases; other lifeforms emerge from the sea and start patching up holes; and all this biomass causes the trajectory to be adjusted the way a sunflower tracks the sun. Saying direct perception won’t work for robotics is like saying sunflowers cannot track the sun unless they have sextants.  
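The sunflower analogy is worth making concrete. A sunflower needs no sextant because the asymmetry in the stimulus IS the control signal: compare two light readings and lean toward the brighter side. The sketch below is my own made-up illustration (the sensor values and gain are arbitrary), not anything from the book, but it is the same direct-perception principle scaled down to two lines of control logic.

```python
# Toy heliotropism: steer from raw sensor difference, with no map,
# no ephemeris, and no navigation math.

def track_sun(left_lux, right_lux, gain=0.1):
    """Return a steering correction proportional to the brightness
    difference; positive means turn toward the right sensor."""
    return gain * (right_lux - left_lux)

# The right sensor is brighter, so the correction turns us right.
correction = track_sun(left_lux=400.0, right_lux=600.0)
print(correction)
```

When both sensors read the same, the correction is zero and the plant (or probe) simply holds its heading; tracking falls out of the feedback loop rather than from any internal model of where the sun is.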

 

But to be fair to hardcore physical engineers: there is one twist that Gibson didn’t see coming. Since Rama is a generation ship, Clarke considered that Rama's biological entities would have to work over millennia. They needed to be resistant to genetic drift, and thus some entities were “biots”: part biological, part mechanical hybrid robots. The biots got recycled, so they were part of the ecology, but they were partially mechanical. And that requires a lot of physical engineering!

 

My recommendation is to read Rendezvous with Rama and marvel over the enduring genius of Arthur C. Clarke. Then, if you are intrigued by the idea that the world is its own best representation, pick up a copy of Gibson’s The Ecological Approach to Visual Perception or find a copy of Rod Brooks’s classic paper “Intelligence without Representation” on the web.

 

- Robin
