
My Reddit AMA - Robotics, AI and Scifi Questions

On 23rd February 2020, I took part in a Reddit AMA that spanned the main r/IAmA subreddit as well as r/robotics and r/sciencefiction. I received a great response and some really interesting questions around education and learning in robotics and AI, but in case you missed it, here are some of the questions I answered around robotics, AI and science fiction.

Q. I get the impression that disasters and search and rescue are well-known application areas for robotics but that exceedingly few people focus on these seriously and specifically.

So, I would like to know: How did you get established working in disaster robotics, and what challenges did you encounter in your career that are unique to this area of robotics?

A. I got into disaster robotics in 1995, inspired by the lack of robots for the response at the Oklahoma City bombing and at the Kobe Earthquake. For years, many of us in robotics had been working on small, highly portable and agile robots for planetary rovers in anticipation of the Mars Sojourner mission. But in 1995, it was clear that that research could be applied to disasters- searching in the interior of rubble is even harder than roaming Mars.

My biggest challenge was my department head. He was a strong-willed mathematician who didn't like computer scientists and definitely didn't like me. He told me that I couldn't do research in disaster robotics because it was too hard- that was why the field didn't exist. This could have been good advice, but it was presented in the context of why I couldn't just work docilely on problems that he thought were interesting (even if no one in the computer science community would ever write me a letter of recommendation for tenure for that type of work). Faced with career suicide by either crossing my department head to work on an interesting problem or working on math problems that weren't computer science, I chose disaster robotics.

I do want to add that this was a decision made with my husband- he was 110% supportive of me taking a chance.

Hopefully the biggest challenge anyone else faces in disaster robotics research is that it is field robotics- which means it takes longer to conduct research and produces fewer papers. Some places have a publish-or-perish philosophy rather than a quality-over-quantity one. Also, some universities don't have the facilities to conduct field robotics. Texas A&M is the perfect place to do what I do- we're a leader in emergency management, the largest trainer of emergency professionals, and have my favorite place on earth- Disaster City ;-)

Q. What areas of the subject are really popping off right now?

A. Deep learning and reinforcement learning are all the rage, but those of us in AI know that they are likely to crash and burn due to the over-hype and their inherent limitations. As per one of my earlier posts, intelligence isn't just one thing- the brain has different regions and different neural structures for a reason ;-) Again, see the book Rebooting AI for an amusing and informative analysis of why. The next big things seem to be related to semantic understanding, visualization/transparency/visibility/explainability, and human-robot interaction.

Q. How often do researchers end up spinning off their own companies?

A. Quite a bit. I’ve had two sets of students do their own start-ups, and one went through the NSF Innovation Corps program. But I knew from my own experience working in industry before getting my graduate degrees that making a company work is tedious and less creative than I enjoy.

I served as a lead judge for the AI XPrize, and it was interesting to see how the entries dwindled over the years to teams either out of academia or led by a graduate student taking their work to the next level. And if you look at many of the key companies, they were started with the same academic DNA. AI and robotics really require understanding AI and robotics.

Q. I'd love to hear what you think are the techniques that will be most relevant in AI and robotics 10-15 years from now. Will learning-based methods completely supersede traditional mathematical methods? Will most compute move to GPUs instead of CPUs?

I'm highly excited about level-5 self-driving cars and also about SLAM problems. Do you think these are problems we could consider solved in the next few years?

A. Thank you for your question. I think learning-based methods versus "traditional mathematical methods" and knowledge-based methods are like different colors in a painter's palette. You need different colors to capture different things. The learning-based methods we have now are fairly primitive, so they will undoubtedly improve. But there are lots of aspects of intelligence that aren't captured by deep learning or reinforcement learning. We can reason and solve problems, for example.
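To make the palette metaphor concrete, here is a toy Python sketch (my illustration, not anything from the AMA- every name in it is hypothetical) in which a learned perception score and a hand-coded knowledge layer are combined, each "color" doing a different job:

```python
# Toy illustration of mixing "colors": a learned perception score feeding a
# hand-written, knowledge-based rule layer. All names here are hypothetical.

def learned_victim_score(sensor_reading: float) -> float:
    """Stand-in for a trained model: maps a raw reading to a confidence in [0, 1]."""
    return max(0.0, min(1.0, sensor_reading / 100.0))

def knowledge_based_decision(score: float, dog_alerted: bool) -> str:
    """Hand-coded domain knowledge layered on top of the learned score."""
    if dog_alerted and score > 0.3:
        return "deploy robot"   # corroborating evidence from the dog: act
    if score > 0.9:
        return "deploy robot"   # very confident even without a dog alert
    return "keep searching"

if __name__ == "__main__":
    print(knowledge_based_decision(learned_victim_score(42.0), dog_alerted=True))
```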

Similarly, the question about GPUs versus CPUs is like asking whether animals will discard their lower and midbrain structures to get a visual cortex- biological intelligence is modular, layered, and builds on a multiplicity of structures. Which is one of the reasons why AI and cognitive science are so cool!

I think a really good book is Gary Marcus' and Ernest Davis' Rebooting AI. They explain why progress in self-driving cars is stalling out- in part because we have pushed existing statistics-based methods about as far as they can go in their current form. I highly recommend it, and it is entertainingly written.

Q. How theoretical vs. physically experimental is the work?

A. Depends what you mean. Pretty much all AI robotics research has physical experiments- but the robots, the task, or the environment may vary. About 80% of researchers will use highly controlled physical experiments because they are focusing on very specific topics and need to have repeatable experiments. This is sort of like particle physics work at CERN. About 20% of the research is on “field robotics” where the whole point is to do experimentation in the field (aka real world).

You don’t publish as much in field robotics because it takes much longer to get clear results, and it’s more expensive because the robots have to be more capable, so there are fewer professors and programs that offer it.

Q. How do you see the industrial robotics industry reacting to the shift towards collaborative robotics, and accommodating the shift to much higher volumes of installed systems being operated by a much lower-skilled audience?

A. How will they react? Maybe react poorly? LOL.

I'm in the mobile robotics or unmanned systems area of the field, which is far from automation- though my minor was in computer-integrated manufacturing systems. I have witnessed a real interest in collaborative systems like you said, where it is easy for a worker to set up short, highly customizable runs without learning some bizarre proprietary programming language or risking injury with a teach pendant. I think this is a huge change in attitude: from (expensive specialty) robots doing the entire task to general-purpose robots and people doing it together, without the people needing PhDs.
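For a sense of what that worker-friendly setup can look like, here is a minimal, hypothetical Python sketch of lead-through programming- the worker physically guides the arm, poses are recorded, then replayed. The Arm class is a stand-in for whatever API a real cobot vendor would expose:

```python
# A minimal sketch of lead-through (kinesthetic) programming. The Arm class
# and its methods are hypothetical stand-ins for a vendor API.

class Arm:
    def current_pose(self):
        # Stand-in: a real arm would report its joint or Cartesian pose here.
        return (0.0, 0.0, 0.0)

    def move_to(self, pose):
        # Stand-in: a real arm would execute a motion command here.
        print(f"moving to {pose}")

def record_waypoints(arm, n_points):
    """The worker moves the arm by hand; we snapshot each pose on request."""
    return [arm.current_pose() for _ in range(n_points)]

def replay(arm, waypoints):
    """Replay the recorded run- no proprietary language, no teach pendant."""
    for pose in waypoints:
        arm.move_to(pose)

if __name__ == "__main__":
    arm = Arm()
    replay(arm, record_waypoints(arm, 3))
```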

That said, I wish manufacturing engineers learned (or had opportunities to learn) about AI. The ones I encounter seem totally vulnerable to hype and overpromise because, speaking as someone who was a mechanical engineer before returning to grad school in computer science, they don't know enough about computers and artificial intelligence. As a result, they either over- or underestimate what can be done.

And my apologies if I didn't answer your question... I may have misunderstood what you were asking.

Q. What are the major challenges in human-robot interaction in disaster relief scenarios? I imagine that there are many edge cases where human behavior deviates from the norm, and the autonomy stack would have some difficulty in obtaining optimal actions (i.e. via one-shot learning, etc.). In addition, if this is applicable, can you describe the exploration/exploitation problem in these situations as well? Thanks!

A. Check out my book Disaster Robotics and my ACM webinar. I think you're assuming fully autonomous robot operations. And that the robots are there to pull people out of rubble or off houses.

That isn't how they have been used or are likely to be used in the foreseeable future.

Instead, robots are used on the ground to go into places where humans or dogs can't physically fit. For an earthquake, the standard scenario is to use dogs to identify that there is someone alive in the rubble, then use a very small, teleoperated robot to work its way down into the rubble and try to locate them. The robots are teleoperated due to the environmental complexity, but also because the structural engineers can start planning how to extricate the person based on what they are seeing. The robots usually have 2-way audio so that responders can communicate with the person. You don't want a robot to reach a person and then die on the way back- so that no one knows where they are. Plus, the robots are tethered because a) there are a lot of vertical descents and ascents and b) rubble interferes with wireless. So, having a tether to serve as belay, comms, and power is the way to go.
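As a rough illustration of the shape of that setup (entirely hypothetical- not responder software), a teleoperation loop simply forwards operator commands down the tether at a fixed rate while relaying the 2-way audio:

```python
# An entirely hypothetical sketch of a tethered teleoperation loop: operator
# commands go down the tether; video/audio come back up. Real confined-space
# robots are far more involved- this only shows the overall shape.

import time

def read_operator_command():
    return "forward"  # stand-in for joystick/console input

def send_over_tether(command):
    print(f"tether tx: {command}")  # stand-in for the wired link

def relay_two_way_audio():
    pass  # stand-in: responders talk with the trapped person

def teleop_loop(duration_s=0.3, period_s=0.1):
    end = time.time() + duration_s
    while time.time() < end:
        send_over_tether(read_operator_command())
        relay_two_way_audio()
        time.sleep(period_s)  # a fixed command rate keeps motion predictable

if __name__ == "__main__":
    teleop_loop()
```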

The one exception is process safety incidents like a Fukushima Daiichi or Bhopal type of event- there the environment is much friendlier for robots. But robots tend to be slower than humans so a responder would probably suit up and attempt to rescue anyone living.

And with aerial vehicles, the responders use them to find the shortest path to people trapped on houses after a flood or hurricane when roads are blocked- not to find the people themselves (responders generally know where people are, either from cell calls or from the extent of flooding versus where people live and work).
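That flood scenario is, at heart, a shortest-path search over a partially blocked road network. As a toy illustration (again my sketch, not anything responders actually run), here is a breadth-first search over a small grid where flooded cells are impassable:

```python
from collections import deque

# Toy illustration of the flood scenario: find the shortest route over a
# road grid where some cells are blocked (flooded). Purely illustrative.

def shortest_path(grid, start, goal):
    """Breadth-first search; grid cells are True if passable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route: every path is blocked

if __name__ == "__main__":
    passable = [
        [True,  True,  False],
        [False, True,  True],
        [True,  True,  True],
    ]
    print(shortest_path(passable, (0, 0), (2, 2)))
```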

The human-robot interaction is primarily "behind" the robot- the operator, but also the numerous emergency personnel who will be using the data in different ways. There is work on having the robot stay with the person who is trapped and probably immobile- our Survivor Buddy project was the first to look at that.

Anyway, do check out my book and my papers. And I can answer follow up questions that you might have later.

Q. What is the easiest task that is still profoundly difficult to do with robotics (e.g. in garment manufacture)?

A. Anything with dextrous manipulation (aka fine motor skills and delicate movements) is very hard.

The rule of thumb is: if it is easy for a person, it will be hard for a computer or robot.

Examples besides manipulation are natural language and understanding images and scenes (not just identifying an object)... stuff we master by age 6.

Q. I am interested in robotics and would like to ask an expert such as Dr. Murphy what their recommendations would be for a beginner like me.

A. One piece of advice and one hopefully not too gratuitous recommendation… The advice is not to fall prey to the hype about AI and robotics. About 50% of what’s in WIRED, Popular Science, and even IEEE Spectrum is more hype and press releases than reality. Read and let it inspire you but don’t believe that “X is solved” or “Y is the solution”- there is no single solution. Can you imagine doctors saying that all cancer problems can be solved with treatment T? No, there’s prevention, genetics, surgery, all sorts of aspects. There’s no magic bullet in medicine and there’s no magic bullet in AI robotics.

The recommendation is to consider reading my two Robotics Through Science Fiction books (Robotics Through Science Fiction: Artificial Intelligence Explained Through Six Classic Robot Short Stories and Learn AI and Human-Robot Interaction Through Asimov’s I, Robot Stories; see roboticsThroughScienceFiction.com). I know that sounds so self-serving, but I wrote them to try to make AI robotics accessible to people with a general interest in science without dumbing down the material to the point that it couldn’t be used as an introductory text. I sincerely hope that they will be useful to people like you.

Q. Is it possible to build a 100% fail-safe against the systemic danger of AI becoming self-aware?

A. I don't accept the premise of a systemic danger from AI becoming self-aware. I am not convinced that we will build AGIs, though I believe we will build more narrow systems that are intelligent, and that they will become self-aware at some point in the very, very far future.

A great book is Gary Marcus' and Ernest Davis' Rebooting AI. They provide a fairly good overview of AI in an entertaining and accurate way.

Q. Have you read Stephen Hawking's last book? His opinion on AI and robotics was that we need a kill switch. I think that we shouldn't go far enough to need one, but knowing human nature, someone will.

A. I believe the "kill switch" is needed not to override technology but to override people and bad decision-making processes. One of my best friends has a PhD in computer science and a Doctor of Jurisprudence degree (yeah, she’s scary smart), and her observation was that it takes about 10 years for lawyers to catch up with abuses in technology that engineers and scientists should have stopped if they were actually following professional ethics. We teach professional ethics in our classes, but sometimes it doesn’t seem to sink in. The start-up culture of moving fast and grabbing market share even if the product is buggy, combined with the distortion of winning the “unicorn lottery” of big money, seems to entice otherwise rational people into making wild claims about their AI and robotics systems, throwing away any meaningful testing and safety precautions, and not thinking through their products. And despite two deaths with autonomous cars, no state has changed its regulations- again, the law is always slow to react. But this is not unique to robotics.

Another point is that drones are not assassinating people per se- defense policy makers are deciding that it is acceptable and desirable, which means someone will build that system. It's not that we have to worry about the future; we have to worry about now.

Noel Sharkey at the Foundation for Responsible Robotics is an expert on this, and I encourage you to check out their work. I am proud to be on the board.

Q. Do you think regulations and tech policy can ever keep up with the development of robotics, automation, and AI or are we at risk of letting these technologies go unchecked?

A. As I said in my answer about the kill switch above, a close friend with both a PhD in computer science and a Doctor of Jurisprudence degree observes that it takes about 10 years for lawyers to catch up with abuses in technology that engineers and scientists should have stopped if they were actually following professional ethics. The law is always slow to react, and the start-up culture of moving fast and chasing the “unicorn lottery” entices otherwise rational people into skipping meaningful testing and safety precautions. So yes, there is a real risk- but it is not unique to robotics.

Anyway, watch for my upcoming Science Robotics article on autonomous cars- I discuss regulations in science fiction and reality.

Q. What's the most exciting thing happening in robotics at the moment?

A. Actually, everything. I’m not trying to dodge the question, but I realize that I can’t really narrow it down. There are exciting robot morphologies (shapes) like snake robots, soft robots, and gimballed unmanned aerial systems like ELIOS that are becoming physically reliable, plus the whole Spot/ANYmal series. I find the advances in prosthetics inspiring and, like I said in an earlier post, when I first saw videos of the Cybathlon http://www.cybathlon.ethz.ch/, I got a bit weepy-eyed it was so inspiring. And it makes me very happy to see that people are beginning to see that deep learning isn’t the only form of artificial intelligence and isn’t sufficient for robotics- I’m a big fan of Gary Marcus’ and Ernest Davis’ new book Rebooting AI. You can check out my article on cyborgs here.

Q. If you read sci-fi books (not assuming you do), which contemporary sci-fi authors/books come close to capturing what robotics could be in the future, as imagined by the engineers themselves? I loved the “Robopocalypse” books, but do roboticists really envision this type of future when they think about what robots can, will, and/or should do for humankind?

A. I have two books on this and a blog!!! I LOVE science fiction. So please visit roboticsThroughScienceFiction.com! I also write for Science Robotics on when different concepts in robotics showed up in science fiction (usually about 20 years before they became "big" in the public eye, sometimes 100 years).

Robopocalypse, Kill Decision, Andromeda Evolution, and Artemis are recent books that are good. Robopocalypse and Andromeda Evolution were written by a PhD roboticist (Wilson), Kill Decision by a former info tech specialist (Suarez), and Artemis's Andy Weir was a software engineer, so it's no surprise that they are accurate.

But others have interesting ideas or teachable moments too. Here's a list of books that I have blogged on the accuracy of the content: https://www.roboticsthroughsciencefiction.com/scifi-reviews

And, yes, roboticists do think about what robots can, and should, do for humankind. I work with such people every day and hopefully train my students to think about the future and about ethical considerations.

Q. In literature, movies, comics, etc., futuristic versions of robots are often described as human-like, and they even behave like conscious, self-aware persons. Is it plausible and possible at all to imagine a machine having human-like thoughts and feelings when there is no human body and flesh, or biological foundation to their being? (With bodiless A.I. I think the question is fairly irrelevant, but with robots incorporating A.I. I am not so sure, as they have a body and can theoretically move around, change their point of view, etc.)

A. I honestly don't know. I often wonder if "human-like" is more a shared hallucination than something tangible and measurable that we all have. I do think it is plausible to have an intelligent agent with the intelligence of HAL in 2001: A Space Odyssey.

There are differences between intelligent software agents (bodiless AI) and physically situated agents (robots), but having a physical body wouldn't necessarily make a difference- software agents can see and act on their software IoT world. It's a lot easier to program a software agent, and having a 500-pound robot screw up in the real world is different from a screw-up in a virtual world. But an intelligent software agent for a video game may be the first self-aware or human-peer intelligence- because we are pouring money into that!

See the other two summaries of my AMA, covering questions on education and learning and some of the more random questions.

