Monday, November 21, 2016

Interview: Manuela Veloso, Head of Machine Learning, Carnegie Mellon University

Our readers probably know Carnegie Mellon better for its top-ranked financial engineering program (Master of Science in Computational Finance), but artificial intelligence was pretty much invented at CMU by Herbert Simon and Allen Newell. Simon received the Nobel in Economics, but it could just as easily have been for any of four or five subjects; he was quite the polymath.

Newell had to settle for the Turing Award (along with Simon) from the Association for Computing Machinery, probably the rootin'-tootin', high-falutin'est tchotchke in the computer biz.
The Association for the Advancement of Artificial Intelligence, along with the ACM, subsequently named an award in Newell's honor. Ditto for CMU.

The university's Machine Learning Department was the first in the world to offer a doctorate in the field and, as far as I know, is still the largest.
An entire department, for one branch of AI.

Carnegie Mellon used to have a world-class Robotics Institute, but Uber gutted it with a combination of cash and stock options, leaving a dean and a couple of robots to rebuild.
One of the robots is said to be in advanced negotiations with the Uber-sters.

Anyhoo, from The Verge:

Humanity and AI will be inseparable
By 2021, everyday software will be vastly more intelligent and powerful, replacing humans in more and more tasks. How will we keep up?

While some predict mass unemployment or all-out war between humans and artificial intelligence, others foresee a less bleak future. Professor Manuela Veloso, head of the machine learning department at Carnegie Mellon University, envisions a future in which humans and intelligent systems are inseparable, bound together in a continual exchange of information and goals that she calls “symbiotic autonomy.” In Veloso’s future, it will be hard to distinguish human agency from automated assistance — but neither people nor software will be much use without the other.

Veloso is already testing out the idea on the CMU campus, building roving, Segway-shaped robots called “cobots” to autonomously escort guests from building to building and ask for human help when they fall short. It’s a new way to think about artificial intelligence, and one that could have profound consequences in the next five years.
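For a rough sense of what "asking for human help when they fall short" might look like in practice, here is a minimal sketch of a symbiotic-autonomy style task loop. The class and method names are hypothetical illustrations, not CoBot's actual interface:

```python
# Illustrative sketch of symbiotic autonomy: the robot executes the steps it
# can handle on its own and explicitly asks a nearby human for help with the
# steps it cannot (e.g., pressing an elevator button). All names are hypothetical.

from dataclasses import dataclass


@dataclass
class Step:
    description: str       # human-readable description of the step
    robot_capable: bool    # can the robot do this step without help?


def escort_visitor(route: list[Step]) -> None:
    """Walk a visitor along a route, delegating impossible steps to humans."""
    for step in route:
        if step.robot_capable:
            print(f"[robot] executing: {step.description}")
        else:
            # Symbiotic autonomy: instead of failing, proactively request help.
            answer = input(f"[robot] I can't {step.description}. "
                           f"Could you do it for me? (y/n) ")
            if answer.strip().lower() != "y":
                print("[robot] Waiting for another passer-by to help...")


if __name__ == "__main__":
    escort_visitor([
        Step("navigate to the elevator lobby", True),
        Step("press the elevator call button", False),  # the robot has no arms
        Step("navigate to the guest's meeting room", True),
    ])
```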

We sat down with Veloso in Pittsburgh to talk about robots, programming spontaneity, and the challenge artificial intelligence poses for humanity.
The Interview
One of the big trends we’ve seen over the last five years is automation. At the same time, we’re also seeing more intelligence built into tools we already have, like phones and computers. Where do you see this process in five years?

In the future, I believe that there will be a co-existence between humans and artificial intelligence systems that will be hopefully of service to humanity. These AI systems will involve software systems that handle the digital world, and also systems that move around in physical space, like drones, and robots, and autonomous cars, and also systems that process the physical space, like the Internet of Things.

You will have more intelligent systems in the physical world, too — not just on your cell phone or computer, but physically present around us, processing and sensing information about the physical world and helping us with decisions that include knowing a lot about features of the physical world. As time goes by, we’ll also see these AI systems having an impact on broader problems in society: managing traffic in a big city, for instance; making complex predictions about the climate; supporting humans in the big decisions they have to make.

Right now, some of those systems can seem very ominous. When an algorithm or a robot makes a decision, we don’t always know why it made that decision, which can make it hard to trust. How can technologists address that?

One of the things I’m working on is that I would like these machines to be able to explain themselves — to be accountable for the decisions they make, to be transparent. A lot of the research we do is letting humans or users query the system. When Cobot, my robot, arrives at my office slightly late, I can say, "Why are you late?" or "Which route did you take?"

So we are working on the ability for these AI systems to explain themselves, while they learn, while they improve, in order to provide explanations with different levels of detail. We want to interact with these robots in ways that make us humans eventually trust AI systems more. You would like to be able to say, "Why are you saying that?" or "Why are you recommending this?" Providing that explanation is a lot of the research that I am doing now, and I believe robots being able to do that will lead to better understanding and trust in these AI systems. Eventually, through these interactions, humans are also going to be able to correct the AI systems. So we’re also doing research trying to incorporate these corrections and have the systems learn from instruction. I think that’s a big part of our ability to coexist with these AI systems.
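To make the "Why are you late?" / "Which route did you take?" idea concrete, here is a small sketch of a query-able execution log that can produce explanations at different levels of detail. The data model and canned answers below are assumptions for illustration only, not CoBot's actual implementation:

```python
# Illustrative sketch of an execution log a robot could use to answer
# questions like "Which route did you take?" or "Why are you late?".
# The structure and wording are hypothetical, not the real CoBot system.

from dataclasses import dataclass, field


@dataclass
class TripLog:
    planned_minutes: float
    segments: list[tuple[str, float]] = field(default_factory=list)  # (corridor, minutes)
    delays: list[str] = field(default_factory=list)                  # reasons for lost time

    def record(self, corridor: str, minutes: float, delay_reason: str | None = None) -> None:
        self.segments.append((corridor, minutes))
        if delay_reason:
            self.delays.append(delay_reason)

    def which_route(self) -> str:
        # Coarse explanation: just the sequence of corridors traversed.
        return " -> ".join(corridor for corridor, _ in self.segments)

    def why_late(self) -> str:
        # Finer explanation: compare actual vs. planned time and cite delays.
        actual = sum(minutes for _, minutes in self.segments)
        if actual <= self.planned_minutes:
            return "I was not late."
        reasons = "; ".join(self.delays) or "unexpected slowdowns along the way"
        return (f"I took {actual:.1f} min instead of {self.planned_minutes:.1f} min "
                f"because of: {reasons}.")


if __name__ == "__main__":
    log = TripLog(planned_minutes=4.0)
    log.record("7th-floor corridor", 1.5)
    log.record("elevator bank", 3.0, delay_reason="waited for a human to press the button")
    log.record("office wing", 1.0)
    print(log.which_route())
    print(log.why_late())
```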

Why do you think these systems are improving so quickly now? What was holding us back over the last 50 years of AI research?

You have to understand, for an AI system to know what’s a cell phone or what’s a cup or whether a person is healthy, you need knowledge. A lot of [AI] research in the early days was actually acquiring [that] knowledge. We would have to ask humans. We would have to go to books and manually enter that information into the computer.

Magically, in the last few years, more and more of this information is digital. It seems that the world reveals itself on the internet...MUCH MORE