AI / Robotics / Transportation

NYU’s Gary Marcus is an artificial intelligence contrarian

He sold his startup to Uber and helped the ride-hailing company launch its AI lab. But he's not sold on the rise of the machines just yet.

It's not how intelligent the machines are; it's how much control we give them, says Gary Marcus. (Photo by Flickr user normalityrelief, used under a Creative Commons license)

As we’ve reported, NYU Tandon is making a bid for New York City to become the capital city of artificial intelligence. One graduate of its Future Labs incubator, Geometric Intelligence, has already made a big splash in the field: last year, it was acquired by Uber, where its founder, Gary Marcus, launched the ride-hailing company’s R&D lab for artificial intelligence.

Based upon that accomplishment, you might think Marcus, a professor of psychology at NYU, would be one of the biggest cheerleaders for the potential of AI. Instead, he’s consistently thrown cold water on the grandiose notions that are commonly disseminated in the media.

IBM’s Watson might have beaten Ken Jennings at Jeopardy!, but in Marcus’s view, the average AI system isn’t smarter than a fifth-grader: it can’t make abstractions, and it can’t converse naturally.

Marcus left Uber last month and is now weighing his next steps, he told Technical.ly. In the meantime, he has plenty to say about the hype surrounding his industry.

We spoke with him about what he thinks it will take to bring about a legitimate rise of the machines.

###

Technical.ly Brooklyn: While others acknowledge that artificial general intelligence is a long way off, the overall sentiment seems to be a lot more optimistic than yours. Do you find yourself feeling like a bit of a contrarian?

Gary Marcus: Yeah, I mean, there’s no question that I’m taking a contrarian view. My view is that people are very enthusiastic about something that represents only a small part of what we need to actually accomplish. Obviously, different people are going to emphasize different things. I come from a background in language acquisition, where the core question is really how a two- or three-year-old child can learn to understand the world and learn to talk.

I feel like machines just haven’t made progress on those kinds of things. They have made progress on, for example, speech recognition. But that’s not language understanding; that’s just transcription.

TB: How do you think the field needs to adjust its approach in order to move closer to artificial general intelligence?

GM: The approach that I’m urging on the field is to take the cognitive sciences more seriously. Especially developmental psychology, developmental cognitive science. I think that human children do a lot of things that machines haven’t been able to do yet. They’re able to draw inferences from small amounts of data. They’re able to learn a very complex language. They seem to have a lot of — at least in my view — innate structures that help them get started. The dominant approach in machine learning right now is to find statistical approximations without a lot of prior knowledge, and I don’t think it’s competitive with human children in domains like language and everyday, common-sense reasoning.

TB: In your talk at the AI Summit, you pointed out self-driving cars as something we haven’t made much progress on. Is that something you’re optimistic about in the near future?

GM: I have no doubt that self-driving cars will eventually be safe enough and reliable enough that they actually replace human drivers. I don’t think it’s as close as some people might think. It’s certainly not going to happen, say, in the next year. I think it might take a decade.

The problem there is that there are lots of edge cases.

It’s pretty easy to train a neural network to drive straight down a highway in good traffic conditions. But that doesn’t mean that we understand how to robustly engineer for all the unusual edge cases that don’t happen all the time, like a truck turning left across a highway, which is what happened in the fatal Tesla crash. [Editor’s note: Tesla was cleared of fault by the National Highway Traffic Safety Administration.] So I think there’s a long way to go to get the reliability to where we’re really comfortable with it.

TB: Along with all the optimism about AI, there’s been alarm about its downsides. One in particular is its potential to amplify bias against certain demographics in applications such as, say, finance. Could the more human-brain-like approach you advocate address that?

GM: I mean, there’s no magic bullet there. Every algorithm has bias. I think if you can learn some of the techniques that humans use for generalization, that’s going to be really useful in medical discovery, but it’s not a magic cure-all for all of the things that trouble AI. Logically speaking, every system has a bias. That’s just the nature of the game. That includes people, that includes machines.

TB: Do you think AI has the potential to mitigate that bias a lot more effectively than humans can?

GM: Well, you can mitigate the biases that you know about. So I think computers can be useful tools for mitigating bias, but I don’t think people should be naive in thinking that it can be magically eliminated. There may always be biases we don’t know about, for example. I don’t think there’s a magic bullet there.

TB: In addition to concrete drawbacks like bias, there are plenty of doomsday predictions about what might happen if and when the machines take over. It seems like, from your perspective, that’s not likely to happen.

GM: I wouldn’t say that. I mean, a lot of people think that whatever AI risk there is, is tied to super-intelligence. And I’d say it’s not really about how intelligent the machines are; it’s about how much power they have, how much they can directly control things like the energy grid and the stock market and so forth. There’s some risk even if they’re not that intelligent.

The analogy I use is, teenagers may not be the most intelligent, and they’re certainly not the most emotionally intelligent, but they’re pretty powerful. So maybe we should be worried about teenage machines at some level: the machines that have very strong cognitive ability but limited ethical abilities, for example.

TB: One last question: what problems are you working on right at this moment?

GM: I’m fundamentally interested in a pair of questions right now. One of them is definitely artificial general intelligence and how we can make such strong inferences from little data, and the other is essentially how the brain works. I think that there’s a lot of work to be done in both fields, and I’m choosing where I’m going to fit into that next.

Series: Brooklyn