Last summer, NYU Tandon and ff Venture Capital announced the launch of their jointly run artificial intelligence incubator, AI NexusLab. The stated goal then, as ffVC founding partner John Frankel put it, was to make New York City a prime destination for technologists pursuing one of the hottest areas in the industry.
Frankel reiterated that mission Wednesday at the Future Labs AI Summit, held at NYU’s Skirball Center, where the incubator’s inaugural class of five startups (which we’ll be covering in a separate post) unveiled their work.
“We had a thesis coming into this that New York should be an artificial intelligence hub,” Frankel said, rattling off a series of data points about the city’s tech cred. Among them: there are some 2,400 open data science positions in the metro area right now.
The afternoon event was bookended by plenty of excitement about the field’s growth. Anand Sanwal, the CEO of CB Insights, which co-sponsored the event, closed the summit with a presentation showing just how much funding for AI has swelled in the past few years: $14.9 billion since 2012. The momentum has continued into this year, with the first quarter of 2017 bringing in the most funding for AI since CB Insights began tracking it.
Machine learning, computer vision, you name it — the money is flowing freely.
“You should throw [those words] all over your pitch decks,” Sanwal told the audience, which featured a significant number of entrepreneurs. “VCs love it.”
https://twitter.com/ShawnVo/status/849662724078530560
Indeed, talk of AI is everywhere these days, from better managers to robotic composers. And with it, inevitably, come doomsday scenarios about machines taking over the world. But is an artificial intelligence takeover really all that imminent? Several speakers at the Future Labs AI Summit suggested there are caveats to all the hype.
Among them was NYU professor Gary Marcus, who founded Geometric Intelligence, the AI company out of Future Labs that Uber acquired last year. In his talk, Marcus threw cold water on the grandiose predictions of technologists such as Ray Kurzweil that artificial general intelligence — in other words, computers that think like humans — is anywhere close to arrival. And it’s not just the “singularity”: machines are still fairly unreliable at tasks such as language processing and market prediction, he said, even though technologists have predicted the advent of such capabilities for decades.
“We keep getting promised it but it never arrives,” he said.
Marcus pointed to some shortcomings of artificial intelligence as it stands today: namely, that it relies on data from the past but is largely unable to make inferences about the future. (When such systems are used in the justice system, for example, this leads to all sorts of problems.) To move toward the goal of artificial general intelligence, Marcus called for greater interdisciplinary collaboration, looping in experts from fields such as linguistics and cognitive science, the latter of which is his own area of scholarship.
The keynote speaker, Yann LeCun, NYU professor and Facebook’s director of AI research, reinforced several of Marcus’s caveats but took a more optimistic tone regarding the advancement of artificial general intelligence. In fact, he laid out exactly how researchers are working to make it happen, in a slideshow full of schematics and mathematical equations. The Holy Grail for AI, he said, is known as unsupervised learning, in which computers can make predictions based on an understanding of the world.
“To me, that’s the essence of intelligence: to predict the future, to fill in the blanks, etc.,” he said.
How could machines acquire common sense? @ylecun of @facebook at @NYUFutureLabs #flsummit #ai #machinelearning pic.twitter.com/jDC0HCJdna
— Daren McKelvey (@DarenMcKelvey) April 5, 2017
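LeCun’s “fill in the blanks” framing can be made concrete with a toy example. The sketch below is not code from the talk; it simply trains a small network to predict a masked value in a sine-wave sequence using only the data itself as supervision, which is the basic shape of the self-supervised prediction he described. The library choice (PyTorch) and every name and hyperparameter here are illustrative assumptions.

```python
# Toy "fill in the blank" self-supervised learning sketch (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: sine-wave windows of length 5; the model must predict the
# hidden middle value from the four surrounding values (the "blank").
t = torch.linspace(0, 20, 2000)
series = torch.sin(t)
windows = series.unfold(0, 5, 1)                              # (N, 5) sliding windows
context = torch.cat([windows[:, :2], windows[:, 3:]], dim=1)  # the 4 visible values
target = windows[:, 2:3]                                      # the masked middle value

model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(context), target)  # supervision comes from the data itself
    loss.backward()
    opt.step()

print(f"final fill-in-the-blank error: {loss.item():.5f}")
```

No human ever labels anything here; the "answer" is just the part of the data the model wasn't shown, which is what makes the setup unsupervised in LeCun's sense.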
LeCun walked through an approach known as adversarial training, in which one AI system generates predictions and a second learns to distinguish plausible predictions from implausible ones, pushing the first to improve. With this approach, computers have been trained to draw completely original images of discrete objects and to add frames to a video based on what was previously recorded. But those capabilities only go so far: ask the computer to create a high-resolution image of a dog or a full-length video and, as he showed the audience, the results are pretty comical.
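To give a rough sense of that generator-versus-critic dynamic (again, a sketch under assumed settings, not anything presented at the summit), the toy example below pits a tiny generator against a discriminator until the generator learns to mimic a simple one-dimensional Gaussian. All layer sizes, learning rates and names are assumptions made for the demo.

```python
# Minimal adversarial-training sketch in PyTorch (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator maps random noise to a sample; discriminator scores how "real" a sample looks.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real_mean, real_std = 3.0, 0.5  # the "real world" the generator must imitate

for step in range(3000):
    # 1) Train the discriminator to tell real samples from generated ones.
    real = real_mean + real_std * torch.randn(64, 1)
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to produce samples the discriminator accepts as real.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
print(f"generated mean {samples.mean():.2f}, std {samples.std():.2f}")
```

The same adversarial pressure, scaled up enormously, is what lets the image- and video-generation systems LeCun showed produce plausible, if still low-resolution, output.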
LeCun acknowledged as much, echoing Marcus’s opinion that artificial general intelligence will take much longer than a matter of years. He also dismissed the notion of rogue machines taking over the world. Adversarial training, he said, could also be used to align machines with human values. Plus, he added, it’s not a given that smart machines would even have such inclinations.
“Our basic drives are hardwired by evolution, but AI doesn’t have those,” he said. “It’s not clear that machines will have a preservation instinct.”