Barack Obama Talks AI, Robo Cars, and the Future of the World

It's hard to think of a single technology that will shape our world more in the next 50 years than artificial intelligence. As machine learning enables our computers to teach themselves, a wealth of breakthroughs emerges, ranging from medical diagnostics to cars that drive themselves. A whole lot of worry emerges as well. Who controls this technology? Will it take over our jobs? Is it dangerous? President Obama was eager to address these concerns. The person he wanted to talk to most about them? Entrepreneur and MIT Media Lab director Joi Ito. So I sat down with them in the White House to sort through the hope, the hype, and the fear around AI. That and maybe just one quick question about Star Trek. —Scott Dadich

Scott Dadich: Thank you both for being here. How's your day been so far, Mr. President?

Barack Obama: Busy. Productive. You know, a couple of international crises here and there.

Dadich: I want to center our conversation on artificial intelligence, which has gone from science fiction to a reality that's changing our lives. When was the moment you knew that the age of real AI was upon us?

Obama: My general observation is that it has been seeping into our lives in all sorts of ways, and we just don't notice; and part of the reason is because the way we think about AI is colored by popular culture. There's a distinction, which is probably familiar to a lot of your readers, between generalized AI and specialized AI. In science fiction, what you hear about is generalized AI, right? Computers start getting smarter than we are and eventually conclude that we're not all that useful, and then either they're drugging us to keep us fat and happy or we're in the Matrix. My impression, based on talking to my top science advisers, is that we're still a reasonably long way away from that. It's worth thinking about because it stretches our imaginations and gets us thinking about the issues of choice and free will that actually do have some significant applications for specialized AI, which is about using algorithms and computers to figure out increasingly complex tasks. We've been seeing specialized AI in every aspect of our lives, from medicine and transportation to how electricity is distributed, and it promises to create a vastly more productive and efficient economy. If properly harnessed, it can generate enormous prosperity and opportunity. But it also has some downsides that we're gonna have to figure out in terms of not eliminating jobs. It could increase inequality. It could suppress wages.

Joi Ito: This may upset some of my students at MIT, but one of my concerns is that it's been a predominantly male gang of kids, mostly white, who are building the core computer science around AI, and they're more comfortable talking to computers than to human beings. A lot of them feel that if they could just make that science-fiction, generalized AI, we wouldn't have to worry about all the messy stuff like politics and society. They think machines will just figure it all out for us.

Obama: Right.

Ito: But they underestimate the difficulties, and I feel like this is the year that artificial intelligence becomes more than just a computer science problem. Everybody needs to understand that how AI behaves is important. In the Media Lab we use the term extended intelligence. Because the question is, how do we build societal values into AI?

Obama: When we had lunch a while back, Joi used the example of self-driving cars. The technology is essentially here. We have machines that can make a bunch of quick decisions that could drastically reduce traffic fatalities, drastically improve the efficiency of our transportation grid, and help solve things like carbon emissions that are causing the warming of the planet. But Joi made a very elegant point, which is, what are the values that we're going to embed in the cars? There are gonna be a bunch of choices that you have to make, the classic problem being: If the car is driving, you can swerve to avoid hitting a pedestrian, but then you might hit a wall and kill yourself. It's a moral decision, and who's setting up those rules?

Ito: When we did the car trolley problem, we found that most people liked the idea that the driver and the passengers could be sacrificed to save many people. They also said they would never buy a self-driving car. [Laughs.]

Dadich: As we start to get into these ethical questions, what is the role of government?

Obama: The way I've been thinking about the regulatory structure as AI emerges is that, early in a technology, a thousand flowers should bloom. And the government should add a relatively light touch, investing heavily in research and making sure there's a conversation between basic research and applied research. As technologies emerge and mature, then figuring out how they get incorporated into existing regulatory structures becomes a tougher problem, and the government needs to be involved a little bit more. Not always to force the new technology into the square peg that exists but to make sure the regulations reflect a broad-based set of values. Otherwise, we may find that it's disadvantaging certain people or certain groups.

Ito: I don't know if you've heard of the neurodiversity movement, but Temple Grandin talks about this a lot. She says that Mozart and Einstein and Tesla would all be considered autistic if they were alive today.

Obama: They might be on the spectrum.

Ito: Right, on the spectrum. And if we were able to eliminate autism and make everyone neuro-normal, I bet a whole slew of MIT kids would not be the way they are. One of the problems, whether we're talking about autism or just diversity broadly, is when we allow the market to decide. Even though you probably wouldn't want Einstein as your kid, saying "OK, I just want a normal kid" is not gonna lead to maximum societal benefit.

Obama: That goes to the larger issue that we wrestle with all the time around AI. Part of what makes us human are the kinks. They're the mutations, the outliers, the flaws that create art or the new invention, right? We have to assume that if a system is perfect, then it's static. And part of what makes us who we are, and part of what makes us alive, is that we're dynamic and we're surprised. One of the challenges that we'll have to think about is, where and when is it appropriate for us to have things work exactly the way they're supposed to, without surprises?

Dadich: When we're talking about that extended intelligence as it applies to government, private industry, and academia, where should the center of that research live, if there even is a center?

Ito: I think MIT would argue that it should be at MIT. [Laughs.] Historically it probably would have been a group of academics with help from a government. But right now, most of the billion-dollar labs are in business.

Obama: I've tried to emphasize that just because the government is financing it and helping to collect the data doesn't mean that we hoard it or only the military has it. To give a very concrete example: Part of our project in precision medicine is to gather a big enough database of human genomes from a diverse enough set of Americans. But instead of giving money to Stanford or Harvard, where they're hoarding their samples, we now have this entire genetic database that everybody has access to. There is a common set of values, a common architecture, to ensure that the research is shared and not monetized by one group.

Dadich: But there are certainly some risks. We've heard from folks like Elon Musk and Nick Bostrom who are concerned about AI's potential to outpace our ability to understand it. As we move forward, how do we think about those concerns as we try to protect not only ourselves but humanity at scale?

Obama: Let me start with what I think is the more immediate concern—it's a solvable problem in this category of specialized AI, and we have to be mindful of it. If you've got a computer that can play Go, a pretty complicated game with a lot of variations, then developing an algorithm that lets you maximize profits on the New York Stock Exchange is probably within sight. And if one person or organization got there first, they could bring down the stock market pretty quickly, or at least they could raise questions about the integrity of the financial markets. Then there could be an algorithm that said, "Go penetrate the nuclear codes and figure out how to launch some missiles." If that's its only job, if it's self-teaching and it's just a really effective algorithm, then you've got problems. I think my directive to my national security team is, don't worry as much yet about machines taking over the world. Worry about the capacity of either nonstate actors or hostile actors to penetrate systems, and in that sense it is not conceptually different than a lot of the cybersecurity work we're doing. It just means that we're gonna have to be better, because those who might deploy these systems are going to be a lot better now.

Ito: I generally agree. The only caveat is that there are a few people who believe that there is a fairly high-percentage chance that a generalized AI will happen in the next 10 years. But the way I look at it is that in order for that to happen, we're going to need a dozen or two different breakthroughs. So you can monitor when you think these breakthroughs will happen.

Obama: And you just have to have somebody close to the power cord. [Laughs.] Right when you see it about to happen, you gotta yank that electricity out of the wall, man.

Ito: What's important is to find the people who want to use AI for good—communities and leaders—and figure out how to help them use it.

Ito: It's actually nonintuitive which jobs get displaced, because I would bet if you had a computer that understood the medical system, was very good at diagnostics and such, the nurse or the pharmacist is less likely than the doctor to be replaced—they are less expensive. There are actually very high-level jobs, things like lawyers or auditors, that might disappear. Whereas a lot of the service businesses, the arts, and occupations that computers aren't well suited for won't be replaced. I don't know what you think about universal basic income, but as we start to see people getting displaced there's also this idea that we can look at other models—like academia or the arts, where people have a purpose that isn't tied directly to money. I think one of the problems is that there's this general notion of, how can you be smart if you don't have any money? In academia, I see a lot of smart people without money.

Obama: You're exactly right, and that's what I mean by redesigning the social compact. Now, whether a universal income is the right model—is it gonna be accepted by a broad base of people?—that's a debate that we'll be having over the next 10 or 20 years. You're also right that the jobs that are going to be displaced by AI are not just low-skill service jobs; they might be high-skill jobs but ones that are repeatable and that computers can do. What is indisputable, though, is that as AI gets further incorporated, and the society potentially gets wealthier, the link between production and distribution, how much you work and how much you make, gets further and further attenuated—the computers are doing a lot of the work. As a consequence, we have to make some tougher decisions. We underpay teachers, despite the fact that it's a really hard job and a really hard thing for a computer to do well. So for us to reexamine what we value, what we are collectively willing to pay for—whether it's teachers, nurses, caregivers, moms or dads who stay at home, artists, all the things that are incredibly valuable to us right now but don't rank high on the pay totem pole—that's a conversation we need to begin to have.

Scott Dadich (@sdadich) is the editor in chief of WIRED. This article appears in the November 2016 issue. This interview has been edited and condensed.
