News

How can we harness the power of superintelligent AI while ensuring strategically superhuman systems are not used in ways we’d regret? As we move closer toward creating all-knowing machines, AI pioneer Stuart Russell is working on something a bit different: robots with uncertainty. Hear his vision for human-compatible AI that can solve problems using common sense, altruism and other human values.

AI systems need to help humans and humanity. I believe that for them to do that well, we need a new definition of AI — one that takes humans and humanity into account explicitly. That’s why I’m excited to share that I’m leading the upcoming AI4ALL education program at UC Berkeley, BAIR Camp, where high school students will explore human-centered — or humanistic — AI.

Oren Etzioni, a well-known AI researcher, complains about news coverage of potential long-term risks arising from future success in AI research (see “No, Experts Don’t Think Superintelligent AI is a Threat to Humanity”). After pointing the finger squarely at Oxford philosopher Nick Bostrom and his recent book, Superintelligence, Etzioni objects that Bostrom’s “main source of data on the advent of human-level intelligence” consists of surveys of the opinions of AI researchers. He then conducts his own survey of AI researchers’ opinions, arguing that his results refute Bostrom’s. It’s important to understand that Etzioni is not even addressing the reason Superintelligence has had the impact he decries: its clear explanation of why superintelligent AI may have arbitrarily negative consequences and why it’s important to begin addressing the issue well in advance. Bostrom does not base his case on predictions that superhuman AI systems are imminent. He writes, “It is no part of the...

Carnegie Mellon University plans to announce on Wednesday that it will create a research center focused on the ethics of artificial intelligence. The ethics center, called the K&L Gates Endowment for Ethics and Computational Technologies, is being established at a time of growing international concern about the impact of AI technologies. That concern has already led to an array of academic, governmental and private efforts to explore a technology that until recently was largely the stuff of science fiction.

The world’s biggest technology companies are joining forces to consider the future of artificial intelligence. Amazon, Google’s DeepMind, Facebook, IBM and Microsoft will work together on issues such as privacy, safety and collaboration between people and AI. Dubbed the Partnership on Artificial Intelligence, the group will also include external experts, one of whom said he hoped it would address “legitimate concerns”. “We’ve seen a very fast development in AI over a very short period of time,” said Prof Yoshua Bengio, from the University of Montreal. “The field brings exciting opportunities for companies and public organisations. And yet, it raises legitimate questions about the way these developments will be conducted.”

When we look at the rise of artificial intelligence, it’s easy to get carried away with dystopian visions of sentient machines that rebel against their human creators. Fictional baddies such as the Terminator’s Skynet or HAL from 2001: A Space Odyssey have a lot to answer for. However, the real risk posed by AI – at least in the near term – is much more insidious. It’s far more likely that robots would inadvertently harm or frustrate humans while carrying out our orders than that they would become conscious and rise up against us. In recognition of this, the University of California, Berkeley has this week launched a center focused on building people-pleasing AIs.

The Open Philanthropy Project awarded a grant of $5,555,550 over five years to UC Berkeley to support the launch of a Center for Human-Compatible Artificial Intelligence (AI), led by Professor Stuart Russell. We believe the creation of an academic center focused on AI safety has significant potential benefits in terms of establishing AI safety research as a field and making it easier for researchers to learn about and work on this topic.

UC Berkeley artificial intelligence (AI) expert Stuart Russell will lead a new Center for Human-Compatible Artificial Intelligence, launched this week. Russell, a UC Berkeley professor of electrical engineering and computer sciences and the Smith-Zadeh Professor in Engineering, is co-author of Artificial Intelligence: A Modern Approach, widely considered the standard text in the field, and has long been an advocate for incorporating human values into the design of AI. The primary focus of the new center, he said, is to ensure that AI systems are beneficial to humans.

As the man who co-wrote the definitive textbook on artificial intelligence, Stuart Russell is well qualified to speculate about the future of AI. The UC Berkeley computer science professor is confident that the field will continue to advance at a breakneck pace. With the prospect that computers and robots will become as smart as humans, he says it’s time to begin working out how to get these intelligent machines to share our values. Progress in artificial intelligence is accelerating rapidly, Russell said, as evidenced by Google DeepMind’s AlphaGo teaching itself to play the notoriously complex game of Go to a standard where it recently beat the world champion Lee Sedol.