News

What can AI Learn from Human Exploration? Intrinsically-Motivated Humans and Agents in Open-World Exploration

16 Dec 2023

In their paper, which was selected for an oral presentation at IMOL@NeurIPS2023, the authors compare human and AI agent exploration in a complex, open-ended environment.

AI heralds a ‘fourth industrial revolution.’ Why isn’t America regulating it?

11 Dec 2023

The current approach to AI is a reflection of enormous power imbalances between the tech giants and national governments. What happens when a globe-spanning corporation becomes so powerful that even nations must answer to it?

Mitigating Generative Agent Social Dilemmas

04 Dec 2023

The authors of this paper find evidence that social dilemmas involving generative agents can be mitigated with contracting and negotiation.

Orienting AI Toward Peace

21 Nov 2023

Jonathan Stray presented a talk outlining a three-part strategy to ensure that AI systems do not inadvertently escalate political conflict as a result of misaligned optimization and are resistant to bad conflict actors.

Human Compatible Reissued in the UK in 2023

15 Nov 2023

On September 28th, 2023, Stuart Russell’s book “Human Compatible: AI and the Problem of Control” was updated and reissued in the UK.

CS Student at UC Berkeley Develops Tech to Combat Social Media Harms

06 Nov 2023

Sana Pandey, an intern at CHAI, was featured on CBS News in the Bay Area for her work on recommender system alignment. She discussed what drew her to recommender systems and her ongoing work with Jonathan Stray on integrating alternatives to engagement into optimization frameworks. The interview also featured Mark Nitzberg, who explained the real-world applications and relevance of the project.

Prominent AI Scientists from China and the West Propose Joint Strategy to Mitigate Risks from AI

31 Oct 2023

Ahead of the highly anticipated AI Safety Summit, leading AI scientists from the US, the PRC, the UK and other countries agreed on the importance of global cooperation and jointly called for research and policies to prevent unacceptable risks from advanced AI.

AI Safety Summit Hosted by the UK Government

26 Oct 2023

As artificial intelligence rapidly advances, so do both the opportunities and the risks.

Managing AI Risks in an Era of Rapid Progress

24 Oct 2023

In this short consensus paper, the authors outline risks from upcoming, advanced AI systems. They examine large-scale social harms and malicious uses, as well as an irreversible loss of human control over autonomous AI systems. In light of rapid and continuing AI progress, they propose urgent priorities for AI R&D and governance.

Expertise Trees Resolve Knowledge Limitations in Collective Decision-Making

12 Oct 2023

Experts advising decision-makers are likely to display expertise that varies with the problem instance. In practice, this may lead to sub-optimal or discriminatory decisions against minority cases.
