
Mitigating Generative Agent Social Dilemmas

04 Dec 2023

The authors of this paper find evidence that social dilemmas involving generative agents can be mitigated with contracting and negotiation.

Orienting AI Toward Peace

21 Nov 2023

Jonathan Stray presented a talk outlining a three-part strategy to ensure that AI systems do not inadvertently escalate political conflict through misaligned optimization and are resistant to malicious conflict actors.

Human Compatible Reissued in the UK in 2023

15 Nov 2023

On September 28th, 2023, Stuart Russell’s book “Human Compatible: AI and the Problem of Control” was updated and reissued in the UK.

CS Student at UC Berkeley Develops Tech to Combat Social Media Harms

06 Nov 2023

Sana Pandey, an intern at CHAI, was featured on CBS News in the Bay Area for her work in recommender system alignment. She discussed what drew her to recommender systems and her ongoing work with Jonathan Stray on integrating alternatives to engagement into optimization frameworks. The interview also featured Mark Nitzberg, who explained the real-world applications and relevance of the project.

Prominent AI Scientists from China and the West Propose Joint Strategy to Mitigate Risks from AI

31 Oct 2023

Ahead of the highly anticipated AI Safety Summit, leading AI scientists from the US, the PRC, the UK and other countries agreed on the importance of global cooperation and jointly called for research and policies to prevent unacceptable risks from advanced AI.

AI Safety Summit by UK Government

26 Oct 2023

As Artificial Intelligence rapidly advances, so do the opportunities and the risks.

Managing AI Risks in an Era of Rapid Progress

24 Oct 2023

In this short consensus paper, the authors outline risks from upcoming, advanced AI systems. They examine large-scale social harms and malicious uses, as well as an irreversible loss of human control over autonomous AI systems. In light of rapid and continuing AI progress, they propose urgent priorities for AI R&D and governance.

Expertise Trees Resolve Knowledge Limitations in Collective Decision-Making

12 Oct 2023

Experts advising decision-makers are likely to display expertise that varies with the problem instance. In practice, this may lead to sub-optimal or discriminatory decisions against minority cases.

Announcement of Working Group on AI

03 Oct 2023

The Partnership on Information and Democracy has acknowledged the pressing need to develop democratic principles and rules to govern AI in the information space. Democracies and their institutions must decide the ethical use and safeguards for the development, deployment, and use of AI. This cannot be left to the private sector, which is currently setting the rules of the game. The history of social media illustrates the danger of allowing tech companies to set the rules and ethical norms. Countries must act to safeguard a democratic and trustworthy information space.

ACROCPoLis: A Descriptive Framework for Making Sense of Fairness

26 Sep 2023

Fairness is central to the ethical and responsible development and use of AI systems, and a large number of frameworks and formal notions of algorithmic fairness are available. However, many of the proposed fairness solutions revolve around technical considerations rather than the needs of, and consequences for, the most impacted communities.
