Social Media is Polluting Society. Content Moderation Alone Won’t Fix the Problem

10 Oct 2022

In "Social Media Is Polluting Society. Content Moderation Alone Won't Fix the Problem," published in the MIT Technology Review, CHAI's Thomas Krendl Gilbert argues that even if content moderation on social media were implemented perfectly, it would still miss a whole host of issues that are often portrayed as moderation problems but really are not. To address those non-speech issues, he explains, we need a new strategy: treat social media companies as potential polluters of the social fabric, and directly measure and mitigate the effects their choices have on human populations. That means establishing a policy framework, perhaps through something akin to an Environmental Protection Agency or Food and Drug Administration for social media, that can be used to identify and evaluate the societal harms generated by these platforms. If those harms persist, that body could be empowered to enforce its policies. But to transcend the limitations of content moderation, such regulation would have to be motivated by clear evidence and have a demonstrable impact on the problems it purports to solve.

Relational Abstractions for Generalized Reinforcement Learning on Symbolic Problems

03 Oct 2022

In Relational Abstractions for Generalized Reinforcement Learning on Symbolic Problems, CHAI's Siddharth Srivastava argues that reinforcement learning in problems with symbolic state spaces is challenging due to the need for reasoning over long horizons. The paper presents a new approach that uses relational abstractions in conjunction with deep learning to learn a generalizable Q-function for such problems. The learned Q-function can be efficiently transferred to related problems that have different object names and object quantities, and thus entirely different state spaces. The authors show that the learned, generalized Q-function can be used for zero-shot transfer to related problems without an explicit, hand-coded curriculum. Empirical evaluations on a range of problems show that the method facilitates efficient zero-shot transfer of learned knowledge to much larger problem instances containing many objects.
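The core idea, a Q-function that does not depend on object identities, can be illustrated with a short sketch. This is not the paper's implementation: it substitutes a tabular learner and a crude predicate-count abstraction for the paper's relational abstractions and deep network, and all predicate and action names below are hypothetical.

from collections import defaultdict

def abstract_state(facts):
    # Map a set of ground relational facts such as ("on", "a", "b") to
    # object-agnostic features: counts of each predicate. Object names
    # drop out, so problems that differ only in object naming collapse
    # to the same abstract state.
    counts = defaultdict(int)
    for predicate, *_objects in facts:
        counts[predicate] += 1
    return tuple(sorted(counts.items()))

def abstract_action(action):
    # Keep only the action schema's name, dropping its object arguments.
    return action[0]

class RelationalQ:
    # Tabular Q-values over (abstract state, abstract action). The paper
    # learns this mapping with a deep network; a table keeps the sketch short.
    def __init__(self, alpha=0.1, gamma=0.95):
        self.q = defaultdict(float)
        self.alpha, self.gamma = alpha, gamma

    def value(self, facts, action):
        return self.q[(abstract_state(facts), abstract_action(action))]

    def update(self, facts, action, reward, next_facts, next_actions):
        key = (abstract_state(facts), abstract_action(action))
        best_next = max((self.value(next_facts, a) for a in next_actions),
                        default=0.0)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])

# Transfer across object renaming: a value learned on one problem applies
# unchanged to a structurally identical problem with different object names.
q = RelationalQ()
small = {("on", "a", "b"), ("clear", "a")}
renamed = {("on", "x1", "x2"), ("clear", "x1")}
q.update(small, ("unstack", "a", "b"), 1.0,
         {("clear", "a"), ("clear", "b")}, [("stack", "a", "b")])
print(q.value(renamed, ("unstack", "x1", "x2")))  # same Q-value, new objects

Generalizing across object quantities, not just names, requires richer relational features than predicate counts; that is what the paper's learned abstractions provide.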

Building Human Values into Recommender Systems: An Interdisciplinary Synthesis

26 Sep 2022

The paper catalogues the values that seem most relevant to AI-driven content personalization algorithms.

Discovering User-Interpretable Capabilities of Black-Box Planning Agents

09 Sep 2022

Several approaches have been developed for answering users' specific questions about AI behavior and for assessing an AI agent's core functionality in terms of primitive executable actions.

Discovering User-Interpretable Capabilities of Black-Box Planning Agents

24 Aug 2022

CHAI affiliates Pulkit Verma and Siddharth Srivastava co-wrote this paper with Shashank Marpally.

Four New PhD Students at CHAI

10 Aug 2022

Four new PhD students, Erik, Shreyas, Johannes, and Jakub, will join CHAI at UC Berkeley this upcoming fall semester. Erik, Shreyas, and Johannes will be advised by our faculty director, Stuart Russell. Jakub will be co-advised by Stuart Russell and Sergey Levine.

RL as a Model of Agency Workshop at RLDM 2022

28 Jul 2022

Brian Christian gave a talk titled “Normalization and Value” at the RL as a Model of Agency Workshop.

Simons Institute AI and Humanity Workshop at UC Berkeley

16 Jul 2022

As part of the Simons Institute's Summer Cluster, the AI and Humanity workshop took place at UC Berkeley from July 13 to July 15.

Sixth Annual CHAI Workshop

08 Jun 2022

On the first weekend in June, CHAI held its sixth annual workshop in person at the Asilomar Conference Grounds in Pacific Grove, CA.

CHAI Papers Published

04 Jun 2022

Here are some of the papers recently published by CHAI students and affiliates:
