CHAI's mission is to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.
These videos introduce some of the problems that we work on.

Donate to CHAI here.

Recent News

Mark Nitzberg Publishes WIRED Article Advocating for an FDA for Algorithms

CHAI’s Executive Director Mark Nitzberg, along with Olaf Groth, published an article in WIRED Magazine that advocates for the creation of an “FDA for algorithms.”

Rohin Publishes 'Learning Biases and Rewards Simultaneously'

Rohin Shah published a short summary of the CHAI paper “On the Feasibility of Learning, Rather than Assuming, Human Biases for Reward Inference” on the Alignment Forum, along with some discussion of its implications.

CHAI Releases Imitation Learning Library

Steven Wang, Adam Gleave, and Sam Toyer put together an extensible and benchmarked implementation of imitation learning algorithms commonly used at CHAI (notably GAIL and AIRL) for public use. You can visit the GitHub repository here.
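For readers curious what using the library looks like, here is a minimal sketch of training GAIL with it on CartPole. The module paths and constructor arguments below are assumptions based on a recent release of the imitation package (together with Stable-Baselines3 and Gymnasium) and may differ between versions; consult the repository's documentation for the exact API.

```python
# Hedged sketch: GAIL via the imitation library. Module paths and argument
# names are assumptions from a recent release and may not match every version.
import numpy as np
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv

from imitation.algorithms.adversarial.gail import GAIL
from imitation.data import rollout
from imitation.data.wrappers import RolloutInfoWrapper
from imitation.rewards.reward_nets import BasicRewardNet
from imitation.util.networks import RunningNorm

rng = np.random.default_rng(0)
venv = DummyVecEnv(
    [lambda: RolloutInfoWrapper(gym.make("CartPole-v1")) for _ in range(4)]
)

# Stand-in "expert": a quickly trained PPO policy whose rollouts serve as
# demonstrations. In practice you would use real expert trajectories.
expert = PPO("MlpPolicy", venv, verbose=0).learn(50_000)
demos = rollout.rollout(
    expert,
    venv,
    rollout.make_sample_until(min_timesteps=None, min_episodes=50),
    rng=rng,
)

# GAIL trains a discriminator (the reward net) to distinguish expert
# transitions from the learner's, while the learner is rewarded for fooling it.
learner = PPO("MlpPolicy", venv, verbose=0)
reward_net = BasicRewardNet(
    venv.observation_space, venv.action_space, normalize_input_layer=RunningNorm
)
trainer = GAIL(
    demonstrations=demos,
    demo_batch_size=256,
    venv=venv,
    gen_algo=learner,
    reward_net=reward_net,
)
trainer.train(total_timesteps=100_000)
```

AIRL can be swapped in along the same lines (imitation.algorithms.adversarial.airl.AIRL); it additionally aims to recover a transferable reward function rather than only matching the expert's behavior.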

CHAI Paper Featured in New Scientist Article

A recent New Scientist article features a paper that Tom Griffiths and Stuart Russell wrote along with David D. Bourgin, Joshua C. Peterson, and Daniel Reichman. The article discusses how the researchers built a machine learning model that takes into account human biases, such as risk aversion, which are usually hard for computer systems to model.

Subscribe to our mailing list

Our mailing list has been inactive for a while as we've been focusing on research, but please subscribe if you'd like to get updates when we resume it!