What Can We Learn About AI in the 2024 Election?

23 May 2024

Are social media feed algorithms dividing us politically? Could we use AI to bring us together instead? We’re going to find out.

Progress to next algorithm: $5,000 out of $60,000

Algorithms we can currently test: 3 out of 8

Amid election-year concerns about deepfakes and disinformation, CHAI is doing something unique: testing new AI algorithms for social media that aim to reduce polarization and increase well-being and news knowledge. We ran an international competition that produced eight AI algorithms to test before the 2024 election.

Today’s recommender algorithms optimize for clicks. We know this tends to amplify the most outrageous content, polarizing and radicalizing us. We think we can reprogram AI to do better. Teams from all over the world competed by creating prototypes, and our interdisciplinary panel of scientists then selected eight algorithms to test. They use LLMs to do things like detect credible news, downrank toxicity, uprank nuance, and select content that crosses political divides, an important idea called bridging-based ranking.
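To make the idea concrete, here is a minimal sketch of score-based reranking. This is illustrative only, not any competition entry's actual code: it assumes each post already carries model-estimated scores in [0, 1] for toxicity, nuance, and bridging appeal, and the weights are hypothetical.

```python
def rerank(posts, w_toxicity=1.0, w_nuance=0.5, w_bridging=1.0):
    """Sort posts by a composite score: uprank nuanced and bridging
    content, downrank toxic content. Weights are illustrative."""
    def score(post):
        return (w_bridging * post["bridging"]
                + w_nuance * post["nuance"]
                - w_toxicity * post["toxicity"])
    return sorted(posts, key=score, reverse=True)

# Hypothetical feed items with model-estimated scores in [0, 1].
feed = [
    {"id": "a", "toxicity": 0.9, "nuance": 0.1, "bridging": 0.1},
    {"id": "b", "toxicity": 0.1, "nuance": 0.7, "bridging": 0.8},
    {"id": "c", "toxicity": 0.3, "nuance": 0.4, "bridging": 0.5},
]
ranked = rerank(feed)
print([p["id"] for p in ranked])  # ['b', 'c', 'a']
```

In a bridging-based ranking, the key term is the bridging score: content that resonates across political divides rises even when it would not win a pure engagement contest.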

This is not a theoretical study: the code is already written and waiting to be tested. We will recruit up to 15,000 people for a large randomized controlled trial, testing as many algorithms as we can on real platforms using a custom browser extension that modifies what consenting participants see on Facebook, X, and Reddit. This will be the largest real-world test of the effects of social media algorithms on polarization ever conducted.

Currently we have enough funding to test three of these eight algorithms. We’ve already built all the software, which means that every additional dollar we raise goes directly to science. It costs about $35 to recruit and pay a single participant for our five-month study, and we need about 1,500 people per algorithm to get enough statistical power. With server and other costs, it comes to about $60,000 to test one of these algorithms.
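The per-algorithm figure follows from the numbers above; this back-of-the-envelope check uses an assumed ~$7,500 for server and other costs, a figure not stated in the text.

```python
cost_per_participant = 35          # USD to recruit and pay one participant
participants_per_algorithm = 1500  # needed for adequate statistical power

participant_cost = cost_per_participant * participants_per_algorithm
print(participant_cost)            # 52500

overhead = 7500                    # hypothetical server/other costs
total = participant_cost + overhead
print(total)                       # 60000, roughly the quoted per-algorithm cost
```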

We think that’s a steal to test the potential of AI to help us cooperate despite our differences, during one of the most polarizing events of the decade: the 2024 election. For many more details, see the project FAQ.

Please help us realize the potential of this historic moment to do science. Contact jonathanstray@berkeley.edu.