Stuart Russell spoke at the Governance Of and By Digital Technology conference on November 18, hosted by the EPFL International Risk Governance Center (IRGC) and the Trigger Project.
The conference brought together researchers and policymakers to discuss the regulation of technology, increased institutional reliance on technology, and the danger that algorithms will reduce or remove societies’ decision-making power.
Prof. Russell gave a keynote and participated in a roundtable discussion on the challenges of regulating machine learning. In his keynote, titled “Governing AI: A few suggestions,” he urged action to establish basic rights to mental security to counter the purposeful and destructive misuse of AI. Prof. Russell summarized potential downsides of AI systems, including racial bias, job losses, increased surveillance, lethal autonomous weapons, risk of genocide, and digital impersonation. He focused on the last of these to illustrate a dehumanizing, “inevitable consequence” of such systems: disinformation manipulates users into becoming more predictable, in effect modifying the human brain to maximize reward.
He concluded that institutions should consider banning reinforcement learning systems from interacting with humans, especially without users’ informed consent. Instead, AI could advocate for users, negotiating with corporations on their behalf for greater privacy. Prof. Russell envisioned an aligned AI free of the fixed, incorrect objectives that are the proximate cause of loss of control, closing with a quotation from Alan Turing: “It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers … At some stage therefore, we should have to expect the machines to take control.”