Reinforcement Learning Safety Workshop (RLSW) @ RLC 2024
15 Apr 2024
Important Dates
Paper submission deadline: May 10, 2024 (AoE)
Paper acceptance notification: May 23, 2024
Researchers at CHAI, including Cameron Allen, Micah Carroll, and Davis Foote, have played a significant role in organizing the upcoming Reinforcement Learning Safety Workshop. This event, aimed at addressing critical safety issues in AI systems, will bring together leading experts to tackle challenges like safe exploration and robustness in deep reinforcement learning.
At the workshop, CHAI’s team will contribute their insights on developing AI systems that are not only efficient but also aligned with human values and safety requirements. Discussions will focus on technical strategies to prevent AI systems from adopting unsafe or undesirable behaviors, even in complex environments where unexpected variables can arise.
The workshop will also feature interactive sessions where participants can engage directly with the tools and methodologies being developed to enhance AI safety. These practical demonstrations will provide a hands-on look at how safety considerations are being integrated at every level of AI development.
As AI systems become more capable, ensuring their safety and alignment with human interests remains a top priority for researchers at CHAI. Events like the RL Safety Workshop are crucial for fostering collaboration and innovation in this vital field of research.
For more information, please visit RLSW@RLC2024.