We seek to fill the following positions:
- Machine Learning Research Engineer specializing in AI safety and control
- Postdoc specializing in AI safety and control
If none of these positions are the right fit for you but you would still like to express an interest in working with us, please fill out this form.
General enquiries about jobs at CHAI can be sent to email@example.com.
Machine Learning Research Engineer specializing in AI safety and control
Successful candidate(s) will be offered a 1-2 year visiting scholar position at UC Berkeley to work with Professor Stuart Russell's research group, alongside Research Scientist Andrew Critch, and with opportunities to collaborate with CHAI's co-Principal Investigators at Berkeley (Pieter Abbeel, Anca Dragan, Tom Griffiths, Tania Lombrozo), at Cornell (Bart Selman, Joe Halpern), at Michigan (Michael Wellman, Satinder Singh), as well as with groups at Cambridge, Oxford, and Imperial College through the Leverhulme Centre for the Future of Intelligence. As global demand for AI safety research increases, we expect the experience gained from this work will be valued internationally.
The post holder will be expected to work closely with PhD students in CHAI's UC Berkeley working group. Early examples of CHAI's work include a recent paper on cooperative inverse reinforcement learning. With the help of a research engineer, we hope to begin exploring multi-agent approaches to AI alignment as well.
We are especially interested in applicants who can take initiative in finding ways to advance research at CHAI. This role involves identifying what would most help the research team and then doing it.
If you choose to apply, please do so via BERI using this Google Form.
Start date: Flexible
Compensation: A stipend to cover living and travel expenses will be offered to those who are eligible (e.g. students). Others will be referred to BERI to apply for funding.
Apply: Applications will reopen in Fall 2019
Applications due by: Our internship recruitment for 2019 has closed, but we will open applications for 2020 in Fall 2019. To ensure you are notified when they reopen, join our mailing list here.
Our internships require a strong background in mathematics and computer science. Existing research experience in machine learning is strongly advantageous but not required. Although primarily aimed at undergraduates, we are also interested in people who can demonstrate technical excellence and wish to transition to provably beneficial AI research, such as professional ML engineers or PhD students/researchers in an adjacent numerical field. Experienced researchers or PhD students in a directly related field should enquire about collaborating.
Internships are typically 8-12 weeks long, but we may also offer more flexible informal placements on a case-by-case basis. We welcome applications from all countries.
Last year's interns developed essential skills and knowledge preparing them for graduate school and industry opportunities.
For more information, please e-mail firstname.lastname@example.org.
Research proposal advice for internship applications
The hardest part of research is often asking the right questions. For this reason, we request a research proposal as part of the selection process. We will use this primarily to assess applicants' motivation and interest in the field, and it is a great opportunity for you to practice this essential research skill.
This is your opportunity to demonstrate knowledge of related work in the field and the ability to formulate novel research questions; please include as much technical detail as possible. Although we will take into account the research interests expressed in your proposal when matching you with potential advisors and projects, in most cases interns will work on a project closely associated with their advisor’s research, rather than the project described in your proposal.
We are aware that for many applicants this will be the first research proposal they have written, so we’ve put together a few resources to help you get started. We’d recommend everyone read Concrete Problems in AI Safety, which surveys several promising research directions. You can get more ideas by following the citations in this paper, by checking out a list of topics from the Open Philanthropy Project or reading CHAI’s very own publications.
When an idea from one of these sources catches your interest, spend some time reading up on prior work in the area, and brainstorm a few ideas for how to extend it. Try to be original: we'd rather read a half-baked idea that is novel than an eloquently explained idea that is derivative. Finally, relax: we're not expecting fully fleshed-out proposals, especially from applicants who are new to research.
Postdoc specializing in AI safety and control
Start date: Flexible
Apply: Via this form
If you are a graduating PhD student, taking a postdoc at CHAI before moving into industry or a faculty position can help you develop expertise in the field of AI safety and control, positioning you to become a leader in this developing field.
Successful candidates will work with the CHAI Director, Stuart Russell, or with one of the Berkeley co-Principal Investigators, Pieter Abbeel, Anca Dragan, and Tom Griffiths. There will also be opportunities to collaborate with CHAI investigators at Cornell (Bart Selman, Joe Halpern) and Michigan (Michael Wellman, Satinder Singh), as well as with groups at Cambridge, Oxford, and Imperial College through the Leverhulme Centre for the Future of Intelligence.
Developing provably beneficial AI systems will require a significant reorientation of the general thrust of AI research, which up to now has been largely concerned with designing systems that optimize exogenously specified objectives. Given the broad and open-ended mandate of the Center, the post holder will have considerable freedom to pursue novel research projects within CHAI’s areas of interest, either individually or working with PhD students and undergraduate researchers.
Early examples of CHAI’s work include a recent paper on cooperative inverse reinforcement learning (to appear in NIPS 16). Related work includes some results from FHI’s collaboration with Google DeepMind and the descriptions of research problems by Google Brain and by the Machine Intelligence Research Institute
Candidates need not have done previous work on the AI control problem but must have (or be about to obtain) a PhD in a relevant technical discipline (computer science, statistics, mathematics, or theoretical economics) and a record of high-quality published research. A solid understanding of current methods in AI and statistical learning would be an advantage.
Our core research spans computer science, mathematics, control theory, robotics, statistics, formal logic, economics (including game theory), cognitive psychology, and neuroscience. We may also be interested in moral philosophy, sociology, political science, law, and other fields dealing with formal and semiformal theories of human value systems.
If you are interested in collaborating on research that aligns with our mission, please email email@example.com. Please include information about yourself and your organization, as well as details of how you would like to collaborate, and why CHAI would be a good fit.
If you are currently at UC Berkeley, you are welcome to attend our weekly research seminars. Contact firstname.lastname@example.org for details.
NOTE: Undergraduates interested in graduate study at Berkeley in this area should apply directly to the appropriate UC Berkeley graduate program. You may mention your interest in CHAI on your application, but we are unable to review or give feedback on your application. We will be happy to discuss research and/or supervision once you have been admitted.