How many people will attend the camp? There will be 20 participants and 6 staff members.
Will there be any spare time? There will be periods of leisure and rest during the camp. However, the course is intensive and full-time, so don’t plan to do anything else during the camp.
What language will the camp be in? All courses, instruction, and resources will be in English.
What do you mean by AI Safety? By “AI Safety” we mean ensuring that AI doesn’t lead to negative outcomes for sentient beings or the premature disempowerment of humanity. A recent open letter signed by many deep learning pioneers states that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Avoiding these bad outcomes is a challenge that has to be tackled at a societal level. In this camp, we will focus on technical approaches to building safer AI systems, for example by making their internal processes more interpretable.
I am not sure my level of technical knowledge is sufficient. Please see the prerequisites section above for the level of technical knowledge we are looking for. If you have particularly strong mathematics skills, we would accept less programming experience, and vice versa. If you are still unsure, feel free to contact us. Additionally, before the camp begins we will provide some preparation work.
How much do I need to know about AI Safety to apply? We expect participants to have a solid grasp of why and how an AI could become existentially unsafe for sentient beings, and a general overview of proposed solutions. We don’t expect an advanced level of knowledge of theoretical AI Safety topics, though you will get more value out of the event if you are more familiar with AI Safety beforehand. We will provide some reading before the camp for those less familiar.
What might an ideal candidate look like? We are particularly interested in candidates whom we can support in planning concrete actions towards reducing AI risks. Examples of promising candidates include:
- You are an undergraduate in a technical subject with an active GitHub account, and you would consider setting up an AI Safety Reading Group at your university.
- You are early in your career or are a master’s student in a technical field, and you are interested in exploring a future career in alignment to reduce risk from advanced AI.
- You are a professional in software engineering or data science looking to alter the trajectory of your career towards work on AI Safety. You would be happy contributing engineering talent to open-source tooling or helping found a new project.
- You have prior machine learning experience and are keen to apply your skills to reduce risk from AI, for example by changing jobs, planning your career accordingly, or joining early-stage projects.