We are replacing technical coding sessions with workshops focused on:
Governance
- Various threat models and an overview of what is happening in AI governance
- Industry-focused governance: how AI labs can contribute to AI safety
- Current legislative action: ongoing policy efforts and proposals (with a focus on the EU AI Act's Code of Practice)
- Compute governance: one way to monitor and enforce AI governance regimes
- International governance: coordinating between countries
Strategy
- Develop scenario planning for AI deployment outcomes
- Analyze real-world AI governance developments and their implications
- Understand practical policy levers and intervention points
Communication
- Learn to articulate AI risks clearly and adapt messages for diverse audiences
- Public speaking training
Post-camp planning
- One-on-one career planning and mentorship
- Learn pathways to contribute meaningfully post-camp
The bootcamp is free. There is no fee for room, board, or tuition.
We ask participants to cover their own travel costs; however, if this would prevent you from attending, you will be able to apply for travel support.
This bootcamp is aimed at people currently based in Europe. There will be more camps running in 2025 - please sign up on our website to be notified when these are confirmed and when applications open.
You do not need any technical knowledge to participate in this bootcamp.
We are looking for people who have some knowledge of AI safety and AI risks who want to contribute to AI safety in a substantial way, either full-time or as a side project in their spare time.
We are looking for participants from a variety of backgrounds, from technical people looking to go into governance, to those with a background in communication, law, policy or entrepreneurship. This variety also extends to career stage; we are most excited about people who are ready to actively contribute to AI safety, be that someone who has just finished their masters or PhD, someone with decades of work experience, or someone early in their career.
AI is going to impact all parts of society and will require expertise from all fields, so there is no single profile we are looking for.
Will there be any spare time? There will be periods of leisure and rest during the camp. However, the course is intensive and full-time: don't plan to do anything else during the camp, and we recommend giving yourself a day off afterwards before returning to full-time work!
What language will the camp be in? All courses, instruction, resources, and conversations will be in English.
What do you mean by AI Safety? By “AI Safety” we mean ensuring that AI doesn’t lead to negative outcomes for sentient beings or the premature disempowerment of humanity. A recent open letter signed by many deep learning pioneers states that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Avoiding these bad outcomes is a challenge that has to be tackled on a societal level. In this camp, we will focus on governance, strategy, and communication approaches to reducing these risks.
How much do I need to know about AI Safety to apply? We expect participants to have a solid grasp of why and how AI could become existentially unsafe for sentient beings, and a general overview of proposed solutions. We don't expect an advanced level of knowledge of theoretical AI Safety topics, though the more familiar you are with AI Safety beforehand, the more value you will get out of the event. We will provide some pre-camp reading for those less familiar.
What might an ideal candidate look like? We are particularly interested in people we can support in planning concrete steps towards reducing AI risks.