EffiSciences is running a special edition of the ML4Good bootcamp in France focused on European AI governance. We are looking for applications from people currently living in France, Italy, Spain, Poland, or Sweden.
ML4Good is a bootcamp that aims to provide advanced training in deep learning to those who want to work towards making AI safe and beneficial to humanity.
This camp will fast-track your deep learning skills, inform you about AI safety research, allow you to explore conceptual challenges, and connect you with like-minded individuals for potential friendship and collaboration.
In the context of the EU AI Act, developing technical standards to operationalize the Act's requirements is a crucial step. We're counting on ML4Good participants to contribute to this task, using the technical expertise they acquire during the camp.
We will do our best to make this edition gender-balanced.
How will the days be spent?
AI Safety literature review and projects on selected topics.
The venue has 10 rooms with 2-4 beds, and we will do our best to accommodate your room preferences.
We welcome applications from those who fit a majority of the following criteria:
You are between 18 and 30 years old and currently residing in France, Italy, Spain, Poland, or Sweden. While you do not need to be a native of these countries, current residency is a prerequisite for eligibility. Those who do not meet these residency requirements are encouraged to apply to the UK ML4Good program instead*.
You are motivated to work on addressing the societal risks posed by advanced AI systems - ideally, motivated enough to consider making significant career decisions such as transitioning to technical alignment work, setting up a university AI safety group, or founding a project
You have a programming background and are comfortable with at least one year of university-level applied mathematics
You have a high level of proficiency in English
You can commit to completing our prerequisite material before the bootcamp (we will send this to you upon acceptance)
We will do our best to make the cohort gender-balanced. Historically, a quarter of participants have been women; this time we aim to reach a third.
* The French bootcamp operates under sponsorship from the Erasmus+ program, which imposes certain unique constraints on eligibility not present for the UK bootcamp. The latter receives funding from Open Philanthropy, allowing for different participation requirements.
How many people will attend the camp? There will be 19 participants and 5 staff members.
Will there be any spare time? There will be periods of leisure and rest during the camp. However, the course is intensive and full-time, so don’t plan to do anything else during the camp.
What language will the camp be in? All courses, instruction, and resources will be in English.
What do you mean by AI Safety? By “AI Safety” we mean ensuring that AI doesn’t lead to negative outcomes for sentient beings or the premature disempowerment of humanity. A recent open letter signed by many deep learning pioneers states that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Avoiding these outcomes is a challenge that has to be tackled at a societal level. In this camp, we will focus on technical approaches to building safer AI systems, for example by making their internal processes more interpretable.
I am not sure my level of technical knowledge is sufficient. Please see the prerequisites above for the level of technical knowledge we are looking for. If you have particularly strong mathematics skills, we would accept less programming experience, and vice versa. If you are still unsure, feel free to contact us. Additionally, before the camp begins we will provide some preparation work.
How much do I need to know about AI Safety to apply? We expect participants to have a solid grasp of why and how an AI could become existentially unsafe for sentient beings and a general overview of proposed solutions. When it comes to theoretical AI Safety topics, we don’t expect an advanced level of knowledge. You will get more value out of the event if you have more familiarity with AI Safety beforehand. We will provide some reading before the camp for those less familiar.
What might an ideal candidate look like? We are particularly interested in people we can support in planning concrete actions towards reducing AI risks. Examples of promising candidates include: