EffiSciences is running a special edition of the ML4Good bootcamp in France focused on European AI governance. We are looking for applications from people currently living in France, Italy, Spain, Poland and Sweden.
ML4Good is a bootcamp that aims to provide advanced training in deep learning to those who want to work towards making AI safe and beneficial to humanity.
This camp will fast-track your deep learning skills, inform you about AI safety research, allow you to explore conceptual challenges, and connect you with like-minded individuals for potential friendship and collaboration.
The development of technical standards to operationalize the requirements of the EU AI Act is a crucial next step. We are counting on ML4Good participants to contribute to this task, using the technical expertise they acquire during the camp.
We will do our best to make this edition gender-balanced.
How will the days be spent?
The venue has 10 rooms with 2-4 beds and we will do our best to accommodate you in terms of your preferences.
AI Safety literature review and projects on topics such as:
We welcome applications from those who fit a majority of the following criteria:
You are between 18 and 30 years old and currently reside in France, Italy, Spain, Poland, or Sweden. You do not need to be a citizen of one of these countries, but current residency is a prerequisite for eligibility. Those who do not meet these residency requirements are encouraged to consider applying to the UK ML4Good program*.
You are motivated to work on addressing the societal risks posed by advanced AI systems - ideally, motivated enough to consider making significant career decisions such as transitioning to technical alignment work, setting up a university AI safety group, or founding a project
You have a programming background and are comfortable with at least one year of applied mathematics at university level
You have a high level of proficiency in English
You can commit to completing our prerequisite material before the bootcamp (we will send this to you upon acceptance)
We will do our best to make this cohort gender-balanced. Historically, a quarter of participants have been women; this time we aim for a third.
* The French bootcamp operates under sponsorship from the Erasmus+ program, which imposes certain unique constraints on eligibility not present for the UK bootcamp. The latter receives funding from Open Philanthropy, allowing for different participation requirements.
How many people will attend the camp? There will be 19 participants and 5 staff members.
Will there be any spare time? There will be periods of leisure and rest during the camp. However, the course is intensive and full-time - don’t plan to do anything else during the camp.
What language will the camp be in? All courses, instruction, and resources will be in English.
What do you mean by AI Safety? By “AI Safety” we mean ensuring that AI doesn’t lead to negative outcomes for sentient beings or the premature disempowerment of humanity. In a recent open letter signed by many deep learning pioneers, it is stated that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Avoiding these bad outcomes is a challenge that has to be tackled on a societal level. In this camp, we will focus on technical approaches to building safer AI systems, for example by making their internal processes more interpretable.
I am not sure my level of technical knowledge is sufficient. Please see the prerequisite section above to see what level of technical knowledge we are looking for. If you have particularly strong mathematics skills then we would accept less programming experience and vice-versa. If you are still unsure, feel free to contact us. Additionally, before the camp begins we will provide some preparation work.
How much do I need to know about AI Safety to apply? We expect participants to have a solid grasp of why and how an AI could become existentially unsafe for sentient beings and a general overview of proposed solutions. When it comes to theoretical AI Safety topics, we don’t expect an advanced level of knowledge. You will get more value out of the event if you have more familiarity with AI Safety beforehand. We will provide some reading before the camp for those less familiar.
What might an ideal candidate look like? We have particular interest in those who we can support in planning concrete actions towards working on the reduction of AI risks. Examples of promising candidates include:
The feedback from recent camps has been excellent.
For example, in response to the question "How likely would you recommend the bootcamp to a friend with similar interests as you?", the average score was 9.6/10.
Here is the full, unfiltered list of responses to the question "What is your overall impression of the camp?", asked of participants at the end of the last camp:
My general impression was pretty good. I think it was quite good and hard to improve.
——
best fun I've had in years, learnt loads,
——
A great atmosphere with great discussion and interactions!
——
Incredible. It is probably the best professional activity I have ever signed up for. Great vibes as well. Everything feels kind of improvised and I like that. I like that the group is small and I'm getting a good vibe from the group.
——
great vibes, probably much more positive impact
——
Very nice vibes, very tight & challenging program
——
All nice, very good
——
Great vibe, a bit chaos in organization, but that added to the charm. Extremely high impact, I feel like AIS is the way to go for me!!!
——
I think it was even more useful than CFAR mainline workshop in Prague (but maybe my opinion is skewed because I'm currently a little hyped up)
——
Very well prepared, exciting and helpful. Vibes really good, maybe a bit toooo fun even, but probably pretty close to optimum. I think it's much more likely now I'll end up on an AI Safety career trajectory, and the bootcamp contributed a lot to that.
——
Technical exercises were difficult, but probably hard to avoid, you surely can't easily bring all participants up to the proper preparation level ahead of time without significant overhead.
——
Can't compare to other projects, but it was great!
——
very good, a bit weird/funny the heterogeneous constellation of people in terms of level, nice/friendly vibes, love the worky atmosphere and focus
——
I thought it was great! I would say the impact was good. The vibes were also good! You did a great job! Excellent! Great vibes, well organized, high commitment, the whole structure was beneficial for making people bond and get passionate about the subject.
——
Great vibes evolved throughout the days.
——
I liked how it gave me the opportunity to learn AI safety and ML in an intensive and accelerated way. I also thought that the conversations with the other participants were really valuable in the same way that 1-1s in EA Global are valuable.
——
I think it is a great opportunity to learn skills, engage with the AI Safety community and build a network. It is also very motivating.
——
I think compared to working independently for 10 days, ML4Good has much higher value for most people, especially if they do not exactly know what topic they want to work on.I think it is less impactful than an EAG for career planning and networking, if you count the EAG as 4 days of career planning, swap card and filling out the application and around 4-5 days for the EAG including travel.
——
I liked the vibes a lot at ML4Good, it was just a lot more personal than at an EAG. Much more of a group experience.
——
Lovely people and good vibes, learned a lot but could have learned more by starting from basics