COHORT SIZE
24
DATES
Jun 18 - Jun 28, 2024
DURATION
10 days
LOCATION
France, near Paris
APPLICATION DEADLINE
February 1, 2024

Course Description

EffiSciences is running a special edition of the ML4Good bootcamp in France focused on European AI governance. We are looking for applications from people currently living in France, Italy, Spain, Poland and Sweden.

ML4Good is a bootcamp that aims to provide advanced training in deep learning to those who want to work towards making AI safe and beneficial to humanity.

This camp will fast-track your deep learning skills, inform you about AI safety research, allow you to explore conceptual challenges, and connect you with like-minded individuals for potential friendship and collaboration.

In the context of the EU AI Act, developing the technical standards that operationalize the Act's requirements is a crucial step. We're counting on ML4Good participants to help with this task, using the technical expertise they acquire during the camp.

We will do our best to make this edition gender-balanced.

Activities

How will the days be spent? 

  • Peer-coding sessions following a technical curriculum with mentors.
  • Presentations by experts in the field.
  • Review and discussion of AI Safety literature.
  • Personal career advice and mentorship.
  • Evening activities - and time to rest!

Logistics

  • The bootcamp is free. There is no fee for room, board, or tuition.
  • This bootcamp is aimed at people currently based in France, Italy, Spain, Poland and Sweden. There will be more camps running in 2024 - please sign up on our website to be notified when these are confirmed and when applications open.
  • We ask participants to pay for their own travel costs - however, if this would prevent you from attending, you will have the option to apply for travel support.

The venue has 10 rooms with 2-4 beds each, and we will do our best to accommodate your room preferences.

Curriculum

First part of the camp (7 days)

  • Implement ResNet from scratch in PyTorch, building all the layers yourself and loading weights from a trained model.
  • Implement interpretability techniques on the ResNet.
  • Implement SGD and other local optimization algorithms, and run remote hyper-parameter searches on a simple architecture.
  • Implement a simple clone of part of PyTorch, with particular focus on back-propagation (see the sketch after this list).
  • (Optional) CUDA programming day: write various CUDA kernels and see how close you can get to the performance of PyTorch’s kernels.
  • Implement GPT-2 from scratch, implement beam search.
  • Fine-tune BERT on classification, fine-tune GPT-2 on some specific corpus.
  • Look at various interpretability techniques on GPT-2.
  • Data-parallel training.
  • Conceptual lectures and discussions.
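
To give a flavour of the exercises, here is a minimal sketch of the back-propagation item above: a toy Value class that records each operation and replays the chain rule in reverse, in the spirit of the PyTorch-clone exercise. This is an illustrative sketch only - the class and method names are our own, not actual curriculum code.

    # A toy reverse-mode autodiff node (illustrative, not camp material).
    # Supports + and * only.
    class Value:
        def __init__(self, data, parents=()):
            self.data = data
            self.grad = 0.0
            self._parents = parents   # nodes this value was computed from
            self._grad_fn = None      # propagates self.grad to the parents

        def __add__(self, other):
            out = Value(self.data + other.data, (self, other))
            def grad_fn():
                self.grad += out.grad    # d(a+b)/da = 1
                other.grad += out.grad   # d(a+b)/db = 1
            out._grad_fn = grad_fn
            return out

        def __mul__(self, other):
            out = Value(self.data * other.data, (self, other))
            def grad_fn():
                self.grad += other.data * out.grad   # d(a*b)/da = b
                other.grad += self.data * out.grad   # d(a*b)/db = a
            out._grad_fn = grad_fn
            return out

        def backward(self):
            # Topologically sort the graph, then apply the chain rule in reverse.
            order, seen = [], set()
            def visit(v):
                if v not in seen:
                    seen.add(v)
                    for p in v._parents:
                        visit(p)
                    order.append(v)
            visit(self)
            self.grad = 1.0
            for v in reversed(order):
                if v._grad_fn is not None:
                    v._grad_fn()

    # Gradients of f(a, b) = a * b + a:
    a, b = Value(2.0), Value(3.0)
    f = a * b + a
    f.backward()
    print(a.grad, b.grad)  # 4.0 (= b + 1) and 2.0 (= a)

The key design idea is that every arithmetic operation records a closure that knows how to pass gradients back to its inputs; back-propagation is then just a reverse topological walk over those closures.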

Second part of the camp (3 days)

AI Safety literature review and projects on topics such as:

  • Interpretability of language models.
  • Adversarial robustness of neural networks.
  • Mathematical frameworks for artificial agents’ behaviours.
  • Conceptual research on AI Alignment.
  • AI Governance: the semiconductor supply chain.

Eligibility

We welcome applications from those who fit a majority of the following criteria:

You are between 18 and 30 years old and residing in France, Italy, Spain, Poland, or Sweden. While you do not need to be a native of these countries, current residency is a prerequisite for eligibility. Those who do not meet these residency requirements are encouraged to consider applying to the UK ML4Good program*.

You are motivated to work on addressing the societal risks posed by advanced AI systems - ideally, motivated enough to consider making significant career decisions such as transitioning to technical alignment work, setting up a university AI safety group, or founding a project.

You have a programming background and are comfortable with at least one year of university-level applied mathematics.

You have a high level of proficiency in English.

You can commit to completing our prerequisite material before the bootcamp (we will send this to you upon acceptance):

  • We expect this material to take 10 hours.
  • This will include AI Safety conceptual readings and may include programming or mathematics preparation depending on your strengths.

We will do our best to make the cohort gender-balanced. Historically, 1/4 of participants have been women; this time we will try to reach 1/3.

* The French bootcamp operates under sponsorship from the Erasmus+ program, which imposes certain unique constraints on eligibility not present for the UK bootcamp. The latter receives funding from Open Philanthropy, allowing for different participation requirements.

FAQs

How many people will attend the camp? There will be 19 participants and 5 staff members.

Will there be any spare time? There will be periods of leisure and rest during the camp. However, the course is intensive and full-time - don’t plan to do anything else during the camp.

What language will the camp be in? All courses, instruction, and resources will be in English.

What do you mean by AI Safety? By “AI Safety” we mean ensuring that AI doesn’t lead to negative outcomes for sentient beings or the premature disempowerment of humanity. In a recent open letter signed by many deep learning pioneers, it is stated that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Avoiding these bad outcomes is a challenge that has to be tackled on a societal level. In this camp, we will focus on technical approaches to building safer AI systems, for example by making their internal processes more interpretable.

I am not sure my level of technical knowledge is sufficient. Please see the Eligibility section above for the level of technical knowledge we are looking for. If you have particularly strong mathematics skills, we will accept less programming experience, and vice versa. If you are still unsure, feel free to contact us. Additionally, before the camp begins we will provide some preparation work.

How much do I need to know about AI Safety to apply? We expect participants to have a solid grasp of why and how an AI could become existentially unsafe for sentient beings, and to have a general overview of proposed solutions. When it comes to theoretical AI Safety topics, we don’t expect an advanced level of knowledge. You will get more value out of the event if you are more familiar with AI Safety beforehand. We will provide some reading before the camp for those less familiar.

What might an ideal candidate look like? We are particularly interested in people we can support in planning concrete steps towards reducing AI risks. Examples of promising candidates include:

  • You are an undergraduate in a technical subject with an active GitHub account and you would consider setting up an AI Safety Reading Group at your university.
  • You are early in your career or a master's student in a technical field and you are interested in exploring a future career in alignment to reduce risk from advanced AI.
  • You are a professional in the field of software engineering or data science and are looking for a way to alter the trajectory of your career towards work on AI Safety. You would be happy contributing engineering talent to open source tooling or helping found a new project.
  • You already have machine learning experience and are keen to apply your skills to reducing risk from AI, and plan to act on this by e.g. changing jobs, planning your career around it, or joining early-stage projects.

Still hesitating?

The feedback from recent camps has been excellent.

For example, to the question "How likely would you be to recommend the bootcamp to a friend with similar interests?", the average response was 9.6/10.

Here is the full, unfiltered list of responses to the question "What is your overall impression of the camp?", asked at the end of the last camp:

My general impression was pretty good. I think it was quite good and hard to improve.

——

best fun I've had in years, learnt loads,

——

A great atmosphere with great discussion and interactions!

——

Incredible. It is probably the best professional activity I have ever signed up for. Great vibes as well. Everything feels kind of improvised and I like that. I like that the group is small and I'm getting a good vibe from the group.

——

great vibes, probably much more positive impact

——

Very nice vibes, very tight & challenging program

——

All nice, very good

——

Great vibe, a bit chaos in organization, but that added to the charm. Extremely high impact, I feel like AIS is the way to go for me!!!

——

I think it was even more useful than CFAR mainline workshop in Prague (but maybe my opinion is skewed because I'm currently a little hyped up)

——

Very well prepared, exciting and helpful. Vibes really good, maybe a bit toooo fun even, but probably pretty close to optimum. I think it's much more likely now I'll end up on an AI Safety career trajectory, and the bootcamp contributed a lot to that.

——

Technical exercises were difficult, but probably hard to avoid, you surely can't easily bring all participants up to the proper preparation level ahead of time without significant overhead.

——

  • Very good vibe
  • I had the impression most or all people felt welcome and included
  • Good organization
  • Especially the last two days (career planning, red teaming, interviews) were surprisingly useful to me
  • Overall much more useful/impactful than I thought it would be
  • Also more fun than I thought it would be despite the intense workload (relative to my daily life)

——

Can't compare to other projects, but it was great!

——

very good, a bit weird/funny the heterogeneous constellation of people in terms of level, nice/friendly vibes, love the worky atmosphere and focus

——

I thought it was great! I would say the impact was good. The vibes were also good! You did a great job! Excellent! Great vibes, well organized, high commitment, the whole structure was beneficial for making people bond and get passionate about the subject.

——

Great vibes evolved throughout the days.

——

I liked how it gave me the opportunity to learn AI safety and ML in an intensive and accelerated way. I also thought that the conversations with the other participants were really valuable in the same way that 1-1s in EA Global are valuable.

——

I think it is a great opportunity to learn skills, engage with the AI Safety community and build a network. It is also very motivating.

——

I think compared to working independently for 10 days, ML4Good has much higher value for most people, especially if they do not exactly know what topic they want to work on. I think it is less impactful than an EAG for career planning and networking, if you count the EAG as 4 days of career planning, swap card and filling out the application and around 4-5 days for the EAG including travel.

——

I liked the vibes a lot at ML4Good, it was just a lot more personal than at an EAG. Much more of a group experience.

——

Lovely people and good vibes, learned a lot but could have learned more by starting from basics

Team

Charbel-Raphael Segerie
Teacher, Curriculum Designer
Charbel is the co-head of the AI Safety Unit at EffiSciences. He will be the primary instructor for the coding parts of the event and has been the lead curriculum developer for all past iterations of ML4Good. He teaches technical AI Safety at ENS Paris-Saclay in the Mathematics, Vision and Learning master's program.
Edoardo Pona
Head of AI, Serac Tech
Edoardo does research in interpretability and works with AI Safety Camp.
Bogdan-Ionut Cirstea
Conceptual Teacher
Bogdan is an independent AI safety researcher funded by the Center on Long-Term Risk currently working mostly at the intersection of neuroscience, deep learning and AI alignment. During the camp, he will be the main instructor in charge of the conceptual parts. He completed a Master’s degree in Applied Mathematics at the École normale supérieure and has a PhD in Machine Learning.