Our bootcamps offer a comprehensive overview of the AI safety landscape, technical implementations, governance frameworks, and strategy. We believe exploring diverse perspectives and approaches is crucial before specializing. From mechanistic interpretability to robustness testing, from policy development to strategic research, we want to help you find your most impactful path forward.
The last camp's program included technical content such as:
- Implement SGD and other local optimisation algorithms, and run remote hyper-parameter searches on a simple architecture (see the first sketch after this list)
- Implement and run RLHF
- Explore various interpretability techniques on GPT models and a ResNet
- Implement DQN and A2C, two important reinforcement learning algorithms
- Implement adversarial attacks and defences (see the second sketch after this list)
- Implement an LLM agent
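To give a flavour of the optimisation exercise, here is a minimal sketch of a plain SGD loop fitting a tiny linear model in NumPy. It is an illustrative example only, not the camp's actual exercise material; the toy data and hyper-parameters are made up.

```python
import numpy as np

# Toy data: y = 3x + 1 plus noise (made-up example, not camp material)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(256, 1))
y = 3.0 * X[:, 0] + 1.0 + 0.1 * rng.normal(size=256)

w, b = 0.0, 0.0           # parameters of the linear model
lr, batch_size = 0.1, 32  # hyper-parameters (arbitrary choices)

for epoch in range(50):
    perm = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = perm[start:start + batch_size]
        xb, yb = X[idx, 0], y[idx]
        err = (w * xb + b) - yb
        # Gradients of the mean squared error w.r.t. w and b
        grad_w = 2.0 * np.mean(err * xb)
        grad_b = 2.0 * np.mean(err)
        # Plain SGD step: move parameters against the gradient
        w -= lr * grad_w
        b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}  (target: 3.00, 1.00)")
```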
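Likewise, for the adversarial attacks and defences exercise, below is a sketch of the fast gradient sign method (FGSM), one standard adversarial attack, written in PyTorch. The tiny classifier, input, label, and epsilon are placeholders chosen purely for illustration; the camp's own material may cover different attacks and defences.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Fast Gradient Sign Method: perturb x in the direction that
    increases the loss, bounded by epsilon per pixel."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step along the sign of the input gradient, then clamp to a valid pixel range
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Illustrative usage with a made-up tiny classifier and a random "image"
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # placeholder input
label = torch.tensor([3])      # placeholder true label
x_adv = fgsm_attack(model, x, label)
print((x_adv - x).abs().max())  # perturbation is at most epsilon
```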
Alongside talks, workshops and group discussions on:
- Model evaluations
- AI trends
- Forecasting and timelines
- Risk models, risk scenarios and classifications
- Landscape of solutions
- Corporate governance
- International governance
There is also the opportunity to dive deeper into a topic of your choice during the literature review afternoon and the 2.5-day project at the end of the camp.
Several of our participants have written blog posts about their experience at ML4G.
Have a bootcamp story to share? Let us know and we'll add yours to our growing collection!
Got questions?