mentors & advisors
Mariano Di Pietrantonio
Alejo Salles
@Flashbots⚡, formerly Research Lead @OpenZeppelin
Pablo Roccatagliata
Santiago Palladino
Alberto Viera

Learning materials:
https://aisafetyfundamentals.com/
https://www.matsprogram.org/

#AIImpact
The AI Impact
-
A self-paced, 2-hour course designed for people with no technical background to learn how AI will reshape our world. No application required!
#TAI
Intro to Transformative AI
-
You'll follow a curriculum designed to accelerate your understanding of key AI safety concepts, supported by a trained facilitator in live small group classes.
#Econ
Economics of Transformative AI
-
These courses are designed for economists who want to rapidly develop their understanding of transformative AI and its economic impacts.
#Alig
AI Alignment Course
Despite AI’s potential to radically improve human society, there are still open questions about how we build AI systems that are controllable, aligned with our intentions and interpretable.
You can help develop the field of AI safety by working on answers to these questions.
The AI Alignment course introduces key concepts in AI safety and alignment, and will give you space to engage with, evaluate and debate these ideas.
#Gov
AI Governance Course
By taking this course, you’ll learn about the risks arising from future AI systems, and proposed governance interventions to address them. You’ll consider interactions between AI and biosecurity, cybersecurity and defence capabilities, and the disempowerment of human decision-makers. We’ll also provide an overview of open technical questions such as the control and alignment problems – which posit that AI itself could pose a risk to humanity.
#BSNSS
Oversight & Control
As models develop potentially dangerous behaviors, can we develop and evaluate methods to monitor and regulate AI systems, ensuring they adhere to desired behaviors while minimally undermining their efficiency or performance?
#Evals
Evaluations
Many stories of AI accidents and misuse involve potentially dangerous capabilities, such as sophisticated deception and situational awareness, that have not yet been demonstrated in AI systems. Can we evaluate such capabilities in existing AI systems to form a foundation for policy and further technical work?
#Agency
Agency
As models continue to scale, they become more agentic, and we need methods to study this newfound agency. How do we model optimal agents, how do those agents interact with each other, and how can agents be aligned with one another?
#MECH
Mechanistic Interpretability
Rigorously understanding how ML models function may allow us to identify and train against misalignment. Can we reverse engineer neural nets from their weights, or identify structures corresponding to “goals” or dangerous capabilities within a model and surgically alter them?
AGENDA

June
Boske + Venten
July
AI Alignment Course
Boske + Venten
August
Oversight & Control
Boske + Venten
September
Evaluations
Boske + Venten

Come Join Us
You’ll learn about the foundational arguments. It is difficult to know where to start when trying to learn about AI safety for the first time. The programme will give you the structure and accountability you need to explore the impacts of future AI systems, and give you a rich conceptual map of the field.
Your learning is facilitated by experts. Your facilitator will help you navigate the course content, develop your own views on each topic, and foster constructive debate between you and fellow participants.
You’ll learn alongside others. Your group will be made up of people who are similarly new to AI safety, but who will bring a wealth of different expertise and perspectives to your discussions. Many participants form long-lasting and meaningful connections that support them to take their first steps in the field.
You’ll be supported to take your next steps. This could involve pursuing an end-of-course project, applying for programmes and jobs, or doing further independent study. We maintain relationships with a large network of organisations and will share opportunities with you. Additionally, with your per