
AI Safety
OPEN, FREE & IMPACT-ORIENTED
Communities of Learning for LATAM


Open, Free, Accessible, High-Quality
Communities of Learning
from LATAM

Mentors & Advisors

Mariano Di Pietrantonio

Diego Fernandez Slezak

Director at Applied Artificial Intelligence Laboratory, Universidad de Buenos Aires

Alejo Salles

@Flashbots⚡, formerly Research Lead @OpenZeppelin

Pablo Roccatagliata

Santiago Palladino

Alberto Viera

Speakers

Learning materials:

  • https://aisafetyfundamentals.com/
  • https://www.matsprogram.org/

The AI Impact

  • A self-paced, 2-hour course designed for people with no technical background to learn how AI will reshape our world. No application required!

Intro to Transformative AI

  • You'll follow a curriculum designed to accelerate your understanding of key AI safety concepts, supported by a trained facilitator in live small group classes.

Economics of Transformative AI

  • These courses are designed for economists who want to rapidly develop their understanding of transformative AI and its economic impacts.

AI Alignment Course

Despite AI’s potential to radically improve human society, there are still open questions about how we build AI systems that are controllable, aligned with our intentions and interpretable.
You can help develop the field of AI safety by working on answers to these questions.

The AI Alignment course introduces key concepts in AI safety and alignment, and will give you space to engage with, evaluate and debate these ideas. 

AI Governance Course

By taking this course, you’ll learn about the risks arising from future AI systems, and proposed governance interventions to address them. You’ll consider interactions between AI and biosecurity, cybersecurity and defence capabilities, and the disempowerment of human decision-makers. We’ll also provide an overview of open technical questions such as the control and alignment problems – which posit that AI itself could pose a risk to humanity.

Oversight & Control

As models develop potentially dangerous behaviors, can we develop and evaluate methods to monitor and regulate AI systems, ensuring they adhere to desired behaviors while minimally undermining their efficiency or performance?

Evaluations

Many stories of AI accidents and misuse involve potentially dangerous capabilities, such as sophisticated deception and situational awareness, that have not yet been demonstrated in AI systems. Can we evaluate such capabilities in existing AI systems to form a foundation for policy and further technical work?

Agency

As models continue to scale, they become more agentic, and we need methods to study this newfound agency. How do we model optimal agents, how do those agents interact with each other, and how can some agents be aligned with one another?

Mechanistic Interpretability

Rigorously understanding how ML models function may allow us to identify and train against misalignment. Can we reverse engineer neural nets from their weights, or identify structures corresponding to “goals” or dangerous capabilities within a model and surgically alter them?

Workshops

AGENDA

  • June: Boske + Venten
  • July: AI Alignment Course (Boske + Venten)
  • August: Oversight & Control (Boske + Venten)
  • September: Evaluations (Boske + Venten)

Come Join Us

You’ll learn about the foundational arguments. It is difficult to know where to start when trying to learn about AI safety for the first time. The programme will give you the structure and accountability you need to explore the impacts of future AI systems, and give you a rich conceptual map of the field.

Your learning is facilitated by experts. Your facilitator will help you navigate the course content, develop your own views on each topic, and foster constructive debate between you and fellow participants.

You’ll learn alongside others. Your group will be made up of people who are similarly new to AI safety, but who will bring a wealth of different expertise and perspectives to your discussions. Many participants form long-lasting and meaningful connections that support them in taking their first steps in the field.

You’ll be supported to take your next steps. This could involve pursuing an end-of-course project, applying for programmes and jobs, or doing further independent study. We maintain relationships with a large network of organisations and will share opportunities with you.


© 2021 by BOSKE
