AI Safety Unconference at NeurIPS 2019

The AI Safety Unconference brings together people interested in all aspects of AI safety, from technical AI safety problems to issues of governance and the responsible use of AI, for a day during NeurIPS week. As an unconference, the event aims to maximize valuable interactions between participants through free-form discussions, moderated discussion groups, one-on-ones, and lightning talks.


Schedule

10:00-12:00 - Welcome, one-on-ones, free discussion, posters
12:00-13:00 - Lunch, lightning talks, explanation of afternoon
13:00-13:50 - Moderated discussions 1
14:00-14:50 - Moderated discussions 2
15:00-15:30 - Coffee/tea break and poster session
15:30-16:20 - Moderated discussions 3
16:30-17:20 - Moderated discussions 4
17:30-18:00 - Closure

A vegan lunch will be provided, along with snacks & coffee/tea. This agenda is tentative and we welcome feedback from participants.


Participate

Registration: closed.

Join the email group, and…

  • Start a thread for any topic you are interested in discussing.
  • Indicate if you’re interested in moderating or leading a discussion group.
  • Suggest background readings or share your thoughts on existing topics… What are you curious about? What opinions or perspectives do you want to promote or challenge?
  • Post in the “one-on-one” thread and let other participants know which topics you are excited to discuss individually. For example, last year’s discussion topics included “Establishing trust in advanced AI systems”, “Concrete threat models”, and “Governance, policy, strategy”.

Give a lightning talk: Lightning talks (5-6 minutes each) will take place during the lunch hour.

Background readings: We recommend Victoria Krakovna’s AI safety resources and the Alignment Newsletter as overviews of the field.


Testimonials

  • A great way to meet the best people in the area and propel daring ideas forward. — Stuart Armstrong
  • The event was a great place to meet others with shared research interests. I particularly enjoyed the small discussion groups that exposed me to new perspectives. — Adam Gleave


Organizers

  • David Krueger (Mila)
  • Orpheus Lummis (Effective Altruism Québec)
  • Gretchen Krueger (OpenAI)
  • Richard Mallah (FLI)
  • Joe Collman

If you have any questions or thoughts, reach out to the organizers.


Sponsors

Thanks to the following organisations for their support.

  • Effective Altruism Foundation
  • Survival and Flourishing Fund
  • Future of Life Institute