AI Safety Unconference at NeurIPS 2018

The potential benefits and risks of AI technologies loom large over the future of the world. Creating safe AI, in both the near and long term, is a multifaceted and complex endeavour. By bridging perspectives and coordinating better, we can increase the likelihood of desired outcomes.

The AI Safety Unconference brings together people interested in AI safety for an afternoon of participant-driven, moderated discussions. The focus of the event is to maximize valuable social time, supported by a minimal structure. Discussion groups will self-assemble to cover multiple topics and intersections of the field, from specific technical AI safety problems to longer-term governance issues.


Schedule

12:00-13:00 - Welcome and lightning talks
13:00-13:15 - Explanation of discussion cycles and the Wall of Topics
13:15-15:15 - Discussion cycles
15:15-15:30 - Break and lightning talks
15:30-17:30 - Discussion cycles
17:30-18:00 - Closing

A vegan lunch will be provided, along with snacks and drinks.



Background readings

See Victoria Krakovna’s up-to-date resource list on AI safety.


Based on early signups, here is a sample of the people you will get to meet:

  • Adam Gleave (CHAI, UC Berkeley)
  • Alex Ray (OpenAI)
  • Arushi Jain (Mila, McGill)
  • Daniel Filan (UC Berkeley)
  • Dylan Hadfield-Menell (UC Berkeley, CHAI)
  • Eric Langlois (University of Toronto, Vector Institute)
  • Ethan Perez (New York University)
  • Gillian Hadfield (University of Toronto, OpenAI, Vector Institute, CHAI)
  • Jacob Hilton (OpenAI)
  • Jan Leike (DeepMind)
  • Matthew Rahtz (CHAI)
  • Rob Graham (McGill University)
  • Siddharth Reddy (UC Berkeley)
  • Sören Mindermann (FHI, Vector Institute)
  • Stuart Armstrong (FHI)
  • Susmit Jha (SRI International)
  • Victoria Krakovna (DeepMind)
  • Yarin Gal (University of Oxford)

Lightning talks

  • Orpheus Lummis (Effective Altruism Québec)
  • Vaughn DiMarco (MTLDATA)
  • David Krueger (Mila)

For questions or more information, reach out to the organizers.


Thanks to the following organisations and people for their support:

  • Long-Term Future Fund (part of EA Funds) for funding
  • Future of Life Institute for funding
  • Nexalogy for funding
  • MAIC for guidance
  • Victoria Krakovna (DeepMind) for feedback