The AI Safety Unconference brings together people interested in all aspects of AI safety, from technical AI safety problems to issues of AI governance. As an unconference, it aims to foster valuable social interactions between participants through moderated discussion groups, one-on-ones, lightning talks, and free-form interactions.
Date: Monday November 28 from 9:00 to 16:00, in New Orleans, alongside NeurIPS 2022.
Location: Near the Convention Center - exact location to be communicated directly with registered participants.
Fill out the application form.
The event is private and free, with a maximum of 100 participants.
Join the chat room on Matrix to discuss online before or during the event.
Contribute a facilitated discussion (30 min) or a lightning talk (10 min). We also welcome other ideas for activities; feel free to reach out to the organizers about that.
You are also invited to the ML Safety Social, happening on Tuesday at the Convention Center. Visit that event’s website for further information and to apply.
- 09:00-09:30 - Event opening, breakfast is served
- 09:30-12:00 - Facilitated discussions and 1:1s
- 12:00-13:30 - Lightning talks, lunch is served
- 13:30-15:30 - Facilitated discussions and 1:1s
- 15:30-16:00 - Event closing
Vegan breakfast and lunch are provided, along with all-day drinks and snacks.
The Swapcard app is used for scheduling 1:1 meetings with other participants and registering to facilitated discussions.
We have confirmed participants from the following organizations: Mila, Stanford University, Anthropic, OpenAI, UC Berkeley, University of Toronto, ETH Zurich, Max Planck Institute, University of Cambridge, Vector Institute, NYU, DeepMind, Oxford, MIT, and more.
Testimonials from past events (2018, 2019):
- “A great way to meet the best people in the area and propel daring ideas forward.” — Stuart Armstrong
- “The event was a great place to meet others with shared research interests. I particularly enjoyed the small discussion groups that exposed me to new perspectives.” — Adam Gleave
- Center for AI Safety’s About AI Risk
- Krakovna’s AI safety resources
- The Alignment Newsletter
- Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
The event is also published on the EA Forum and the LessWrong forum.
Please read and respect AISE’s code of conduct.
The event is organized in partnership with the Center for AI Safety.
- Orpheus Lummis
- Mauricio H. Luduena
Thanks to Nisan Stiennon for funding the event.
For any questions or feedback, reach out to firstname.lastname@example.org.