EA - How Educational Courses Help Build Fields: Lessons from AI Safety Fundamentals by Jamie B
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Educational Courses Help Build Fields: Lessons from AI Safety Fundamentals, published by Jamie B on March 25, 2024 on The Effective Altruism Forum.

Cross-posted, as there may be others on this forum interested in educating people about early-stage research fields. I am considering taking on additional course design projects over the next few months. Learn more about hiring me to consult on course design.

Introduction / TL;DR

So many important problems have too few people with the right skills engaging with them.

Sometimes that's because nobody has heard of your problem, and advocacy is necessary to change that. However, once a group of people believe your problem is important, education is the next step: it gives those people an understanding of prior work in your field and knowledge of further opportunities to work on your problem.

Education (to me) is the act of collecting the knowledge that exists about your field into a sensible structure, and transmitting it to others in such a way that they can develop their own understanding.

In this post I describe how the AI Safety Fundamentals course helped to drive the fields of AI alignment and AI governance forward. I'll then draw out some more general lessons for other fields that may benefit from education, which you can skip straight to here.

I don't expect much of what I say to be a surprise to passionate educators, but when I was starting out with BlueDot Impact I looked around for write-ups on the value of education and found them lacking. This post might help others who are starting out with field building and are unsure about putting time into education work.

Case study: The AI Safety Fundamentals Course

Running the AI Safety Fundamentals Course

Before running the AI Safety Fundamentals course, I was running a casual reading group in Cambridge on technical AI safety papers.

We had a problem with the reading group: lots of people wanted to join, would turn up, but would bounce because they didn't know what was going on. The space wasn't for them. Not only that, the experienced members of the group found themselves repeatedly explaining the same introductory concepts to newcomers. The space wasn't delivering for experienced people, either.

It was therefore hard to get a community off the ground, as attendance at the reading group was low. Dewi (later, my co-founder) noticed this problem and got to work on a curriculum with Richard Ngo, then a PhD student at Cambridge working on the foundations of the alignment problem. As I recall it, their aim was to make an 'onboarding course for the Cambridge AI safety reading group'. (In the end, the course far outgrew that remit!)

Lesson 1: A space for everyone is a space for no-one.

You should feel okay about being exclusive to specific audiences. Where you can, try to be inclusive by providing other options for audiences you're not focusing on. That could look like:

- Branding your event or discussion as "introductory" to signal that it is for beginners.
- Setting clear criteria for entry to help people self-assess, e.g. "assumed knowledge: can code up a neural network using a library like Tensorflow/Pytorch".

There was no great way to learn about alignment, pre-2020

To help expose some of the signs that a field is ready for educational materials to be produced, I'll briefly discuss how the AI alignment educational landscape looked before AISF.

In 2020, the going advice for how to learn about AI safety for the first time was: read everything on the Alignment Forum. I might not need to spell out the problems with this advice, but:

- A list of blog posts is unstructured, so it's very hard to build up a picture of what's going on. Everyone was building their own mental framework for alignment from scratch.
- It's very hard for amateurs ...