EA - EA and Longtermism: not a crux for saving the world by ClaireZabel
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA and Longtermism: not a crux for saving the world, published by ClaireZabel on June 3, 2023 on The Effective Altruism Forum.

This is partly based on my experiences working as a Program Officer leading Open Phil's Longtermist EA Community Growth team, but it's a hypothesis I have about how some longtermists could have more of an impact by their lights, not an official Open Phil position.

Context: I originally wrote this in July 2022 as a memo for folks attending a retreat I was going to. I find that I refer to it pretty frequently, and it seems relevant to ongoing discussions about how much meta effort by EAs should focus on engaging more EAs vs. other non-EA people. I am publishing it with light-ish editing, and some parts are outdated, though for the most part I hold most of the conclusions more strongly than I did when I originally wrote it.

Tl;dr: I think that recruiting and talent pipeline work done by EAs who currently prioritize x-risk reduction ("we" or "us" in this post, though I know it won't apply to all readers) should put more emphasis on ideas related to existential risk, the advent of transformative technology, and the "most important century" hypothesis, and less emphasis on effective altruism and longtermism, in the course of their outreach.

A lot of EAs who prioritize existential risk reduction are making increasingly awkward and convoluted rhetorical maneuvers to use "EAs" or "longtermists" as the main label for people we see as aligned with our goals and priorities. I suspect this is suboptimal and, in the long term, infeasible. In particular, I'm concerned that this is a reason we're failing to attract and effectively welcome some people who could add a lot of value. The strongest counterargument I can think of right now is that I know of relatively few people doing full-time work on existential risk reduction in AI and biosecurity who were drawn in by just the "existential risk reduction" frame [this seemed more true in 2022 than 2023].

This is in the vein of Neel Nanda's "Simplify EA Pitches to 'Holy Shit, X-Risk'" and Scott Alexander's "Long-termism vs. Existential Risk", but I want to focus more on the hope of attracting people to do priority work even if their motivations are neither longtermist nor neartermist EA, but instead mostly driven by reasons unrelated to EA.

EA and longtermism: not a crux for doing the most important work

Right now, my priority in my professional life is helping humanity navigate the imminent creation of potentially transformative technologies, to try to make the future better for sentient beings than it would otherwise be. I think that's likely the most important thing anyone can do these days. And I don't think EA or longtermism is a crux for this prioritization anymore.

A lot of us (EAs who currently prioritize x-risk reduction) were "EA-first": we came to these goals first via broader EA principles and traits, like caring deeply about others; liking rigorous research, scope sensitivity, and expected value-based reasoning; and wanting to meet others with similar traits. Next, we were exposed to a cluster of philosophical and empirical arguments about the importance of the far future and the potential technologies and other changes that could influence it.
Some of us were "longtermists-second"; we prioritized making the far future as good as possible, regardless of whether we thought we were in an exceptional position to do this, and believed that existential risk reduction would be one of the core activities for doing it.

For most of the last decade, I think most of us have emphasized EA ideas when trying to discuss x-risk with people outside our circles. And locally, this worked pretty well; some people (a whole bunch, actually) found these ideas compelling and ended up prioritizing similarly. I think t...