EA - There and back again: reflections from leaving EA (and returning) by LotteG

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There and back again: reflections from leaving EA (and returning), published by LotteG on March 18, 2024 on The Effective Altruism Forum.

This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked.

Commenting and feedback guidelines: This is a Forum post that I wouldn't have posted without the nudge of Draft Amnesty Week, and is indeed my first ever forum post. Fire away! (But be nice, as usual.)

In Autumn 2016, as a first-year undergraduate, I discovered Effective Altruism. Although I don't remember my inaugural meeting with EA, it must have had a big impact on me, because within a few short months I was all in. At the time, I was a physics student who had grown up with a deep - but not yet concrete - motivation to "make the world a better place". I had not yet formed any solid career ambitions, as I was barely aware of the kinds of careers that even existed for mathsy people like me - let alone any that would make me feel morally fulfilled. When I encountered EA, it felt like everything was finally slotting together. My nineteen-year-old brain was buzzing with the possibilities ahead.

But by the following summer, barely a single fraying thread held me to EA. I had severed myself from EA and its community.

Several years on, I have somehow found myself even more involved in EA than I was before (and, once again, I'm not fully sure how this happened). Now, I work in an EA job, engage with EA content, and even have EA friends (!). I genuinely believe that if I had not left EA when I did, I wouldn't be able to describe my current relationship with EA in the two ways I do now: sustainable and healthy.

Reflecting back on this transition, I have three key takeaways, specifically aimed at EA-aligned grads who are making their entry into the workforce.

Disclaimers:

These reflections probably do not apply in all cases. Most likely, there is variation in applicability by cause area, type of work, person, organisation, etc. This post is from my own perspective. For context, I work in operations.

None of my commentary below is intended as a criticism of any specific org or institution. I simply hope to open people's minds to paths which go against what is seen as the default route to impact for many EAs coming out of university.

(1) Skill building >> impressiveness factor

My reservations with elite private institutions

I often hear career advice in the EA space along the lines of: "Aim for the most impressive thing that you can get on your CV as quickly as possible, and by impressive we mean something like working somewhere elite in the private sector."

I disagree with this advice on two levels:

1. Effort pay-off??

Emphasising the impressiveness factor of a career move shifts focus away from what should actually be the priority: the skills gained.

During my time away from EA, I saw many of my non-EA peers seek extremely prestigious roles at elite institutions - think Google, Goldman Sachs, PwC, and so on.
Something that really struck me was how competitive, high-effort, time-consuming, and stressful the hiring rounds for these jobs were.

And if they were lucky enough to beat the huge amount of competition and get the job, yeah, it would look great on their LinkedIn - but the tradeoff was often working long hours in a pressure-cooker environment, in a role that sometimes involved a high proportion of donkey work.

The bias towards prestigious-sounding jobs is widespread across society, so it is no surprise that it has also proliferated in EA. Among EAs, I suppose, the allure of such jobs rests on the assumption that the more prestigious an establishment is, the better it will train you, thanks to its greater resources.

But think about it this way: given how much time, effort and (as you are probabilistically l...