EA - Ingroup Deference by trammell
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ingroup Deference, published by trammell on May 26, 2023 on The Effective Altruism Forum.

Epistemic status: yes. All about epistemics.

Introduction

In principle, all that motivates the existence of the EA community is collaboration around a common goal. As the shared goal of preserving the environment characterizes the environmentalist community, say, EA is supposed to be characterized by the shared goal of doing the most good. But in practice, the EA community shares more than just this abstract goal (let’s grant that it does at least share the stated goal) and the collaborations that result. It also exhibits an unusual distribution of beliefs about various things, like the probability that AI will kill everyone or the externalities of polyamory.

My attitude has long been that, to a first approximation, it doesn’t make sense for EAs to defer to each other’s judgment any more than to anyone else’s on questions lacking consensus. When we do, we land in the kind of echo chamber which convinced environmentalists that nuclear power is more dangerous than most experts think, and which at least to some extent seems to have trapped practically every other social movement, political party, religious community, patriotic country, academic discipline, and school of thought within an academic discipline on record.

This attitude suggests the following template for an EA-motivated line of strategy reasoning, e.g. an EA-motivated econ theory paper:

1. Look around at what most people are doing. Assume you and your EA-engaged readers are no more capable or better informed than others are, on the whole; take others’ behavior as a best guess on how to achieve their own goals.
2. Work out [what, say, economic theory says about] how to act if you believe what others believe, but replace the goal of “what people typically want” with some conception of “the good”.

And so a lot of my own research has fit this mold, including the core of my work on “patient philanthropy” [1, 2] (if we act like typical funders except that we replace the rate of pure time preference with zero, here’s the formula for how much higher our saving rate should be). The template is hardly my invention, of course. Another example would be Roth Tran’s (2019) paper on “mission hedging” (if a philanthropic investor acts like a typical investor except that they’ll be spending the money on some cause, instead of their own consumption, here’s the formula for how they should tweak how they invest). Or this post on inferring AI timelines from interest rates and setting one’s philanthropic strategy accordingly.

But treating EA thought as generic may not be a good first approximation. Seeing the “EA consensus” be arguably ahead of the curve on some big issues—Covid a few years ago, AI progress more recently—raises the question of whether there’s a better heuristic: one which doesn’t treat these cases as coincidences, but which is still principled enough that we don’t have to worry too much about turning the EA community into [more of] an echo chamber all around. This post argues that there is.

The gist is simple. If you’ve been putting in the effort to follow the evolution of EA thought, you have some “inside knowledge” of how it came to be what it is on some question.
(I mean this not in the sense that the evolution of EA thinking is secret, just in the sense that it’s somewhat costly to learn.) If this costly knowledge informs you that EA beliefs on some question are unusual because they started out typical and then updated in light of some idiosyncratic learning, e.g. an EA-motivated research effort, then it’s reasonable for you to update toward them to some extent. On the other hand, if it informs you that EA beliefs on some question have been unusual from the get-go, it makes sense to update the other way, toward the distribution o...
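To make that heuristic concrete, here is a minimal toy sketch of my own (not a model from the post), assuming a simple log-odds pooling rule in which the weight you give the ingroup’s credence depends on whether its divergence from the outside view is evidence-driven or a founder effect; the function name and the weights are arbitrary placeholders, not anything the author proposes.

```python
# Toy illustration: how much to move toward an ingroup's credence depends on
# *why* that credence diverges from the outside view.
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def pooled_credence(outside_prior, ingroup_credence, evidence_driven,
                    weight_if_evidence=0.7, weight_if_founder=0.1):
    """
    outside_prior:     typical credence among outsiders (your starting point)
    ingroup_credence:  the ingroup's current credence
    evidence_driven:   True if the ingroup started near the outside prior and
                       moved because of identifiable learning; False if its
                       belief was unusual from the get-go (a selection effect)
    The deference weights are arbitrary placeholders.
    """
    w = weight_if_evidence if evidence_driven else weight_if_founder
    pooled_logit = (1 - w) * logit(outside_prior) + w * logit(ingroup_credence)
    return sigmoid(pooled_logit)

# Same divergence, different history, different conclusion:
print(pooled_credence(0.10, 0.50, evidence_driven=True))   # defer substantially -> ~0.34
print(pooled_credence(0.10, 0.50, evidence_driven=False))  # barely move         -> ~0.12
```

The point of the toy is only that the same gap between inside and outside credences can warrant very different degrees of deference depending on how it arose.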