A moral backlash against AI will probably slow down AGI development by Geoffrey Miller

Published by Geoffrey Miller on May 31, 2023, on the Effective Altruism Forum.

Note: This is a submission for the 2023 Open Philanthropy AI Worldviews contest, due May 31, 2023. It addresses Question 1: “What is the probability that AGI is developed by January 1, 2043?”

Overview

People tend to view harmful things as evil, and treat them as evil, to minimize their spread and impact. If enough people are hurt, betrayed, or outraged by AI applications, lose their jobs, professional identity, and sense of purpose to AI, or become concerned about the existential risks of AI, then an intense public anti-AI backlash is likely to develop. That backlash could become a global, sustained, coordinated movement that morally stigmatizes AI researchers, AI companies, and AI funding sources. If that happens, then AGI is much less likely to be developed by the year 2043. Negative public sentiment could be much more powerful in slowing AI than even the most draconian global regulations or a formal moratorium, yet it is a neglected factor in most current AI timelines.

Introduction

The likelihood of AGI being developed by 2043 depends on two main factors: (1) how technically difficult it will be for AI researchers to make progress on AGI, and (2) how many resources – in terms of talent, funding, hardware, software, training data, etc. – are available for making that progress. Many experts’ ‘AI timelines’ assume that AGI likelihood will be dominated by the first factor (technical difficulty), and that the second factor (available resources) will continue increasing.

In this essay I disagree with that assumption. The resources allocated to AI research, development, and deployment may be much more vulnerable to public outrage and anti-AI hatred than the current AI hype cycle suggests. Specifically, I argue that ongoing AI developments are likely to provoke a moral backlash against AI that will choke off many of the key resources for making further AI progress. This public backlash could deploy the ancient psychology of moral stigmatization against our most advanced information technologies. The backlash is likely to be global, sustained, passionate, and well-organized. It may start with grass-roots concerns among a few expert ‘AI doomers’ and among journalists concerned about narrow AI risks, but it is likely to become better organized over time as anti-AI activists join together to fight an emerging existential threat to our species.

(Note that this question of anti-AI backlash likelihood is largely orthogonal to the issues of whether AGI is possible, and whether AI alignment is possible.)

I’m not talking about a violent Butlerian Jihad. In the social media era, violence in the service of a social cause is almost always counter-productive, because it undermines the moral superiority and virtue-signaling strategies of righteous activists. (Indeed, a lot of ‘violence by activists’ turns out to be false-flag operations funded by vested interests to discredit the activists fighting those vested interests.)

Rather, I’m talking about a non-violent anti-AI movement at the social, cultural, political, and economic levels. For such a movement to slow down the development of AGI by 2043 (relative to the current expectations of the Open Philanthropy panelists judging this essay competition), it only has to arise sometime in the next 20 years, and to gather enough public, media, political, and/or investor support to handicap the AI industry’s progress toward AGI, in ways that have not yet been incorporated into most experts’ AI timelines.

An anti-AI backlash could include political, religious, ideological, and ethical objections to AI, sparked by vivid, outrageous, newsworthy fai...