EA - Australians are concerned about AI risks and expect strong government action by Alexander Saeri
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Australians are concerned about AI risks and expect strong government action, published by Alexander Saeri on March 8, 2024 on The Effective Altruism Forum.

Key insights

A representative online Survey Assessing Risks from AI (SARA) of 1,141 Australians in Jan-Feb 2024 investigated public perceptions of AI risks and support for AI governance actions.

Australians are most concerned about AI risks where AI acts unsafely (e.g., acting in conflict with human values, failure of critical infrastructure), is misused (e.g., cyber attacks, biological weapons), or displaces human jobs; they are least concerned about AI-assisted surveillance, or bias and discrimination in AI decision-making.

Australians judge "preventing dangerous and catastrophic outcomes from AI" as the #1 priority for the Australian Government in AI; 9 in 10 Australians support creating a new regulatory body for AI.

To meet public expectations, the Australian Government must urgently increase its capacity to govern increasingly capable AI and address diverse risks from AI, including catastrophic risks.

Findings

Australians are concerned about diverse risks from AI

When asked about a diverse set of 14 possible negative outcomes from AI, Australians were most concerned about AI systems acting in ways that are not safe, not trustworthy, and not aligned with human values. Other high-priority risks include AI replacing human jobs, enabling cyber attacks, operating lethal autonomous weapons, and malfunctioning within critical infrastructure.

Australians are skeptical of the promise of artificial intelligence: 4 in 10 support the development of AI, 3 in 10 oppose it, and opinions are divided about whether AI will be a net good (4 in 10) or a net harm (4 in 10).

Australians support regulatory and non-regulatory action to address risks from AI

When asked to choose the top 3 AI priorities for the Australian Government, the #1 selected priority was preventing dangerous and catastrophic outcomes from AI.

Other actions prioritised by at least 1 in 4 Australians included (1) requiring audits of AI models to make sure they are safe before being released, (2) making sure that AI companies are liable for harms, (3) preventing AI from causing human extinction, (4) reducing job losses from AI, and (5) making sure that people know when content is produced using AI.

Almost all (9 in 10) Australians think that AI should be regulated by a national government body, similar to how the Therapeutic Goods Administration acts as a national regulator for drugs and medical devices. 8 in 10 Australians think that Australia should lead the international development and governance of AI.

Australians take catastrophic and extinction risks from AI seriously

Australians consider the prevention of dangerous and catastrophic outcomes from AI the #1 priority for the Australian Government. In addition, a clear majority (8 in 10) of Australians agree with AI experts, technology leaders, and world political leaders that preventing the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war [1].

Artificial Intelligence was judged the third most likely cause of human extinction, after nuclear war and climate change. AI was judged more likely than a pandemic or an asteroid impact.
About 1 in 3 Australians think it's at least 'moderately likely' AI will cause human extinction in the next 50 years.

Implications and actions supported by the research

Findings from SARA show that Australians are concerned about diverse risks from AI, especially catastrophic risks, and expect the Australian Government to address these through strong governance action.

Australians' ambivalence about AI and expectation of strong governance action to address risks is a consistent theme of public opinion rese...