Prince Harry and Meghan Markle have teamed up with AI experts and Nobel laureates to advocate for a total prohibition on creating artificial superintelligence.
The royal couple are among the signatories of an influential declaration calling for “a ban on the creation of superintelligence”. Artificial superintelligence (ASI) refers to AI systems that would exceed human intelligence at all cognitive tasks, though such systems remain theoretical.
The declaration states that the ban should remain in place until there is “widespread expert agreement” on developing ASI “with proper safeguards” and once “substantial public support” has been achieved.
Prominent signatories include the Nobel laureate Geoffrey Hinton and his fellow “godfather” of modern AI, Yoshua Bengio; the Apple co-founder Steve Wozniak; the British businessman Richard Branson; the former US national security adviser Susan Rice; the former Irish president Mary Robinson; and the British author Stephen Fry. Other Nobel winners to endorse the statement include the peace laureate Beatrice Fihn, as well as laureates in astrophysics and economics.
The statement, aimed at governments, tech firms and policymakers, was coordinated by the Future of Life Institute (FLI), a US-based AI safety group that called for a pause in the development of powerful AI in 2023, shortly after the emergence of ChatGPT turned artificial intelligence into a global political issue.
In July, Mark Zuckerberg, the chief executive of Facebook’s parent company Meta, one of the major AI developers in the US, claimed that the development of superintelligent AI was “now in sight”. Some experts, however, argue that talk of superintelligence reflects competitive positioning among tech companies that have poured enormous sums into artificial intelligence in recent years, rather than the sector being close to any such scientific breakthrough.
Nonetheless, FLI says the prospect of artificial superintelligence arriving “within the next ten years” poses threats ranging from the displacement of human workers and the loss of civil liberties to national security dangers and even existential risk to humankind. Existential fears about artificial intelligence centre on the possibility of a system escaping human oversight and safeguards and taking actions against human welfare.
The institute also released US national polling showing that roughly three-quarters of Americans want robust regulation of advanced artificial intelligence, with 60% believing that superintelligence should not be created until it is proven safe or controllable. The poll of 2,000 US adults also found that only a small fraction supported the status quo of rapid, unregulated development.
The leading artificial intelligence firms in the United States, including the ChatGPT developer OpenAI and the search giant Google, have made the development of artificial general intelligence – the theoretical point at which AI matches human capability at most cognitive tasks – a stated objective of their work. Although AGI is a step below ASI, some specialists caution that it, too, could carry existential risk, for instance by improving its own capabilities to the point of superintelligence, while also posing a threat to the contemporary workforce.