Ethereum co-founder Vitalik Buterin recently voiced concerns about the potential risks associated with superintelligent AI. He warned that if artificial intelligence (AI) were to become adversarial to humanity, the threat could extend beyond Earth, impacting other planets such as Mars.
Buterin argued in a Nov. 27 blog post that AI is different from other inventions because it has the potential to become a new type of “mind” with its own goals and motivations, which may not align with human interests.
“AI is […] a new type of mind that is rapidly gaining in intelligence, and it stands a serious chance of overtaking humans’ mental faculties and becoming the new apex species on the planet.”
According to Buterin, there is a possibility that AI could surpass human intelligence and become the dominant force on Earth, potentially replacing humans as the “apex species” – similar to a scenario involving intelligent aliens.
Buterin noted that although AI has the potential to exceed human intelligence, it ultimately falls to humans to determine how its development should proceed.
“One way in which AI gone wrong could make the world worse is (almost) the worst possible way: it could cause human extinction,” Buterin wrote, adding that “even Mars may not be safe.”
In the post, Buterin cites a survey of 4,270 machine learning researchers in which a significant minority estimated a 5–10% chance of AI causing harm to humanity.
While acknowledging the extreme nature of superintelligent AI wiping out humanity, the Ethereum co-founder emphasizes that it is not an inevitable outcome.
Buterin proposes that humans could maintain control over AI by using brain-computer interfaces (BCIs) to mitigate potential harm.
A brain-computer interface (BCI) acts as a communication channel, connecting the electrical signals from the brain to an external device like a computer or robotic limb.
According to Buterin, speeding up the transmission of information between humans and machines could help prevent AI from taking over the world.
Buterin proposes that humans should actively participate in shaping the development of AI, ensuring that its decisions align with safety and ethics.
“We [can] reduce the incentive to offload high-level planning responsibility to the AI itself, and thereby reduce the chance that the AI does something unaligned with humanity’s values on its own.”
He argues that focusing purely on profit maximization may not be the best way to guide AI development.
According to Buterin, humans stand out as remarkable beings in the universe because of our ability to create technologies that expand our potential and possibilities.
“Two billion years from now, if the Earth or any part of the universe still bears the beauty of Earthly life, it will be human artifices like space travel and geoengineering that will have made it happen.”