Ethereum co-founder Vitalik Buterin has warned about the risks of superintelligent AI and argued for strong defensive mechanisms.
Buterin’s comments come amid the rapid development of artificial intelligence, which has significantly heightened concerns about AI safety.
Buterin’s Plan to Regulate AI: Responsibility, the Pause Button, and International Control
In a January 5 blog post, Vitalik Buterin laid out the idea of ‘d/acc,’ or defensive acceleration, emphasizing that technology should be developed to protect rather than harm. This is not the first time Buterin has discussed the risks associated with artificial intelligence.
“One of the ways AI could worsen the situation is the worst possible way: it could cause human extinction,” Buterin said in 2023.
Buterin has continued to develop those ideas since 2023. He believes superintelligence could emerge in just a few years.
“It’s possible we have a three-year roadmap for AGI and an additional three years for superintelligence. So, if we don’t want the world to be destroyed or fall into an irreversible trap, we need to not only promote the good but also slow down the bad,” Buterin wrote.
To mitigate AI-related risks, Buterin proposes building decentralized AI systems that remain closely tied to human decision-making. Keeping AI a tool in human hands, he argues, minimizes the risk of catastrophic outcomes.
Buterin explains that militaries could be the responsible actors in an ‘AI apocalypse’ scenario. Military use of AI is increasing globally, as seen in Ukraine and Gaza. He also said that any AI regulation would likely exempt the military, which makes it a serious threat.
The Ethereum co-founder has outlined a plan to regulate AI use. He said the first step in avoiding AI-related risks is to hold users liable.
“While the connection between how the model is developed and how it is ultimately used is often unclear, users decide exactly how AI is used,” Buterin explains, emphasizing the role of users.
If liability rules prove ineffective, the next step would be to implement “soft pause buttons” that allow AI development to be slowed when it becomes potentially dangerous.
“The goal is to have the capability to reduce worldwide available compute by about 90-99% for 1-2 years at a critical period, to give humanity more time to prepare.”
He said such a pause button could be implemented through registration and location verification of AI hardware.
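To make the registration idea concrete, here is a minimal sketch of what a hardware registry with location verification might look like. It is an illustration only: the ChipRecord and HardwareRegistry names, fields, and exact-match rule are assumptions, not anything specified in Buterin’s post.

```python
# Illustrative sketch only: Buterin's post describes the policy idea,
# not an implementation. All names and rules here are assumptions.
from dataclasses import dataclass

@dataclass
class ChipRecord:
    chip_id: str
    owner: str
    attested_location: str  # location recorded at registration time

class HardwareRegistry:
    """Toy registry: a chip is authorized only if it is registered and
    its currently reported location matches the registered one."""

    def __init__(self) -> None:
        self._records: dict[str, ChipRecord] = {}

    def register(self, record: ChipRecord) -> None:
        self._records[record.chip_id] = record

    def is_authorized(self, chip_id: str, reported_location: str) -> bool:
        record = self._records.get(chip_id)
        return record is not None and record.attested_location == reported_location
```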
Another approach is to control AI hardware directly. Buterin explains that AI hardware could be equipped with a chip that restricts when it can run. The chip would allow AI systems to operate only if they receive three signatures per week from international organizations, with at least one of those organizations having no military connections.
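As a rough illustration of that quorum rule, the sketch below grants operation only when there are at least three valid, week-fresh signatures from distinct organizations, at least one of them non-military. The Approval structure and the signature_valid stand-in are assumptions made for this example; a real device would verify cryptographic signatures in hardware.

```python
# Rough illustration of the weekly three-signature quorum rule.
# All structures here are assumptions; signature_valid stands in for
# real cryptographic verification done on the chip itself.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Approval:
    org: str
    military_affiliated: bool
    signed_at: datetime
    signature_valid: bool  # stand-in for verifying a real signature

def chip_may_run(approvals: list[Approval], now: datetime | None = None) -> bool:
    """Allow operation only with three or more valid signatures from
    distinct organizations within the past week, at least one of which
    comes from a body with no military connections."""
    now = now or datetime.now(timezone.utc)
    week_ago = now - timedelta(days=7)
    fresh = {a.org: a for a in approvals
             if a.signature_valid and a.signed_at >= week_ago}
    if len(fresh) < 3:
        return False
    return any(not a.military_affiliated for a in fresh.values())
```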
Still, Buterin admits that his strategies have flaws and are only ‘temporary intermediate measures.’