A recent report by twenty-five leading researchers and experts under the International Dialogues on AI Safety (IDAIS) calls on global leaders to put strict rules in place to manage the risks posed by advanced AI. The authors believe that current efforts to govern AI development are not enough, particularly as the technology is evolving quickly.
Among these professionals are Yoshua Bengio, professor at the Université de Montréal, founder and director of Mila (Quebec AI Institute), and a Turing Award winner; Andrew Yao, dean of the Institute for Interdisciplinary Information Sciences and the College of AI at Tsinghua University, and also a Turing Award winner; Geoffrey Hinton, Chief Scientific Advisor at the University of Toronto’s Vector Institute and another Turing Award winner; and Zhang Ya-Qin, director of the Tsinghua Institute for AI Industry Research and former president of Baidu.
Others include Stuart Russell, professor at UC Berkeley and founder of the Center for Human-Compatible AI; Gillian Hadfield, incoming professor at Johns Hopkins University and professor of law and strategic management at the University of Toronto; Mary Robinson, former President of Ireland and chair of the Elders; and Xue Lan, dean of Schwarzman College at Tsinghua University and director of the Institute for AI International Governance.
The experts warn that powerful general AI systems could be created within the next decade, presenting new challenges for human control. These systems might act on their own goals, potentially clashing with human interests and leading to disastrous results if not carefully regulated. The researchers note that not enough attention is being given to AI safety, with only a small percentage of research focusing on these concerns.
Key AI figures, like Professor Stuart Russell from UC Berkeley, advocate for stringent regulatory measures rather than relying on voluntary industry guidelines. They warn that neglecting AI safety could lead to severe consequences, including loss of life and environmental harm.
AI deepfake stories
In early 2023, a deepfake image of an explosion near the Pentagon, created by AI, was shared on social media, causing temporary disruption in the stock market. This shows how quickly fake information can spread and impact financial systems.
A TikTok account with the username @deeptomcruise has gained millions of followers by posting deepfake videos of Tom Cruise, raising concerns about how easily celebrities can be impersonated with this technology.
Additionally, in South Korea, the TV channel MBN has used a deepfake of news anchor Kim Joo-Ha for its broadcasts, sparking debates about whether AI could replace human journalists in the future.
As the 2024 U.S. presidential election approaches, deepfakes are being used to mislead the public. For example, a fake video of President Biden falsely claiming Russia has controlled Kyiv for ten years was shared online, showing how deepfakes can distort political narratives.
This February, a fraudster used deepfake technology to convincingly pose as several high-ranking officials of a multinational corporation, tricking a staff member into transferring more than $25 million of company funds.
Radio Television Hong Kong (RTHK) reported that Acting Senior Superintendent Baron Chan of the Hong Kong Police Force's Cyber Security Division disclosed details of the scheme, which took place the previous month.
The employee was deceived by a fake message, purportedly from the company's CFO, inviting them to a confidential video call. Using sophisticated deepfake technology, the scammers impersonated multiple executives on the call and persuaded the employee to transfer $25.5 million into five different bank accounts.