Nigeria, represented by NITDA, has joined forces with 17 other nations to adopt the latest artificial intelligence (AI) guidelines developed by the UK. This collaboration comes a few weeks after the UK hosted its inaugural global AI Summit, which aimed to tackle issues related to the global use of AI.
The guidelines, led by GCHQ’s National Cyber Security Centre (NCSC) and developed in partnership with the US’s Cybersecurity and Infrastructure Security Agency (CISA), offer a comprehensive framework for enhancing the cybersecurity standards of artificial intelligence. They place significant emphasis on the secure design, development, and deployment of AI systems.
There are mounting concerns surrounding AI, including fears of existential threats, data privacy issues, potential job losses, the spread of misinformation, a lack of algorithmic transparency, and emerging crimes such as deepfakes. All of these factors make regulation necessary.
Striking a balance between harnessing the benefits of AI and addressing its risks is crucial. This requires building solidarity, fostering community, and developing collective solutions. It is essential to maintain human control and ensure transparency in AI decision-making processes for effective management.
In Nigeria, steps have been taken to address such issues, including training talent to develop and deploy AI-based infrastructure. Recently, NITDA called for a close partnership with the Central Bank of Nigeria to integrate AI into payment systems and to form a regulatory framework guiding the industry and its usage. The Speaker of the House of Representatives has also called for regulation, saying he will sponsor a bill to address the dangers AI poses to society.
Details of the AI guidelines
Developed collaboratively with industry experts and 21 international agencies, the guidelines emphasize global collaboration on AI security. Key features include a ‘secure by design’ approach, informed decision-making for developers, and endorsement by 18 countries.
The guidelines are also broken down into four key areas: secure design, secure development, secure deployment, and secure operation and maintenance.
The official launch, which featured a panel discussion with industry and government partners, underscored the commitment to transparency and secure practices in AI development worldwide.
NCSC CEO Lindy Cameron noted the need for international action to keep up with the rapid pace of AI development. She said that the guidelines are a significant step toward a global understanding of cyber risks and mitigation strategies.
“The guidelines jointly issued today by CISA, NCSC, and our other international partners, provide a common-sense path to designing, developing, deploying, and operating AI with cyber security at its core,” said Secretary of Homeland Security Alejandro Mayorkas.
“By integrating ‘secure by design’ principles, these guidelines represent a historic agreement that developers must invest in, protecting customers at each step of a system’s design and development.”
A full list of international signatories includes agencies from Australia, Canada, Chile, Czechia, Estonia, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, Poland, the Republic of Korea, Singapore, the UK, and the US.