The Case for an AI Governance Policy in Nigeria, By Kelechi Okoronkwo, Ph.D

Adaptability is a key survival strategy for humans, animals and organizations. To survive, we must adapt quickly. The focus of this article is Artificial Intelligence (AI): what is the trend, and how are countries and multinational corporations adapting to it?

There are alarming ethical issues associated with AI. Top of the list is developer bias. While many developing countries seem to pay little attention to this, countries and bodies such as the United States, Canada, China and the European Union are already taking measures to protect their citizens and customers.

AI is here to stay, and it is rapidly changing our society. The world is no longer what it used to be, and it will not remain what it is. Predictions about the future of AI and digital communication suggest that humans will soon be helplessly dependent on it.

Let us briefly look at current AI trends in a few areas: customer service, transportation, production, education, and research.

In the area of customer service, for instance, big corporations have moved beyond simple AI chatbots to virtual assistants like Amazon Alexa. AI is now trained to learn from its conversations, update its memory, and make predictions, and it does so round the clock.

AI can make phone calls and perform sentiment analysis, examining customer feedback and social media conversations to gauge satisfaction and improve services. It can also personalize interactions by recommending products or services based on customer behavior and preferences.
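
For illustration, here is a minimal sketch of the kind of sentiment analysis described above. It assumes the open-source Hugging Face transformers library and made-up sample feedback; the article names no specific tool, so treat these choices as assumptions.

```python
# A toy illustration of AI-driven sentiment analysis on customer feedback.
# Assumes the open-source Hugging Face "transformers" library
# (pip install transformers); illustrative only, not a named product.
from transformers import pipeline

# Downloads a default pretrained sentiment model on first run.
classifier = pipeline("sentiment-analysis")

feedback = [
    "The delivery was fast and the agent was very helpful.",
    "I have been on hold for two hours and nobody answers.",
]

for text, result in zip(feedback, classifier(feedback)):
    # Each result is a dict like {"label": "POSITIVE", "score": 0.99}.
    print(f"{result['label']:>8} ({result['score']:.2f}): {text}")
```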

In transportation, autonomous vehicles like Tesla’s Autopilot and Waymo’s driverless taxis use AI for navigation, obstacle detection, and traffic management. AI-based driver-assistance systems such as adaptive cruise control, lane-keeping assistance, and collision detection enhance driver safety. AI is also deployed in fleet management to predict vehicle maintenance needs and optimize delivery routes.

In production, factories use robots like Ameca, which can speak many languages and interact with co-workers. They use AI-driven predictive-maintenance sensors to monitor machinery and predict failures before they happen, while vision-based AI systems automate quality control by inspecting products for defects. Collaborative robots (cobots) work alongside human workers on repetitive tasks like welding, assembly, and packaging.
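
As a rough illustration of predictive maintenance, the sketch below flags anomalous sensor readings with an off-the-shelf anomaly detector. The simulated vibration data and the choice of scikit-learn’s IsolationForest are assumptions for this sketch, not a factory recipe.

```python
# Toy predictive-maintenance sketch: flag anomalous vibration readings
# before a machine fails. The data and the model choice (scikit-learn's
# IsolationForest) are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulate hourly vibration readings: mostly normal, a few early-failure spikes.
normal = rng.normal(loc=1.0, scale=0.05, size=(500, 1))
spikes = rng.normal(loc=1.6, scale=0.10, size=(5, 1))
readings = np.vstack([normal, spikes])

# Fit the detector and label each reading (-1 = anomaly, 1 = normal).
detector = IsolationForest(contamination=0.01, random_state=0).fit(readings)
labels = detector.predict(readings)

alerts = np.where(labels == -1)[0]
print(f"Flagged {len(alerts)} of {len(readings)} readings for inspection")
```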

In education and research, AI powers personalized learning platforms such as Khan Academy and Coursera, which recommend learning materials based on student progress. AI now serves as a virtual tutor, providing one-on-one support and feedback to students, and it handles demanding tasks such as grading: automated systems like Gradescope can evaluate assignments and exams.
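
For a flavor of automated grading, here is a toy rule-based grader for numeric short answers. It is an illustrative sketch only, with made-up student submissions, and does not reflect how Gradescope works internally.

```python
# Toy automated-grading sketch for numeric short answers.
# Rule-based illustration only; not Gradescope's actual method.
def grade(answer: str, expected: float, tolerance: float = 0.01) -> bool:
    """Mark an answer correct if it parses to a number within tolerance."""
    try:
        return abs(float(answer) - expected) <= tolerance
    except ValueError:
        return False

# Hypothetical submissions for the question "What is pi to two decimals?"
submissions = {"ada": "3.14", "bayo": "3.2", "chika": "pi"}
for student, answer in submissions.items():
    mark = "correct" if grade(answer, expected=3.14) else "incorrect"
    print(f"{student}: {mark}")
```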

AI is also deployed in research for data analysis, sifting through massive datasets to identify patterns and trends. It can conduct literature reviews and summarization: systems like Semantic Scholar assist in finding relevant papers and summarizing key findings. AI-driven models likewise aid scientific discovery in drug development, climate change modeling, and genetic research.
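
As an example of AI-assisted literature search, the sketch below queries Semantic Scholar’s public Graph API. The endpoint, parameters and field names follow its public documentation at the time of writing and should be treated as assumptions; it requires the requests package.

```python
# Minimal literature-search sketch against Semantic Scholar's public Graph API.
# Endpoint and field names per its public docs at the time of writing;
# treat them as assumptions. Requires: pip install requests
import requests

resp = requests.get(
    "https://api.semanticscholar.org/graph/v1/paper/search",
    params={"query": "AI governance Africa", "fields": "title,year", "limit": 5},
    timeout=30,
)
resp.raise_for_status()

# The response body contains a "data" list of matching papers.
for paper in resp.json().get("data", []):
    print(f"{paper.get('year')}: {paper.get('title')}")
```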

As awesome as AI’s capacity currently is, experts believe the world is still at the periphery of that capacity, relying on weak or narrow AI. Future AI is likely to do much more. The world is already aware of this, and it is preparing to at least guardrail AI to its benefit. Organizations such as the International Business Machines Corporation (IBM), the European Union (EU) and the Organisation for Economic Co-operation and Development (OECD), and developed economies such as the United States, Canada and China, have already created AI governance policies. These are policies created at government level, for states, or at top-management level, for organizations, to protect themselves and their citizens or customers.

This is because there are enormous ethical issues in AI development, arising from the human factors present in it. To monitor these issues, various governments and bodies set up AI governance policies to guide matters such as empathy, bias control, transparency and accountability. There must be clarity and openness in how AI algorithms operate and make decisions, with organizations ready to explain the logic and reasoning behind AI-driven outcomes.

Organizations should proactively set and adhere to high standards to manage the significant changes AI can bring, maintaining responsibility for AI’s impacts.

In the European Union and the countries under it, there is the EU Artificial Intelligence Act, also known as the AI Act. It governs the development and use of AI in the EU. The act takes a risk-based approach to regulation, applying different rules to AI systems according to the risks they pose.

Considered the world’s first comprehensive regulatory framework for AI, the EU AI Act prohibits some AI uses outright and implements strict governance, risk management and transparency requirements for others.

The act also creates rules for general-purpose AI models, such as IBM’s Granite and Meta’s open-source Llama 3 and 4 foundation models. Penalties for infractions range from EUR 7.5 million or 1.5% of worldwide annual turnover to EUR 35 million or 7% of worldwide annual turnover, depending on the type of noncompliance.
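
A quick arithmetic illustration shows how these turnover-linked bands scale with company size. The act generally applies the higher of the fixed amount and the turnover percentage; the turnover figures below are hypothetical, and this is an illustration, not legal advice.

```python
# How the EU AI Act's penalty bands scale with company size.
# The Act generally applies the higher of a fixed amount and a share of
# worldwide annual turnover; hypothetical turnovers, illustration only.
def max_fine(turnover_eur: float, fixed_eur: float, pct: float) -> float:
    return max(fixed_eur, pct * turnover_eur)

for turnover in (100e6, 1e9, 10e9):  # hypothetical annual turnovers
    low = max_fine(turnover, 7.5e6, 0.015)   # least serious band
    high = max_fine(turnover, 35e6, 0.07)    # most serious band
    print(f"turnover EUR {turnover/1e9:.1f}B -> "
          f"fines EUR {low/1e6:.0f}M to EUR {high/1e6:.0f}M")
```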

The United States’ SR 11-7 is the US regulatory standard for effective and strong model governance in banking. The regulation requires bank officials to apply company-wide model risk management initiatives and to maintain an inventory of models in use, under development, or recently retired. Leaders of institutions must also prove that their models achieve the business purpose they were intended to serve, and that they are up to date and have not drifted. Model development and validation must enable anyone unfamiliar with a model to understand its operations, limitations and key assumptions.

Canada’s Directive on Automated Decision-Making describes how the country’s government uses AI to guide decisions in several departments. The directive uses a scoring system to assess the human intervention, peer review, monitoring and contingency planning needed for an AI tool built to serve citizens.

Organizations creating AI solutions with a high score must conduct two independent peer reviews, offer public notice in plain language, develop a human-intervention failsafe and establish recurring training courses for the system. Because the directive is guidance for Canada’s own development of AI, it does not directly affect companies the way SR 11-7 does in the US.
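
In the spirit of the directive’s scoring system, the sketch below maps an assessed impact score to required safeguards. The threshold and level names are simplified illustrations drawn from the description above, not the directive’s actual schedule.

```python
# Simplified sketch of tiered safeguards keyed to an impact score, in the
# spirit of Canada's Directive on Automated Decision-Making. The threshold
# and level names are illustrative, not the directive's actual schedule.
REQUIREMENTS = {
    "low": ["plain-language public notice"],
    "high": [
        "two independent peer reviews",
        "plain-language public notice",
        "human intervention failsafe",
        "recurring training for system operators",
    ],
}

def required_safeguards(impact_score: int) -> list[str]:
    """Map an assessed impact score (0-100, illustrative) to safeguards."""
    level = "high" if impact_score >= 50 else "low"
    return REQUIREMENTS[level]

for score in (20, 75):
    print(score, "->", required_safeguards(score))
```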

Other AI regulations are evolving in Europe, such as the AI package, which includes rules categorizing AI systems as “minimal risk” or “high risk.” High-risk AI systems will be required to meet stricter requirements, and systems deemed an “unacceptable risk” will be banned. Organizations must pay close attention to these rules or risk fines.
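
This risk-based approach can be pictured as a simple triage table in which each AI use is assigned a tier that determines its obligations. The example systems and their tiers below are illustrative only and carry no legal weight.

```python
# Toy triage in the spirit of the risk-based approach: each AI use is
# assigned a tier that determines its obligations. Example entries are
# illustrative and carry no legal weight.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal risk: no extra obligations"
    HIGH = "high risk: strict requirements apply"
    UNACCEPTABLE = "unacceptable risk: banned"

REGISTRY = {
    "spam filter": RiskTier.MINIMAL,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
}

for system, tier in REGISTRY.items():
    print(f"{system}: {tier.value}")
```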

In the Asia-Pacific region, China in 2023 issued its Interim Measures for the Administration of Generative Artificial Intelligence Services. Under the law, the provision and use of generative AI services must “respect the legitimate rights and interests of others” and are required to “not endanger the physical and mental health of others, and do not infringe upon others’ portrait rights, reputation rights, honor rights, privacy rights and personal information rights.”

Other countries in the Asia-Pacific region have released principles and guidelines for governing AI. In 2019, Singapore’s government released a framework with guidelines for addressing AI ethics in the private sector and, more recently, in May 2024, it released a governance framework for generative AI. India, Japan, South Korea and Thailand are also exploring guidelines and legislation for AI governance.

The OECD has developed AI principles that promote human-centered AI, transparency, accountability, and fairness. These guidelines serve as a global standard for ethical AI development and encourage international cooperation.

Data shows that some African countries, including Nigeria, are making efforts towards AI governance, leveraging the African Union’s AI strategy of July 2024. However, Rwanda is the only African country to have reached full implementation backed by law. At the last count, therefore, the rest of the continent’s countries had made no definite policy to guide the use of AI, which means there is no measure to protect their citizens from the indiscretions of AI developers. More African countries, whether individually or as a continent, need to move towards creating AI governance policies.

Nigeria, often regarded as the giant of Africa and one of the continent’s largest economies, should look in this direction. Does this mean that Nigeria is not being adversely affected by the ethical issues in AI development? The answer is no; what is good for Europe, America and Asia is also good for Africa. As AI continues to shape the future, Nigeria cannot afford to stay behind. Establishing a comprehensive AI governance policy is imperative to protect citizens’ rights, promote innovation, and position Nigeria as a leader in Africa’s digital economy.

By learning from global frameworks and tailoring them to its unique socio-economic landscape, Nigeria can harness the power of AI for sustainable development and inclusive growth.

Despite the global progress in AI governance, Africa remains largely absent from the conversation. Nigeria, with its booming tech sector and vibrant startup ecosystem, stands at the forefront of Africa’s digital transformation. However, the lack of a regulatory framework for AI poses significant risks, including data privacy violations, algorithmic bias, and cyber threats.

An AI governance policy is needed in Nigeria to establish ethical guidelines that prevent discrimination, bias and privacy violations, and to protect citizens’ data in hospitals and financial institutions. It is also needed to spur economic growth: by regulating AI development while encouraging innovation, Nigeria can attract investment and foster technological advancement.

It is needed, as well, for international collaboration: aligning with global AI governance standards will enhance Nigeria’s participation in the international AI ecosystem.

We should start thinking about legislative frameworks that enact laws defining AI usage, ethical standards and compliance requirements; establish a national AI regulatory authority to oversee implementation and enforcement; foster collaboration with tech companies, research institutions and civil society organizations; and invest in AI education, training programs and research initiatives.

Dr. Kelechi Okoronkwo is of the Department of Digital Media and Global Communication, University of Niagara Falls, Canada
