Does AI pose existential threat to humanity? Here’s what experts told us
The evolution of Artificial Intelligence (AI) has sparked debate and stirred mixed feelings among professionals across the world.
Currently valued at $100 billion, the AI industry is expected to grow twentyfold by 2030, to nearly $2 trillion. Like the digital boom of the late 1990s, it is an industry that has captured widespread attention, evoking both excitement and panic.
When ChatGPT was launched by OpenAI, an American company, on November 30, 2022, it became the fastest-growing consumer software application in history, gaining over 100 million users by January 2023. In May 2023, when Google made its AI platform, Bard, available in over 180 countries and territories, it further heightened speculation about AI and the future.

At the heart of this swift adoption of AI is an underlying fear, rightly placed, of how the technology could become a weapon against its creators. An AI threat to the existence of mankind may sound bizarre and imaginary. It might perhaps spark memories of the widely watched action thriller, Terminator 2: Judgment Day, which portrayed how Skynet, an AI, suddenly became self-aware. An attempt by humans to deactivate the network sparked fear in the now-conscious machine, leading it to launch an all-out nuclear attack on Russia in order to provoke a nuclear counterstrike against the United States, knowing this would eliminate its human enemies.
In May 2023, tech leaders at top AI firms, including OpenAI, Google DeepMind, and Anthropic, signed an open statement warning of the potentially catastrophic consequences of an AI-powered future.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” the statement read in part.
Sam Altman, the CEO of OpenAI; Demis Hassabis, the CEO of Google DeepMind; Geoffrey Hinton, an Emeritus Professor of Computer Science at the University of Toronto; Bill Gates, one of the world’s richest men; Ilya Sutskever, Co-Founder and Chief Scientist of OpenAI; and Shane Legg, the Chief AGI Scientist and Co-Founder of Google DeepMind, are some of the notable experts who signed the letter, indicating that if AI is mismanaged, its possible impact on humanity could be likened to a pandemic or a nuclear war.
While the world is already taking action on the potential threats of AI to humanity, it appears Nigeria and other African nations are yet to pay full attention. However, some African tech leaders are already lending their voices to the debate and offering advice on how AI may affect the continent.
Ojo Ademola, a UK-based Professor of Cybersecurity and a redoubtable Information Technology expert, told Neusroom that the existential threat posed by AI tools cannot simply be wished away.
“The problem with the threat AI poses to humanity is that it has not been fully conceptualized. While AI can be used to create opportunities and jobs and perhaps solve employment issues, what happens when AI acquires general knowledge like humans and begins to make decisions from the vast data available to it?” Ademola said.
AI, trained on large volumes of data, is intelligence demonstrated by computers, as opposed to human or animal intelligence. But ever since the field of AI was born at a workshop at Dartmouth College in 1956, its breadth of study has grown, from systems that recognise, interpret, process, or simulate human feelings such as emotion and mood, to general intelligence aimed at solving a wide variety of problems with versatility similar to human intelligence.
Ademola, who is Nigeria’s first Professor of Cybersecurity and Information Technology Management, obtained his Ph.D. in Cyber Security from Atlantic International University and has been in the field of general management, cyber security, information and communication technology (ICT), and management since 1990.
“If AI becomes perfectly developed, acquiring general intelligence by being able to surf the internet and have access to big data, it can begin to act in ways even the creators cannot understand,” he warned. “The threat AI poses is not just in taking jobs; in fact, AI can be used to create more jobs than it replaces. The threat lies in some of the unethical uses AI can be put to when it falls into the hands of certain individuals.”
Corroborating Ademola’s position on AI’s potential to create jobs, Napa Onwusah, the first female leader of the Startups Segment for Amazon Web Services, told Neusroom that those who fail to upskill are most at risk of losing their jobs to AI.
“I strongly believe that AI will create new jobs, and those who fail to make the best of the systems are likely to lose their jobs to those who do,” she said. “So, rather than worrying about your job being replaced by AI, I think now is the best time to start upskilling and mastering how to infuse AI into your work. While new skills like Prompt Engineering, a technique used in artificial intelligence (AI) to optimise and fine-tune language models for particular tasks and desired outputs, are emerging, you can only get the best out of AI if you have studied how to ‘influence’ or prompt it properly.”
With over two decades as an active player in the tech ecosystem, Napa Onwusah was the Head of Sales for Google Africa and worked at several other blue-chip companies, including Microsoft, Visa, Nokia, SAP, and Cisco, before joining AWS.
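To illustrate what prompt engineering means in practice, here is a minimal sketch in Python. The function name and template structure are illustrative assumptions, not a standard, and the actual call to a language model is omitted; the point is simply that a prompt spelling out a role, the task, and the expected output tends to steer a model better than a vague one-liner.

```python
def build_prompt(task, role=None, output_format=None, examples=None):
    """Assemble a structured prompt from optional components."""
    parts = []
    if role:
        parts.append(f"You are {role}.")          # set the model's role
    parts.append(f"Task: {task}")                 # state the task plainly
    if examples:
        parts.append("Examples:")                 # few-shot examples, if any
        parts.extend(f"- {ex}" for ex in examples)
    if output_format:
        parts.append(f"Respond only with {output_format}.")  # constrain output
    return "\n".join(parts)

# A vague prompt leaves the model guessing:
naive = "summarise this report"

# An engineered prompt constrains role, scope, and format:
engineered = build_prompt(
    task="Summarise the attached fraud report in three bullet points.",
    role="a financial analyst writing for bank executives",
    output_format="three plain-English bullet points",
)
print(engineered)
```

Either string would then be sent to a model such as ChatGPT or Bard; the structured version typically yields a response closer to what the user actually wanted.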
While some activists are already discussing robot rights, since machines are now being developed to have a ‘mind and subjective experience,’ AI is already posing a considerable threat to humans.
“The first law of Robotics is that a robot shall not harm a human or, by inaction, allow a human to come to harm. But in the hands of certain humans, AI can be used to inflict harm on humans,” Prof. Ademola said. He continued, “We have seen this in fintech where people use all sorts of AI tools to gain access to bank accounts of individuals and financial institutions.”
In the first quarter of 2023, a report on Fraud and Forgeries in the Nigerian Banking System showed that fraudulent activities resulted in a loss of N472 million.
“But the AI threat extends across other sectors, from the military to agriculture, to healthcare. For instance, what if an AI used to analyze blood samples decides to tamper with the samples, thereby generating harmful results for humans?” he said.
When asked how developing nations like Nigeria can formulate policies to mitigate the existential threats of AI, Prof. Ademola noted that measures can only be fully implemented when one understands the basics of AI.
“How can you make policies on things you don’t produce? Nigeria is not a producing nation but rather a consuming nation. Before we can begin to enact laws that will help mitigate the threats posed by AI, we need to understand how it works, ” he said.
In 2022, a total of 37 AI-related laws were passed across various nations. The US passed nine, followed by Spain with five. Meanwhile, funding for projects related to the development and enhancement of AI has increased sharply: the US government allocated $1.7 billion to AI in 2022, a 13% increase from the previous year. To put this into perspective, the US Department of Defense, in its non-classified AI budget, requested $1.1 billion in 2022.
However, just as developing nations are being asked to reduce their use of fossil fuels and commit to renewable energy, a move that could hinder the growth of some African countries, Prof. Ademola maintained that investing in AI technology should be a priority for the Nigerian government.
“First, the Nigerian government should invest in AI and teach programming languages. We can integrate that into our school curriculum. Who says we cannot have programming hubs in every Local Government Area in Nigeria? In the last general election, there were about 7,000 polling units. We can create hubs where we train young people in programming and then start partnering with bilateral companies to build AI tools before we can begin to concern ourselves with regulations,” he said.
But Napa Onwusah believes that there are already notable investments in Nigeria’s tech space.
“Nigerian companies, especially in the tech space, are already applying these AI systems,” she said. “However, before more people are equipped to develop AI systems, I suggest the government should implement legislation to ensure these systems are not used to enable criminal platforms and behaviors. Some industry leaders, like Mo Gawdat, ex-chief business officer for Google X, have advocated the introduction of higher taxes for companies using AI systems. This way, we can generally curb how fast the systems are developed and deployed.”