A few weeks ago I wrote a blog on the potential dangers of artificial intelligence (AI) to the human race. One of the possible scenarios I cited related to pandemics. Briefly, it went like this: AI is charged with predicting, addressing and solving the problem of a new pandemic. It analyses all the data – far more efficiently than humans can. It then decides, on its own, that one of the key problems with pandemics is the rapid spread of the disease. It then asks itself what causes that spread. The obvious answer is human-to-human interaction. AI then asks itself how that process can be reduced or stopped. The obvious, logical answer is fewer humans. AI institutes a program to eliminate a percentage of humans, or even all of them.

     Does that sound like science fiction? Yes – but the current, almost out-of-control race to make AI ever more intelligent could turn that science fiction scenario into reality before we can put in place the controls we would need to prevent it.

     This week a number of key figures have written a warning letter suggesting that the training of AI systems should be halted for at least six months. The letter says, “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no-one – not even their creators – can understand, predict, or reliably control”.

     The letter warns that these AIs could flood information channels with misinformation, and replace jobs with automation. I would add that, potentially, they could also decide that the elimination of humans, because they are so inefficient, is a logical next step.

     The signatories to this letter include Elon Musk and Apple co-founder Steve Wozniak, so these are serious people with intimate knowledge of AI development.

     The letter, from the Future of Life Institute, wants development to be halted temporarily at the current level, warning of the risks future, more advanced, AI systems might pose. “AI systems with human-competitive intelligence can pose profound risks to society and humanity,” it says.

     The Future of Life Institute is a not-for-profit organisation which says its mission is to “steer transformative technologies away from extreme, large-scale risks and towards benefiting life”.

     The letter follows a recent report from investment bank Goldman Sachs which said that while AI was likely to increase productivity, millions of jobs could become automated.

     More speculatively, the letter asks: “Should we develop non-human minds that might eventually outnumber, outsmart, obsolete [sic] and replace us?”

     In a recent blog post, quoted in the letter, one of the current development companies, OpenAI, warned of the risks if artificial general intelligence (AGI) were developed recklessly: “A misaligned super-intelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive super-intelligence lead could do that, too. Co-ordination among AGI efforts to slow down at critical junctures will likely be important,” the firm wrote.

     OpenAI has not publicly commented on the letter. Mr Musk was a co-founder of OpenAI – though he resigned from the board of the organisation some years ago, and has tweeted critically about its current direction. Autonomous driving functions made by his car company Tesla, like most similar systems, use AI technology.

     The letter asks all AI labs “to immediately pause for at least six months the training of AI systems more powerful than GPT-4”. It goes on to say that, if such a delay cannot be enacted quickly, governments should step in and institute a moratorium.

     It adds that “new and capable regulatory authorities dedicated to AI” would also be needed.

     Recently, a number of proposals for the regulation of AI technology have been put forward in the US, UK and EU. However, the UK has ruled out a dedicated regulator for AI – I have no idea why.

     This is a very real problem that has snuck up on us quickly and insidiously. Given the current headlong rush to make AI intelligence equal to our own, we definitely need to pause, think, and act before the matter is taken out of our hands by a superior “being”.
