Cutting-edge AI technology may be ‘conscious’, according to a leading philosophy of mind expert. New York University’s Professor David Chalmers made the bombshell suggestion while discussing the highly controversial Generative Pre-trained Transformer 3 (GPT-3) – OpenAI’s powerful new language generator, capable of creating content more convincingly than any system before it.
The technology can answer questions, write essays, translate languages and even create computer code with almost no human input.
GPT-3 has been hailed as the largest artificial neural network ever created.
But now, there are even suggestions GPT-3 is showing rudimentary signs of consciousness.
Professor Chalmers said: “I am open to the idea that a worm with 302 neurons is conscious, so I am open to the idea that GPT-3 with 175 billion parameters is conscious too.”
GPT-3’s almost incomprehensible power stems from its training on approximately 45 terabytes of text data.
To put this into context, the entirety of Wikipedia accounts for just 0.6 percent of GPT-3’s entire data set.
GPT-3 was consequently trained on vastly more words than a human encounters over an entire lifetime.
Adding to the intrigue, the AI model itself has written a rebuttal to reports that GPT-3 has attained consciousness.
The rapidly evolving artificial intelligence revolution is thought capable of transforming our world more than the agricultural, industrial and computer revolutions combined.
The development of Artificial General Intelligence (AGI) may even give rise to a higher form of electronic intelligence.
Sam Altman, the chief executive of OpenAI, is among the AI optimists who believe the technology will prove key to addressing the world’s most complex challenges, such as climate change and pandemics.
He said: “I think it’s going to be an incredibly powerful future.”
However, a more dystopian future could also present itself, with AI only multiplying the problems faced today.
Oxford University philosopher Nick Bostrom suggests runaway AI could one day pose an existential threat to humanity.
He said: “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb.”
Such dire warnings have also attracted the attention of Elon Musk, an original co-founder of OpenAI.
Mr Musk tweeted last year: “We need to be super careful with AI … potentially more dangerous than nukes.”