It has been six years since Professor Stephen Hawking, global superstar of physics, died in 2018, and even he knew the dangers AI could pose.
Long before ChatGPT and automation systems became the daily norm for most of us, it was a lot harder to delegate and complete tasks.
But is it all it's cracked up to be?
Many people reckon the rise of AI is going to turn out like a scene from I, Robot, but what will it mean for the workplace?
Hawking was way ahead of us with his prediction, telling the BBC back in 2014 that ‘the development of full artificial intelligence could spell the end of the human race’.
That’s pretty dark, Stephen.
But, he’s got a point.
At the time of his interview, AI was still in its infancy, yet Hawking could already see the power it might one day wield, especially its potential to surpass human intelligence.
He continued: "It would take off on its own, and re-design itself at an ever-increasing rate.
"Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."
In 2015, he also joined SpaceX founder Elon Musk and other experts in signing an open letter warning of the dangers of unchecked AI development and calling for stringent rules on the technology.
Then, in 2017, a year before his death, he warned mankind about the dangers of allowing AI to grow unchecked during an interview with Wired magazine.
He said: “I fear AI may replace humans altogether."
Let’s be honest, if a man as smart as Hawking is telling us that AI could bring about the end of times, we should probably listen to him.
In his book Brief Answers to the Big Questions, published a few months after his death, he even suggested that we could end up as dumb as rocks compared to machines if we don’t stay sharp.
He wrote: "We may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.
"It’s tempting to dismiss the notion of highly intelligent machines as mere science fiction, but this would be a mistake - and potentially our worst mistake ever."
He’s not exactly wrong, though.
If we look at the state of the digital era we are currently living in, AI has exploded at a rapid pace.
Not only can we now use tools like OpenAI’s ChatGPT to create full-length educational articles, news pieces and film scripts, the technology is also continuously learning from the information we feed it.
What will it be like in 20 years after being fed a wealth of knowledge and data?
OpenAI is also developing a new text-to-video tool called Sora, and in 2023 another open letter pushed for a six-month pause on the development of the most powerful AI systems.
I’m not saying it’s going to happen, but if the end of times is brought about by a group of crazy smart machines thanks to AI development, I’m out.
Topics: Technology, Artificial Intelligence, Elon Musk, Stephen Hawking