Not so long ago, science fiction movies were filled with weird and wonderful (and pretty impossible) scenarios and exciting plot lines. It was thrilling, entertaining, and while all of humankind was almost always in grave danger, we never believed it would happen to us because, well… it was fiction, it wasn’t real… or so we thought. Today, when the credits roll, we take a little longer to mull over the what-ifs and ponder the increasingly blurred line between science fiction and science fact.
The story we’re pondering today is called ‘The Singularity’. It’s essentially the idea that artificial intelligence (AI) will enter a ‘runaway reaction’ of self-improvement cycles – becoming smarter, faster, and triggering an intelligence explosion. How superhuman intelligence will impact the human race, however, is still debatable. On the one hand, futurists like Ray Kurzweil say that AI will enhance our humanity and produce the kinds of necessary step-change solutions that give mankind a boost. On the other hand, sceptics like Elon Musk believe that AI is “far more dangerous than nukes – our greatest existential threat”, with the potential to bring our species to its knees.
So, which storyline should we believe? Is AI the evil antagonist to “end the human era”, as scientist and science fiction writer Vernor Vinge forewarns? Or will superintelligence be the hero of the story?
What if the answer could be both? The unstoppable, irreversible power of AI is terrifying by nature, agreed. But, as the University of New South Wales AI expert Professor Toby Walsh points out, the future isn’t fixed. We can determine the direction of AI if we believe we have a say in it. Standing ankle-deep in this Fourth Industrial Revolution, we are still crafting the technologies and algorithms that open new avenues of possibility – but it’s not too late to be the gatekeepers.
The rise of superintelligence
The world today is plagued by problems of unscalable complexity, ranging from global warming to geopolitics to extreme market volatility. All these realities will profoundly impact the coming generations and redefine our role in shaping the future. But nothing will cause such tectonic change as the advent of superhuman intelligence on earth. Says the ‘father of AI’, Jürgen Schmidhuber: “It is much more than just another industrial revolution. It is something that transcends humankind and life itself.”
What’s more, it’s no longer a matter of if but when machines will surpass human cognition. Kurzweil hangs his hat on 2045, while Louis Rosenberg argues that a technological tipping point could come as early as 2030. Schmidhuber says we are 30 years off, but Walsh would push the date to 2062.
And yet, as Futurism’s Jolene Creighton asks, does it really matter all that much? At the end of the day, it’s a difference of a few decades we are talking about. The real issue is: what could this reality look like?
Oh, the things we can do
Assuming by 2030 the human brain will have been successfully reverse-engineered, enabling supercomputers to understand and simulate its functions, we will have laid the groundwork for this imminent intelligence explosion.
“Within a quarter century, nonbiological intelligence will match the range and subtlety of human intelligence,” Kurzweil adds. Even our everyday devices in 30 years’ time will have incomprehensible capabilities – “rather cheap computational devices that have as many connections as your brain but are much faster,” explains Schmidhuber.
He goes on to say, “And that’s just the beginning. Imagine a cheap little device that isn’t just smarter than humans – it can compute as much data as all human brains taken together!”
Kurzweil is not only looking forward to all the mind-blowing solutions this reality ushers in; he sees the upgrade opportunities for our humanity. By taking these supercomputers, embedding them in our brains and then connecting our brains to the cloud, we can tap into an unlimited library of knowledge and ability. It will unlock what Peter Diamandis calls a ‘Meta-Intelligence’ – the next evolutionary step towards a human-scale transformation.
As described by Fei-Fei Li, AI Researcher and Stanford University Professor: “When machines can see, doctors and nurses will have extra pairs of tireless eyes to help them to diagnose and take care of patients. Cars will run smarter and safer on the road. Robots, not just humans, will help us to brave the disaster zones to save the trapped and wounded. We will discover new species, better materials, and explore unseen frontiers with the help of the machines.”
Li explains that “For the first time, human eyes won’t be the only ones pondering and exploring our world. We will not only use the machines for their intelligence, we will also collaborate with them in ways that we cannot even imagine.”
Safeguarding the future with AI
Of course, let’s not forget that many smart people out there are equally freaked out by this prospect. Professor Nick Bostrom rightly asks: what if we unknowingly encode our first superintelligent entity with goals that are best achieved through our annihilation? What if a mis-programmed superintelligence could lead to, in his words, “a society of economic miracles and technological awesomeness, with nobody there to benefit… a Disneyland without children?”
Again, does the issue not lie with how much we believe we have a hand in it? Toby Walsh would urge us to think carefully about the tools we are creating and their potential impact on society.
Take warfare, for example. “Think about the implications of handing over the reins to decide who lives and who dies to machines,” says Walsh, who signed an open letter with more than 100 global tech leaders in 2017 calling on the United Nations to ban killer robots. He emphasises the importance of building algorithms that are cognisant of our unconscious biases when it comes to things like race, gender and sexual identity.
“We have to be careful not to bake these into algorithms and take ourselves backwards,” he says. Eventually he believes AI will deliver on good promises, provided humans stay close to the ground and thoughtfully architect the DNA of our future tools and technologies.
So, do we trust the optimistic judgement of one genius, Ray Kurzweil, or should we join the eminent ranks of the “very concerned”, including Bill Gates, Elon Musk and Stephen Hawking? But are these the only options? The answer may lie in the middle ground. We need a managed evolution – a considered caretaking of the technology – so that we can balance any challenges that arise while harnessing the power of AI. As Toby Walsh notes, “what happens next in AI is very much the product of the choices we make today.”