Fear not extinction at the hands of artificial general intelligence


I recently participated in a panel with a rather bombastic title: “Is Singularity Here? AI vs. Human: What will be the new reality?” Elon Musk has been musing on the same Singularity, saying that “the advent of artificial general intelligence is called the Singularity because it is so hard to predict what will happen after that.” He also speculated that it could usher in “an age of abundance,” but that there was “some chance” that it “destroys humanity.” Geoffrey Hinton, an acknowledged godfather of AI and someone not prone to hyperbole, recently confessed to CBS News: “Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less.”

With the advent of AI and the generative AI tidal wave, talking about the singularity and AGI (Artificial General Intelligence) seems to have become fashionable. The concept is much older, though, and was probably coined by the brilliant polymath John von Neumann. In the 1950s, von Neumann talked about how “the ever-accelerating progress of technology” could lead to “some essential singularity in the history of the race” (https://bit.ly/3Pja8UJ). In recent years, the person who has championed the singularity most, and become almost synonymous with it, is Ray Kurzweil, a futurist and computer scientist. Kurzweil, along with Peter Diamandis, set up Singularity University in the US, which advocates the concepts of abundance, exponential technologies like AI and, of course, the singularity. (Disclaimer: I have attended SU’s Executive Programme and now serve as Expert faculty for them.) In 2005, Kurzweil authored ‘The Singularity is Near’; now, perhaps encouraged by the generative AI tsunami, he is busy penning ‘The Singularity is Nearer’. For the record, in his earlier book Kurzweil stuck his neck out and proclaimed that the singularity would arrive in or around 2045. Let’s see how much nearer he thinks it is now.

But what is the singularity? Much like artificial intelligence, it has no single definition. It is widely believed to be the moment in time (2045, by Kurzweil’s estimation) when artificial intelligence will exceed human intelligence and thus become smarter than us. AGI, or Artificial General Intelligence, has a similar definition: the point at which an AI agent can accomplish any intellectual feat that humans can perform. Opinion is divided on whether we should consider AGI or the singularity to have arrived when an AI agent becomes smarter than the average human, the smartest human (Kurzweil?), or all humans put together. The word is borrowed from space science and the Big Bang theory, which postulates that nearly 14bn years ago the universe emerged from a singularity: a single point of infinite density and gravity, before which space and time did not exist. Interestingly, even Hinduism refers to something akin to a singularity, with some ancient Hindu texts speaking of all the universe and consciousness arising from a single point of origin: the primeval sound of Aum.

Irrespective of its origin and definition, what happens after the singularity is where opinion diverges most sharply. Optimists like Kurzweil, Sam Altman, and a majority of Big Tech leaders gush about how AGI will solve the world’s biggest challenges, from global warming to nuclear fusion, eliminate drudgery in jobs, and generally make the world a better place. On the other hand, more cynical voices like Yuval Harari, Elon Musk, and now Geoffrey Hinton worry about the uncontrolled race to the singularity, and the injustice, division, and destruction it could bring. It was therefore quite interesting that both the optimists and the pessimists got together to sign a single-line open letter released by the Center for AI Safety, which implored: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The letter had a legion of prominent signatories, including Altman, Hinton, and DeepMind founder Demis Hassabis, with Big Tech CXOs liberally represented.

So, will AI become superintelligent, and will it destroy us all? I do believe that we are hurtling towards some kind of alternative intelligence. Recent studies on large language models like GPT-4 have revealed what are called ‘emergent’ abilities: capabilities that were unexpected and went far beyond just ‘autocompleting text’. Microsoft researchers unveiled some of these astonishing skills of GPT-4 in a now-famous ‘Sparks of AGI’ paper (https://bit.ly/3Pj6hHn). To me, it is inevitable that we will create highly intelligent AI; what I am much less sure of is whether it will ever be sentient or conscious. We do not yet understand our own brains and consciousness (this is what philosophers call the ‘hard problem’), so how will we infuse a machine with it? As for a superintelligent AI destroying us all, I am equally skeptical. It is us humans I fear: we have a far greater ability, and a far higher likelihood, of destroying humankind than any AI ever will. Just as AI will not take our jobs but a human using AI might, AI will not annihilate us, but a human misusing AI can.

