The regulation of AI is too important to be left only to technologists
Jaspreet Bindra

Who should regulate AI? It is a question we will ask often in 2024. With the stunning debut of ChatGPT, 2023 was the year when AI became a buzzword and LLMs took over the discourse around us. With AI deepfakes threatening major democratic elections, with the fear of AI-powered autonomous drones and other weapons in the wars humanity is fighting, and with corporations ready to lay off workers as their jobs get ‘replaced’ by AI, the debate on AI ethics and safety will become even more heated in 2024. Every major country and global grouping has jumped into the race to police AI, yet the regulatory air is still murky with proposals, principles and guidelines. Although almost everyone seems to agree that this shape-shifting technology needs to be controlled and guardrailed, it is not clear whose approaches and guidelines will be accepted. Who decides that? More importantly, who decides who decides?
In my opinion, the debate goes beyond AI safety or ethics. It is much more fundamental than that: it is about what kind of future we want for humanity. Technologies have come and gone, but this one is fundamentally different. Gartner aptly says that AI is not a mere technology or trend but a fundamental shift in how humans and machines will interact with each other. A few years back, in those halcyon pre-GPT days, I mused on the parable of the frog in boiling water, and whether technology was that slowly boiling water which eventually incapacitates the frog. We forgot our dear ones’ phone numbers with mobile phones; we have forgotten directions with Google Maps; we are forgetting what libraries look like with search engines and chatbots; most children today do not know that vegetables actually grow somewhere other than a supermarket; soon, autonomous cars will make us forget how to drive. Will we then turn into ineffectual vegetables, spending our time playing video games and ‘consuming content’, while AI robots and algorithms do all our work for us? Is that the future we are creating?
The debate, therefore, is not just about regulating AI; it is about the future we want for ourselves. Consider the recent scriptwriters’ strike in Hollywood, where writers feared the loss of their jobs to AI. But as Jamie Susskind writes in the Financial Times (https://bit.ly/48m8DvN): “Is the point of cinematic art to provide a living for people in the film industry? Or is it to provide stimulation and joy for consumers?” Would we want humans to keep producing films just to keep them employed? “A similar debate,” he says, “is playing out in the world of literature.” While Margaret Atwood and Stephen King are worried that their works are being used to train AI systems, would it not be wonderful for AI to write like them after they are long gone, for Beethoven to continue producing wonderful music, or for Rabindranath Tagore to keep creating his masterpieces even though he is no longer here? When Snoop Dogg and Dr. Dre produced a hologram of Tupac Shakur in 2012, fifteen years after his death, did it not thrill the fans who wanted him back? Must all creativity and expression come only from human beings, if AI can create something much better?
This debate came home to me yet again when I witnessed the recent revolution in gemstones, where cheaper lab-made diamonds, identical to mined stones, are upsetting the industry. Is it a good thing that everyone can now buy these beautiful lab-made gems, or a bad thing that we are replacing something nature perfected over millions of years of furnace-like temperatures and crushing pressures? As Susskind writes further: “Is art’s purpose merely to venerate and compensate artists, or to provoke aesthetic stimulation and cultural advance? These aren’t easy questions. And they can’t simply be answered by reference to ‘safety’ either. These debates are about values. They ask us to choose, in Amos Oz’s words, between ‘right and right’. Regulating technology is about safety, but it is also about the kind of civilisation we wish to create for ourselves. We can’t leave these big moral questions for AI companies (like OpenAI) to decide.”
Or even for governments and regulators to decide. We need people who think and work beyond just the technology of AI: historians and philosophers, humanists and naturalists, sociologists and musicians. People who work solely on AI and live and breathe the technology are exactly the wrong people to decide the future we want, since all they know is AI. To paraphrase the writer C.L.R. James in his immortal ‘Beyond a Boundary’: “What do they know of AI, who only AI know?”