AI needs a globally co-ordinated effort for its effective regulation
Jaspreet Bindra

Last week, I was delivering a keynote on Generative AI at the Leadership Retreat of a premier industry organisation when its current President raised a question: "We all talk about regulating AI. But a technology like this, can it even be regulated?" I have been asked this a few times before. In fact, when I started my Master's in AI and Ethics at Cambridge University in September 2021, ChatGPT was not yet out, but even then we had the feeling that the horse had already left the barn, and that ethicists and regulators were chasing a fast-moving target that was a speck on the horizon. With Generative AI and ChatGPT moving so much faster, we sometimes cannot even see the horse.
So while we all have noble intentions, are there ways we can regulate AI? I have presented some options earlier in this column (https://bit.ly/3LwbNn9). There are five possible ways being discussed. The first is licensing, which came straight from the horse's mouth, in this case OpenAI CEO Sam Altman, who suggested that AI companies would need some kind of license to operate and that all such licensed bodies would be regulated. The second is use-case-led regulation, much like the FDA, which regulates new drugs at the point of use. The third is a CERN-like approach, where countries come together and work cooperatively on AI, much as the Higgs boson was discovered at the CERN accelerator. Given today's fractured geopolitics, though, this might be flogging a dead horse. A fourth suggestion is an 'isolated island' approach, where AI research on superintelligence happens in an isolated, 'air-gapped' manner before it is released into the wild, somewhat like how new aircraft are built and approved. The fifth, suggested by Altman again (and incidentally something I wrote a paper on at Cambridge), is akin to regulating nuclear energy through the International Atomic Energy Agency (IAEA) and the Non-Proliferation Treaty (NPT).
None of these is perfect, and all would require countries and companies to come together. In the meantime, the major players in the AI race are deploying a horses-for-courses approach. The EU is predictably the most stringent, adopting an approach similar to the much-vaunted GDPR, or General Data Protection Regulation, in its AI Act, under which the creators of LLMs would be responsible and liable for how their technology is used, even if another party, after licensing or buying it, uses it for something else entirely. This has drawn howls of protest, with OpenAI's Altman threatening to withdraw from the EU and Google's Kent Walker saying: "You wouldn't expect the maker of a typewriter to be responsible for something libelous."
The US approach has been more laissez-faire, with the industry asked to regulate itself, along with some not-so-gentle prodding from the White House. President Biden has brought together major players like OpenAI, Microsoft, Google and Meta and had them sign a set of 'voluntary commitments', which include internal and external testing of AI systems before release, clearly labelling AI-generated content, and a promise of greater transparency about the capabilities and limitations of their models. The US is very reluctant to stifle innovation in its world-leading AI capabilities, careful not to look a gift horse in the mouth, as it were.
China, on the other hand, has been the earliest and strictest to regulate AI. Its rules have been very targeted and specific, with the intention of controlling the flow of information, especially information it does not like, requiring products to adhere to "core values of socialism". Companies have to submit their models for approval before they are released, as we have seen in the recent cases of Tencent and Baidu. In many ways, the Great Wall is being built around Generative AI too, and much like the splintered internet, we will see a splintered AI world as well.

The UK has proclaimed a 'pro-innovation' approach, regulating usage rather than the technology itself. It is also striving to take a leadership position with its Global AI Summit in London in October, getting countries together to hammer out a global framework. This seems to be the approach that countries like India favour for global technologies like crypto or AI, where a global approach would be more efficacious than individual country regulation. A technology like AI crosses borders effortlessly and can be created by a developer with a laptop sitting in a Croatian basement. Much as it did with nuclear power, the world needs to come together to harness and regulate a technology as existential as AI. The state of the world, however, does not lend itself to much optimism on this front. After all, you can lead a horse to water, but you can't make it drink.