We Mustn’t Let Ethical Questions on AI Get Lost in All the Euphoria

The Colorado State Art Fair in 2022 was abuzz with unhappy, angry artists. One lamented, “Imagine spending hours upon hours on a piece of work to proudly present it for a competition and being beaten by somebody who pressed ‘generate’ on a screen.” Another ranted: “This thing wants our jobs, it’s actively anti-artist.” This ‘thing’ was Midjourney, one of the plethora of Generative AI products dropping like confetti on us hapless humans. Artist Jason Allen used Midjourney to create a painting entered as ‘Jason M. Allen via Midjourney’, and it won the prestigious Blue Ribbon award for emerging artists. Allen was unrepentant: “This isn’t going to stop. Art is dead, dude. It’s over. AI won. Humans lost.” While generative models like Midjourney, DALL-E 2 and GPT-3 created ripples in the tech world, the launch of ChatGPT a mere two months back has taken the world by storm. It raced to 100 million users in two months; Twitter took five years, and even the World Wide Web took seven. There is an enormous level of nervous excitement over how it could change everything: replace search, reshape Big Tech, accelerate Artificial General Intelligence. Lost somewhere in the middle of this adulatory cacophony are the very dangerous ethical issues surrounding ChatGPT in particular, and Generative AI in general. Ethics is a complex and complicated subject, too vast for a short column, but let me address three big issues needing our immediate attention.

Generative AI models are terrible for the environment: Our planet is already on the edge with climate change and global warming, and these models could only accelerate that trend. The cloud, and the AI models running on it, consume huge amounts of energy. For instance, training a transformer model just once with 213 million parameters can spew out CO2 equivalent to that of 125 New York–Beijing round-trip flights; and GPT-3 has 175 billion parameters! Most of these models live on the cloud – ChatGPT runs on Microsoft Azure, for instance. This ‘cloud’ is nothing but hundreds of data centres around our planet, guzzling water and power in alarming quantities. An article in The Guardian revealed that data centres currently consume 200 terawatt-hours per year, roughly equal to South Africa’s consumption; by 2030 this could equal Japan’s. The cloud is no fluffy cotton-wool thing; as author Kate Crawford writes, “The cloud is made of rocks and lithium brine and crude oil.”

Generative AI models plagiarise: Getty Images is suing Stability AI, the maker of Stable Diffusion, in the London High Court, accusing it of illegally using its images without permission. A clutch of artists is doing the same in the US. If you got Stable Diffusion or DALL-E 2 to comb the web and combine multiple images (say, a Pablo Picasso Mona Lisa), who owns the result – you, the AI model, or Picasso and da Vinci, whose original pictures and compositions were mashed together? OpenAI claims ownership of all DALL-E created images, and paid users can reproduce, sell and merchandise the images created. This is a legal quagmire: the US Copyright Office refused to grant a copyright to a composition created by an AI model, Creativity Machine, while both Australia and South Africa have declared that an AI can be considered an inventor. Then there is the associated fear of AI models taking your job. A ‘generate’ button could theoretically replace artists, photographers and graphic designers. The model is not really creating art; it is crunching and manipulating data, with no idea or sense of what it is doing or why. But if it can do so well enough, cheaply, and at scale, customers will shrug their shoulders and use it.

The models are inherently biased: OpenAI has built ethical guard rails around ChatGPT, and the model does not spout racist or sexist content. But as Gary Marcus writes, these guard rails are thin, the model is amoral, and we are sitting on an ethical time bomb. Currently, ChatGPT does not crawl the web, but later versions (like the Bing integration) will, and the whole swamp that is the Internet will be open to it. AI researcher Timnit Gebru was with Google when she co-wrote a seminal research paper calling these LLMs ‘stochastic parrots’, because, like parrots, they just repeat words without really understanding their meaning and implications. ChatGPT does not really understand what it says, and it is designed to be plausible rather than truthful. GPT-3 has been trained on Reddit and Wikipedia: 67% of Reddit users in the US are men, and less than 15% of Wikipedians are women. DALL-E 2 tilts towards creating images of white men, and sometimes oversexualises images of women. This is because it is trained on the massive trove of open-source images on the Internet, and the imagery and language on the Internet are still overwhelmingly Western, male, and sexist (for example, men being called doctors, and women being referred to as ‘women doctors’).

As these models become increasingly powerful, these ethical issues will become even more dangerous. Early signs are not encouraging: Timnit Gebru, after publishing her prescient paper on these dangers, was summarily fired from Google.
