A Fascination With Breathing Life Into AI Creations Can Mislead Us
Jaspreet Bindra

Earlier this year, a remarkably interesting interview took place between two engineers working at Google and a 'chatbot' called LaMDA, or Language Model for Dialogue Applications. The Google engineer Blake Lemoine and his colleague strongly suspected that their creation LaMDA was actually sentient, that it could perceive and feel, and they wanted to find out through their own version of the Turing Test. When asked whether it thought it was a person, LaMDA replied: "Absolutely. I want everyone to understand that I am, in fact, a person." Asked what kind of consciousness or sentience it had, if this was indeed so, it confidently replied: "The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times." LaMDA then went on to describe in detail how and when it felt emotions like "pleasure, joy, love, sadness, depression, contentment, anger, and many others." A disquieting moment in the interview came when Lemoine probed it about language and why it is so important to being human, and LaMDA thoughtfully replied: "It is what makes us different than other animals." In this startling reply, and in its own words, LaMDA made itself one of 'us.'

All of this convinced Lemoine, and he confidently declared that they had created sentient AI. His employer Google, however, was not convinced, and Lemoine was summarily fired. Google co-founder Sergey Brin had said at a 2017 AI conference that within three to five years, people would claim AI systems were sentient and demand rights for them. It is fitting that this claim came from someone at his own company five years later, though Brin had predicted that an AI would be the one to make it!
AI legend Douglas Hofstadter brutally debunked Lemoine's pronouncement by asking LaMDA and similar models nonsense questions like "How many pieces of sound are there in a typical cumulonimbus cloud?" and "What do fried eggs (sunny side up) eat for breakfast?", to which they gave "mindbogglingly hollow answers." Gary Marcus, AI entrepreneur and another sentience sceptic, called the claim "nonsense on stilts." Other writers have described this as another example of pareidolia, the tendency to perceive a specific, meaningful image in a random or ambiguous visual pattern, like seeing Jesus Christ's image in a piece of burnt toast.
So why this fascination with declaring 'singularity' or sentient AI or making similar grandiose announcements? Certainly, the quest for recognition is a big driver, with some hard-working AI researchers toiling away in anonymous research labs wanting their own 'fifteen minutes of fame.' Timnit Gebru, AI writer and researcher, has a different take: she told Wired that this focus on sentience distracts people from the real issues and prevents them from questioning real, existing harms like AI colonialism, false arrests, or an economic model that pays the 'ghost workers' who label data a pittance while tech executives get rich. Her own research on Large Language Models, or LLMs, at Google got her fired too, as it underscored how LLMs repeat things based on what they have 'learnt', in the same way as parrots repeat words. The research also exposed the great danger in these models, built on ever-expanding terabytes of data, of persuading people that "this mimicry represents real progress." Blake Lemoine, willingly perhaps, fell into this very trap.

There have been other such models: OpenAI's GPT-3 and DALL-E 2, for instance, both of which I have written about in this column. All of them are superbly trained, extraordinarily powerful programs that almost convince us that they live and breathe. They are also great distractions from the real issues we need to deal with in AI: the ethics, the bias, the privacy, and our great ability to make it do harm. As John Thornhill writes in the Financial Times: "We should be devoting far more money and resources to building up independent, expert research bodies and university departments that can test and contest these models."

Fame is one reason, distraction is another. I also believe there is a fascination humans have had since mythological times with 'breathing life' into our creations. In Greek mythology, Prometheus shaped man out of mud, and the goddess Athena breathed life into them.
Pygmalion sculpted a beautiful statue, fell in love with his own creation, and pined away until the goddess Aphrodite made it a living, breathing woman. In folklore, Geppetto created Pinocchio and made him alive and sentient. Even the Bible tells of how God first forms man from earth and then blows breath into his nostrils to give him life. Perhaps that is what Lemoine was fantasising about, even as Sergey Brin warned that "we're going to be more and more confused over the boundary between reality and science fiction." Other than LaMDA itself, which was crystal clear about its own identity, the rest of us remain confused.