Technology alone is not enough to combat deepfakes; we need a concerted effort
Jaspreet Bindra

We know that a technology concern has become really serious when the leader of the most populous country in the world complains publicly about it. Narendra Modi, the Prime Minister of India, declared in a recent speech that he himself had become the victim of a 'deepfake' that showed him participating in a folk dance, which he had not. He went on to say that he had personally spoken to ChatGPT maker OpenAI about it. A few days earlier, the Indian IT Minister had warned AI and social media companies to strictly weed out deepfakes. Earlier that week, three of India's most popular film actors had suffered from a deepfake scam.
While we seem to have seen a spate of public and prominent deepfakes recently, the phenomenon itself is not new. There is the infamous case of the 'Queen's Speech', in which a Christmas broadcast featured Queen Elizabeth seemingly delivering a message, only for it to be revealed as a deepfake designed to highlight the potential dangers of this technology. Another famous example is a manipulated video of the Speaker of the US House, Nancy Pelosi, which gave the impression that she was intoxicated during an official speech. A more recent one was of a debonair-looking Pope in a white Balenciaga puffer jacket.
The world's first 'certified' deepfake was probably of an AI professional, Nina Schick, delivering a warning about how 'the lines between real and fiction are becoming blurred'. The term 'deepfake', a portmanteau of 'deep learning' and 'fake', was coined on Reddit in 2017. The same (now deleted) Reddit thread morphed the faces of actors like Gal Gadot and Taylor Swift onto porn stars, and deepfake pornography was born. It is now estimated that approximately 95% of all deepfakes are pornographic, causing distress to an untold number of women.
The technology behind deepfakes is called GANs, or 'Generative Adversarial Networks', invented by a very famous, but in this case rather inaptly named, AI scientist: Ian Goodfellow. GANs, as the name suggests, are twin AI agents: one forges an image, the other tries to detect the forgery. If the second, 'adversarial' agent identifies the forgery, the forger AI adapts and improves, and the process continues ad infinitum to build ever more sophisticated deepfakes. Initially, deepfakes were a fun thing, a way to flaunt the prowess of AI to your friends. However, they took a dark turn: from manipulated speeches of political leaders designed to cause unrest, to revenge porn posted by jilted boyfriends. There is now fear that the 2024 elections in India, the US and the UK, among other countries, could be undermined using deepfakes, and democracy itself could be subverted.
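For the technically curious, that adversarial tug-of-war can be sketched in a few lines of Python. This is a deliberately toy, one-dimensional 'GAN' that learns to mimic a bell curve of numbers rather than faces; every model, number and name here is chosen purely for illustration, not drawn from any real deepfake system.

```python
import numpy as np

# Toy GAN: the generator g(z) = a*z + b tries to mimic samples from a
# normal distribution centred at 3.0; the discriminator d(x) = sigmoid(w*x + c)
# tries to tell real samples from generated ones.
rng = np.random.default_rng(0)
a, b = 1.0, 0.0   # generator parameters
w, c = 0.0, 0.0   # discriminator parameters

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, batch = 0.05, 64
for step in range(2000):
    real = rng.normal(3.0, 0.5, batch)      # "authentic" data
    z = rng.normal(0.0, 1.0, batch)         # random noise
    fake = a * z + b                        # the forger's attempt

    # Discriminator step: get better at spotting fakes.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * np.mean(-(1 - d_real) * real + d_fake * fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: adapt to fool the (improved) discriminator.
    d_fake = sigmoid(w * fake + c)
    grad_fake = -(1 - d_fake) * w           # non-saturating generator loss
    a -= lr * np.mean(grad_fake * z)
    b -= lr * np.mean(grad_fake)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean after training: {samples.mean():.2f} (real data mean: 3.0)")
```

Each round, the detector improves, then the forger improves against it: exactly the endless loop described above, just on numbers instead of video frames.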
While deepfakes predate Generative AI, this technology has put the creation and spread of deepfakes on steroids. Combined with increasingly sophisticated deepfake production software, it has made detecting and stopping them very difficult. Deep learning algorithms are very good at analyzing thousands of facial expressions and body movements, making the fakes incredibly realistic. Deepfakes can sometimes be detected through visual and auditory irregularities, and there are also AI tools created by companies like Deeptrace to identify them. However, much like the virus-antivirus scenario, this is an arms race: experts design a deepfake detector, someone makes a better deepfake to evade it, a better detector follows, and an even more evasive deepfake is created. It is a battle of AI against AI, and the arms race continues forever. Blockchain-based solutions are another option, with the power to establish provenance and therefore trace deepfakes back to their origin. Big Tech companies and innovative startups are racing to develop digital watermarks and classifiers. AI and social media companies across the world are under pressure to ramp up their efforts to weed deepfakes out.
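The provenance idea is simple at its core: record a cryptographic fingerprint of a piece of media when it is published, and check any later copy against that fingerprint. Here is a minimal sketch in Python; the in-memory dictionary stands in for a real tamper-evident ledger, and the media IDs and content are invented for illustration.

```python
import hashlib

# A toy provenance registry: media_id -> SHA-256 fingerprint of the bytes.
# In a real system this record would live on a tamper-evident ledger
# (e.g. a blockchain), not in a Python dict.
registry = {}

def register(media_id: str, content: bytes) -> str:
    """Record the fingerprint of a piece of media at publication time."""
    digest = hashlib.sha256(content).hexdigest()
    registry[media_id] = digest
    return digest

def verify(media_id: str, content: bytes) -> bool:
    """Return True only if the bytes match what was originally registered."""
    return registry.get(media_id) == hashlib.sha256(content).hexdigest()

original = b"official broadcast footage (illustrative placeholder bytes)"
register("speech-001", original)

print(verify("speech-001", original))                       # unaltered copy
print(verify("speech-001", original + b"[edited frame]"))   # manipulated copy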
But technology by itself will not solve the problem. To win this war, we need two other weapons: regulation and education. We need stringent regulations and laws at the global and national levels to combat this. We also need awareness and education at the societal and school levels; we must become savvy consumers of media, questioning the authenticity of suspicious content. In certain countries, children are formally taught in school to distinguish between real and fake content; perhaps we need that too.
One of my more horrific memories is of reading about innocent women who were disfigured for life by jilted spouses or boyfriends pouring acid on their faces, in an effort to shame them and ruin their social lives. Many women fought this repulsive practice bravely, but many others cut themselves off from society or even tried to kill themselves. Deepfakes, in a sense, are something like that: an online acid attack, meant to take cheap revenge and 'dishonour' a person. Acid attacks came to be treated as a severe criminal offence, and society turned against the perpetrators. As our online and offline lives merge into each other, we need something similar to combat the perverted people who attempt the same thing with deepfakes.