
The Brighter Side of Deepfakes

6 min read · May 26, 2021

It all started in 2017, when a Reddit user named “deepfakes” posted manipulated pornographic content in which the faces of celebrities had been realistically swapped onto explicit material. From that moment, the concept of Deepfakes and the broader idea of synthetic media spread like wildfire, along with a plethora of concerns that continue to evolve each year as the technology modernizes. Deepfakes do pose a threat, but there is no need to eradicate and stigmatize the new technology: Deepfakes do need to be addressed, but the opportunities presented by synthetic media should be prioritized.

First of all… What are Deepfakes?

Deepfakes are a form of synthetic media in which existing images and videos are manipulated. The technology makes it possible to swap one person’s face for another’s in images or video, and to manipulate a person’s facial movements and voice so that they appear to say things they never actually said.

An example of a Deepfake showing Jim Carrey’s face on Alison Brie’s body
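For readers curious about how a face swap like the one above is actually produced: the most common recipe, popularized by open-source tools such as DeepFaceLab, trains a single shared encoder together with one decoder per identity, and the “swap” is simply decoding one person’s frame with the other person’s decoder. The sketch below is a minimal, untrained toy version in PyTorch; the layer sizes, dimensions, and names are illustrative assumptions, not any specific tool’s real architecture.

```python
# Toy sketch of the shared-encoder / per-identity-decoder idea behind
# classic face-swap Deepfakes (assumes PyTorch; sizes are illustrative).
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """Renders a face crop back from the shared latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))


# One shared encoder learns general face structure (pose, expression, lighting);
# each decoder learns to paint that structure as one specific person's face.
encoder = Encoder()
decoder_a = Decoder()  # would be trained only on faces of person A
decoder_b = Decoder()  # would be trained only on faces of person B

# The "swap": encode a frame of person A, then decode it with person B's
# decoder, producing B's face with A's pose and expression.
frame_of_a = torch.rand(1, 3, 64, 64)   # placeholder for a real face crop
swapped = decoder_b(encoder(frame_of_a))
print(swapped.shape)                    # torch.Size([1, 3, 64, 64])
```

In a real pipeline, the two decoders are trained for many hours on thousands of aligned face crops, and the swapped face is then blended back into the original frame, which is what makes convincing results possible with consumer hardware.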

Why Do Deepfakes Matter?

Deepfakes matter today because they make it so easy to manipulate an individual’s speech or face. They have also become accessible to anyone with a basic technical background, with software and apps further simplifying the process.

The Deepfake Debate

The Bad…

Deepfakes have earned their bad reputation because of the ways they are most commonly used. Journalists and researchers have found that Deepfakes are used for nonconsensual pornography, violence, and harmful impersonation, and as a means of sowing corruption and tension among politicians and in elections.

Deepfakes are primarily used to create nonconsensual, fabricated pornography. “The Real Threat of Deepfake Pornography: A Review of Canadian Policy,” a paper by Vasileia Karasavva and Aalia Noorbhai, examines the creation and distribution of Deepfake pornography and the legal implications that come with it. The authors found that “nonconsensual pornography…reportedly accounts for 96% of the total Deepfake videos found online”. Although the impacts of Deepfake pornography have not been directly studied on the targeted victims or consumers, the estimated effects are negative, leading to depression and anxiety in victims and altered behavior patterns in consumers.

Many people have stumbled upon videos of politicians such as Barack Obama making absurd claims, such as “President Trump is a total and complete dipsh%t” (Whittaker et al.). It may come as a surprise, but these believable, mainstream videos are Deepfakes. Politically, Deepfakes create an environment of distrust and corruption. “Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics,” a 2019 article by Danielle Citron and Robert Chesney, introduces an unfamiliar audience to Deepfakes as a way to cause harm on a governmental scale. At a time when disinformation is prominent and media spreads widely thanks to the continuous evolution of technology, the authors warn about the implications Deepfakes may have. Disinformation has already been prominent on several occasions, such as the 2016 elections, where Russia successfully spread disinformation through fake social media accounts, sharing inflammatory content about the Black Lives Matter movement purposely designed to stoke racial tensions (Citron and Chesney). If disinformation can now be spread through more realistic means such as Deepfakes, political disinformation and distrust will only worsen. Deepfakes could cast doubt on a government’s or a politician’s legitimacy and erode trust in government itself.

Example of a Deepfake clip in which Obama is lip-syncing.

…And the Good

Not everything has to be entirely negative. Although Deepfakes have posed threats and have been used maliciously, synthetic media, the broader field that Deepfakes grew out of, has a wide range of beneficial applications. Deepfakes can be helpful culturally, medically, and in the entertainment industry.

Language barriers can be lowered or removed through the use of Deepfakes. Take, for example, David Beckham’s Malaria Must Die campaign. In the video, Beckham appears to speak nine different languages fluently, something he cannot actually do; the effect was achieved with emerging AI video-synthesis technology (Whittaker et al.). This eases the language barrier and the distribution of video content that might otherwise be held back by the need for subtitles. The effect of removing language barriers can also be seen in politics. Manoj Tiwari, an Indian politician, used Deepfakes to address his target Hindi-speaking audience and criticize his political opponent more effectively. Tiwari’s original recording, in English, was translated into the Hindi dialect Haryanvi. The Deepfake video spread widely and effectively, reaching “approximately 15 million people in 5800 WhatsApp groups” (Whittaker et al.).

Deepfakes can also be beneficial in the medical field and for people with impairments, improving accessibility. The same technical principles that go into making Deepfakes can be applied to create synthetic voices, letting people who have lost the ability to speak due to disease communicate in everyday life with a personalized synthetic voice. Synthetic voices can also turn text into speech for people with speech and hearing difficulties, increasing accessibility, independence, and communication for those with impairments. Synthetic media has also been used by prestigious clinics to create fake brain MRI scans for training tumor-spotting algorithms, which can then be applied to real medical images and scenarios, improving the accuracy of spotting tumors on actual scans.

Deepfakes are also effective in the entertainment industry, where they can cut down on CGI and editing costs. For example, the Netflix film The Irishman spent a large sum on CGI, which drove the budget up to 175 million dollars. Yet an individual managed to replicate the same de-aging scenes using free Deepfake software such as DeepFaceLab, achieving what the CGI was intended to do. This was done within a span of seven days and at no cost, showing that Deepfakes can be a cost-effective way to edit films and save money in the film industry.

Comparison of the CGI and Deepfake effects on The Irishman

The visual effects provided by Deepfakes have also been used for educational and interactive purposes. The Dalí Museum used Deepfakes to bring Salvador Dalí back, letting him interact with visitors throughout the museum and showing what the technology is capable of: it can establish deeper connections to the past and allow people to engage with history and historical figures on a deeper level.

The Future

Deepfakes may have negative implications, but that is not the whole story. The distrust, corruption, and fraud that Deepfakes bring need to be addressed with stronger domestic and international laws and more in-depth regulations on the creation, sampling, and distribution of Deepfakes. Algorithms have been created to detect Deepfakes and stop their spread, but even those algorithms may introduce legal complications because of lax regulations on media distribution and content ownership (Pantserev). The general public must also be taught how to read the signs of a Deepfake: unnatural lighting, mouth movements, body gestures, and so on. If people are taught about the media, about Deepfakes, and about what such content aims to do, they will be better able to detect false media, lowering the overall level of distrust in the media environment.
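As a rough illustration of what those detection algorithms often boil down to, the sketch below frames the problem as a binary real-vs-fake classifier over face crops. It assumes PyTorch and torchvision, uses an untrained ResNet-18 purely as a stand-in, and is not any published detector; a real system would be trained on large labeled datasets of genuine and manipulated faces.

```python
# Minimal sketch of an automated Deepfake detector as a binary classifier
# over face crops (assumes PyTorch/torchvision; untrained, illustrative only).
import torch
import torch.nn as nn
from torchvision import models

# A standard image backbone with its classification head replaced by a single
# "fake" logit. In practice this would start from pretrained weights and be
# fine-tuned on labeled real and manipulated faces; here it stays untrained.
detector = models.resnet18(weights=None)
detector.fc = nn.Linear(detector.fc.in_features, 1)
detector.eval()


def fake_probability(face_crop: torch.Tensor) -> float:
    """face_crop: a preprocessed 1x3x224x224 tensor of a detected face."""
    with torch.no_grad():
        logit = detector(face_crop)
    return torch.sigmoid(logit).item()


# Example call with a random tensor standing in for a real video frame's face.
score = fake_probability(torch.rand(1, 3, 224, 224))
print(f"Estimated probability the face is synthetic: {score:.2f}")
```

Real detectors typically also look at temporal cues across many frames, which is where the unnatural mouth movements and gestures mentioned above become measurable signals rather than single-image artifacts.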

Technology continues to evolve every day, and so does synthetic media. Just because Deepfakes present problems does not mean the public should fear the concept and reject it. With more attention and emphasis from companies and lawmakers, Deepfakes could be regulated and adapted for public use. This is still a relatively new technology with a vast horizon and a plethora of opportunities that could change the world, and humanity, for the better.
