Can Artificial Intelligence Detect Deepfakes?

Deepfake is a technology for producing fake videos with computer programs powered by machine learning. The technique combines a number of photos and videos of a person to generate a new video which, at first glance, may look real but is in fact fabricated.


Deepfake technology: problems and solutions

Video has become a key feature of social networking sites and an important tool for attracting their users, who readily watch and share clips among themselves. This digital environment, so conducive to rapid spread, together with advances in technology, has given rise to another phenomenon: videos fabricated with deepfake technology.
 

The motives for using this technology range from entertainment and mockery of public figures to defamation and revenge porn. The technology company Sensity detected an increase in the number of fabricated videos on the internet over the first six months of this year, with the count doubling to 49,081 videos.

The technical capabilities behind deepfake tools

Deepfakes first emerged in 2017, when a user of the social networking site Reddit posted manipulated clips of a number of celebrities; those clips relied on an artificial intelligence algorithm to swap faces. The so-called generative adversarial network (GAN) is the most widespread approach to producing deepfakes.
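As a rough illustration of the GAN idea mentioned above, the sketch below pairs a generator with a discriminator trained against each other. It is a minimal, hypothetical example, not any real deepfake tool: the network sizes, the toy 64x64 image resolution and all hyperparameters are assumptions.

```python
# Minimal GAN sketch (PyTorch): a generator learns to produce fake images while a
# discriminator learns to tell real from fake. All sizes here are assumptions.
import torch
import torch.nn as nn

LATENT = 100          # size of the random noise vector fed to the generator
IMG = 64 * 64 * 3     # flattened 64x64 RGB image (assumed toy resolution)

generator = nn.Sequential(
    nn.Linear(LATENT, 512), nn.ReLU(),
    nn.Linear(512, IMG), nn.Tanh(),          # outputs pixels in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),         # probability the image is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real images from generated ones.
    fakes = generator(torch.randn(batch, LATENT)).detach()
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fakes), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator into answering "real".
    g_loss = bce(discriminator(generator(torch.randn(batch, LATENT))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Usage with random stand-in data (a real system would feed face crops):
train_step(torch.randn(16, IMG))
```

The adversarial loop is the whole trick: as the discriminator gets better at spotting fakes, the generator is pushed to produce images that look increasingly real.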

According to the British newspaper The Guardian, anyone can prepare deepfake videos, but quality requires serious technical resources: most deepfakes are created on high-end desktop computers with powerful graphics cards, or with computing power rented in the cloud, which cuts the preparation time from days or weeks down to only hours.

Many tools are now available to help people create deepfake videos. There are mobile applications that let users swap their faces onto characters in videos the application has been trained on; one example is the Chinese application Zao.

Research projects have focused on using image analysis to detect deepfakes. In June 2018, for example, American researchers published a study describing how analyzing eye blinking can help detect deepfake videos.

The study points out that training photos rarely capture people mid-blink; most show them with their eyes open, so the algorithms fail to learn natural blinking. As a result, people in deepfake videos blinked implausibly rarely.
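A hedged sketch of how such an analysis might look follows. The eye aspect ratio (EAR) drops sharply when an eye closes, so an unusually low blink rate across a clip can be flagged. The landmark coordinates are assumed to come from some external face-landmark detector, and the 0.2 threshold, frame rate and "suspicious" blink rate are illustrative assumptions rather than values from the study.

```python
# Sketch of blink-rate analysis for deepfake screening (illustrative only).
# Assumes six (x, y) landmarks per eye from an external face-landmark detector.
import math
from typing import List, Tuple

Point = Tuple[float, float]

def eye_aspect_ratio(eye: List[Point]) -> float:
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); small when the eye is closed."""
    def dist(a: Point, b: Point) -> float:
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

def count_blinks(ear_per_frame: List[float], threshold: float = 0.2) -> int:
    """Count transitions from open (EAR above threshold) to closed (below it)."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= threshold:
            closed = False
    return blinks

def looks_suspicious(ear_per_frame: List[float], fps: float = 30.0) -> bool:
    """People blink roughly 15-20 times a minute; far fewer is a red flag."""
    minutes = len(ear_per_frame) / fps / 60.0
    return minutes > 0 and count_blinks(ear_per_frame) / minutes < 5
```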

But no sooner was that study published than deepfakes with natural blinking appeared. Perhaps that is the nature of this technology: as soon as a weakness is detected, it is fixed.

The spread of deepfakes through social networking sites

The danger of deepfake technology is highlighted by how widely social media users share deepfake videos, exposing themselves to deception. A survey conducted by Nanyang Technological University in Singapore, involving 1,231 people, found that one in three participants had shared content on social media that they later learned was fake.

One in five of those in the sample who were familiar with deepfakes also reported encountering deepfake content online regularly.

The study, published in the journal Telematics and Informatics last October, compared Singaporeans' understanding of deepfakes with that of a similar demographic group in the United States, and concluded that respondents in the United States are more aware of deepfakes (61% versus 54% in Singapore).

Dr. Saifuddin Ahmed, an assistant professor at Nanyang Technological University and the study's supervisor, notes that deepfakes are a new, more deceptive form of fake news. He points out that in some countries deepfakes have already been used to create pornography, incite fear and violence, and sow distrust in society, and that as artificial intelligence develops it will become very difficult to distinguish truth from fiction.

Ahmed explained that while technology companies such as Facebook, Twitter and Google have begun labeling manipulated content they identify online as deepfakes, more effort will be needed to educate citizens so that such content is effectively rejected.

The legal and ethical dimensions of this controversial technology

Many legal and ethical questions surround deepfake technology, including the extent to which using it may expose people to legal penalties.

It can be argued that deepfake technology is not illegal per se, but producers and distributors of deepfake videos can easily cross into illegality depending on the content, which may violate copyright, breach data protection law, or be defamatory if it exposes the victim to ridicule or abuse.

There is also the possibility of the criminal offense of sharing private intimate images without the consent of those depicted, so-called revenge porn, a crime that carries a two-year prison sentence under British law.

Another question arises: are deepfakes always harmful?

Not necessarily. Many of these clips are entertaining, and some are useful: voice synthesis, for example, can restore people's voices when illness has taken them away.

The technology can also be used in the entertainment industry to improve the dubbing of foreign films and, most controversially, to bring deceased actors back to the screen; the late actor James Dean, for example, is scheduled to star in Finding Jack, a Vietnam War film.

Artificial intelligence as a solution to the deepfake problem

Paradoxically, artificial intelligence may itself be the solution to deepfakes, helping to detect fake videos. But many current detection systems have a serious weakness: they work best with celebrities, because they can be trained on hours of freely available footage.

To overcome this dilemma, technology companies are developing detection systems that aim to flag fake videos whenever they appear by focusing on the provenance of the media. While digital watermarks alone do not prove a clip's authenticity, online databases can keep a tamper-proof record of videos, photos and audio, so that their origin, and any manipulation they undergo, can always be verified.
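The sketch below illustrates the provenance idea in the simplest possible terms, assuming nothing about any particular company's system: each media file is fingerprinted with a cryptographic hash and appended to a hash-chained log, so a later check can show whether a file still matches a registered original. The class and function names are hypothetical.

```python
# Minimal sketch of a tamper-evident media provenance log (illustrative only;
# not any vendor's actual system). Each file's SHA-256 fingerprint is appended
# to a hash chain, so both the file and the log itself can be checked later.
import hashlib
import json
from dataclasses import dataclass
from typing import List

def fingerprint(path: str) -> str:
    """Hash the raw bytes of a media file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

@dataclass
class Entry:
    media_hash: str   # fingerprint of the registered clip
    prev_hash: str    # hash of the previous entry, chaining the log together
    entry_hash: str   # hash of this entry's contents

class ProvenanceLog:
    def __init__(self) -> None:
        self.entries: List[Entry] = []

    def register(self, path: str) -> Entry:
        media_hash = fingerprint(path)
        prev_hash = self.entries[-1].entry_hash if self.entries else "0" * 64
        entry_hash = hashlib.sha256(
            json.dumps([media_hash, prev_hash]).encode()
        ).hexdigest()
        entry = Entry(media_hash, prev_hash, entry_hash)
        self.entries.append(entry)
        return entry

    def verify(self, path: str) -> bool:
        """True if the file still matches some registered, unaltered entry."""
        return any(e.media_hash == fingerprint(path) for e in self.entries)
```

Chaining each entry to the previous one means that altering or removing an old record changes every later entry hash, which is what makes the log tamper-evident rather than merely a list of fingerprints.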

In light of all this, deepfake technology, together with fake news and its spread on social media platforms, threatens to create a society in which distrust prevails, where people cannot distinguish truth from falsehood, or perhaps no longer try to. Once trust is eroded, it becomes very easy to cast doubt on everything.

Deepfake technology also poses a threat to personal security, since it can mimic biometric data and deceive systems that rely on facial or voice recognition.
