Deepfake technology is gaining popularity at a rapid pace, and it has already featured in a number of notable incidents. In this article, we will take a look at such incidents and analyze the real threat posed by deepfake technology.
One such example is the deepfake video of Her Majesty, the Queen of the United Kingdom. As is the annual tradition in the UK, Christmas saw Queen Elizabeth deliver her 3 PM speech on television to households all around the country. And as has also become a tradition, Channel 4 offered its viewers an alternative speech broadcast at the same time as the Queen’s, usually delivered by another renowned personality; previous notable figures include former President of Iran Mahmoud Ahmadinejad, Edward Snowden, Ali G, and Marge Simpson. On the most recent occasion, however, Channel 4 aired a speech that appeared to be delivered by Her Majesty but that bizarrely showed her performing a dance routine made popular on the social-media platform TikTok.
It soon became apparent that the British public was watching a digitally manipulated version of the Queen, one that used deepfake technology to alter her behavior, with her imitated voice provided by English actress and comedian Debra Stephenson. According to the channel itself, the clip was aired to provide viewers with a stark warning of the potentially dangerous threat posed by fake news, with the director of programs, Ian Katz, describing the video as “a powerful reminder that we can no longer trust our own eyes”.
At its core, deepfake is a form of AI whose name combines the terms deep learning and fake. It typically involves using deep learning, a category of AI concerned with algorithms that can learn and become more capable over time, to falsify videos. Neural networks scan large datasets to learn how to replicate a person’s mannerisms, behavior, voice, and facial expressions. Facial-mapping technology is also used to swap one person’s face onto another’s using deep-learning algorithms. As such, deepfake technology presents a clear danger of producing content that “can be used to make people believe something is real when it’s not”, according to Peter Singer, cybersecurity and defense-focused strategist and senior fellow at the New America think tank.
According to Areeq Chowdhury, who researched deepfake technology as applied to UK Prime Minister Boris Johnson and former Leader of the Opposition Jeremy Corbyn when they were contesting the 2019 election, Channel 4’s decision to spotlight the impact of deepfakes was the right one, but the technology does not currently pose a widespread threat to information sharing. “The risk is that it becomes easier and easier to use deepfakes, and there’s the obvious challenge of getting fake information out there, but also the threat that they undermine genuine video footage, which can be dismissed as a deepfake,” Chowdhury told The Guardian. “My view is that we should generally worry about this tech, but that the main problem with deepfakes today is their use in non-consensual deepfake pornography, rather than disinformation.”
Indeed, the Queen’s alternative speech is far from being the first widespread application of deepfakes. And while early iterations of the technology made it obvious that the target video had been doctored, its evolution in recent years has made it far more difficult to distinguish fake content from the real. “Since the inception of deepfakes in 2017, we’ve witnessed an exponential growth in them similar to that seen in the early days of malware in the 1990s,” noted Estonia-based firm Sentinel, which specializes in helping to keep democracies free from disinformation campaigns. “Since 2019, the number of deepfakes online has grown from 14,678 to 145,227, a staggering growth of ~900 percent YOY.” Forrester Research, meanwhile, estimated in October 2019 that deepfake fraud scams would have cost $250 million by the end of 2020.
Most commonly, deepfake technology has been used in the political sphere to falsify claims made by politicians and mislead the general public. John Villasenor, a senior fellow of governance studies at the Center for Technology Innovation at the Brookings Institution, told CNBC in 2019 that it can be used to undermine a political candidate’s reputation by making the candidate appear to have said or done things that never actually occurred. “They are a powerful new tool for those who might want to (use) misinformation to influence an election,” he said.
Most recently, supporters of former US President Donald Trump mused over whether a speech in which he conceded the 2020 election to incoming President Joe Biden was, in fact, a deepfake. “I am outraged by the violence, lawlessness, and mayhem,” Trump said in the video. “The demonstrators who infiltrated the Capitol have defiled the seat of American democracy. To those who engaged in the acts of violence and destruction: you do not represent our country. To those who broke the law: you will pay.” With such statements standing in stark contrast to sentiments he had expressed previously, supporters were left wondering whether deepfake technology was being employed. “Anyone else notice this eerie deepfake look to Trump, or is he airbrushed?” one supporter tweeted soon after.
“One side effect of the use of deepfakes for disinformation is the diminished trust of citizens in authorities and information media,” according to a recent report from Europol (the European Union Agency for Law Enforcement Cooperation) and the United Nations. Flooded with increasingly AI-generated spam and fake news that build on bigoted texts, fake videos, and a plethora of conspiracy theories, people might feel that a considerable amount of information, including videos, simply cannot be trusted, leading to a phenomenon termed the “information apocalypse” or “reality apathy”. And as Google research engineer Nick Dufour pointed out, deepfakes “have allowed people to claim that video evidence that would otherwise be very compelling is a fake”.
It would seem that preventative action should be taken sooner rather than later, especially given how sophisticated the technology has become. “Wow, this is developing faster than I thought,” acknowledged Hao Li, a deepfake pioneer and a professor at the University of Southern California, in September 2019. “We are working together on an approach that assumes that deepfakes will be perfect…. Our guess is that in two to three years, it’s going to be perfect. There will be no telling if it’s real or not, so we have to take a different approach.”
Hypothetically, then, deepfakes could end up having hugely impactful consequences. Indeed, Brookings researchers Chris Meserole and Alina Polyakova suggest that the US and its allies are currently “ill-prepared” for the wave of deepfakes that Russian disinformation campaigns could inflict upon the world. “To cite just one example, fake Russian accounts on social media claiming to be affiliated with the Black Lives Matter movement shared inflammatory content purposely designed to stoke racial tensions,” Robert Chesney and Danielle Citron wrote last year in Foreign Affairs magazine. “Next time, rather than tweets and Facebook posts, such disinformation could come in the form of a fake video of a white police officer shouting racial slurs or a Black Lives Matter activist calling for violence.”
Responding to such concerns, the United States Senate approved a bill in November 2020 requiring the government to conduct further research into deepfakes. “This bill directs the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST) to support research on generative adversarial networks. A generative adversarial network is software designed to be trained with authentic inputs (e.g., photographs) to generate similar, but artificial, outputs (e.g., deepfakes),” according to a summary of the bill. “Specifically, the NSF must support research on manipulated or synthesized content and information authenticity, and NIST must support research for the development of measurements and standards necessary to accelerate the development of the technological tools to examine the function and outputs of generative adversarial networks or other technologies that synthesize or manipulate content.”
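To make the generative adversarial pairing described in the bill summary a little more concrete, the sketch below puts a tiny generator next to a tiny discriminator in plain NumPy. This is a minimal illustrative sketch, not a real deepfake system: the class names, dimensions, and untrained random weights are all invented for illustration, and real GANs train both networks against each other on large image datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyGenerator:
    """Maps random noise vectors to synthetic 'samples' (here, 8-dim vectors)."""
    def __init__(self, noise_dim=4, out_dim=8):
        self.W = rng.normal(0, 0.1, (noise_dim, out_dim))  # untrained weights
    def __call__(self, z):
        return np.tanh(z @ self.W)  # outputs bounded in (-1, 1), like normalized pixels

class TinyDiscriminator:
    """Scores each sample with the probability that it is authentic."""
    def __init__(self, in_dim=8):
        self.w = rng.normal(0, 0.1, in_dim)
    def __call__(self, x):
        return sigmoid(x @ self.w)  # probability in (0, 1)

gen = TinyGenerator()
disc = TinyDiscriminator()
z = rng.normal(size=(5, 4))   # a batch of 5 random noise vectors
fakes = gen(z)                # 5 synthetic samples
scores = disc(fakes)          # discriminator's "authentic" probabilities
```

In an actual GAN, the two parts are trained adversarially: the discriminator learns to separate authentic inputs from generated ones, while the generator learns to produce outputs the discriminator misclassifies as authentic.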
How to Avoid Deepfakes
According to Accenture, businesses can adopt a three-pillared strategy to protect against deepfakes:
Employee training and awareness to create an additional line of defense. “Training should focus on how the technology is leveraged in malicious attempts and how these can be detected: enabling employees to identify deepfake-based social engineering attempts,” noted Accenture, adding that a methodology similar to the one used to counter the threat of email-based phishing via security-awareness programs can be applied.
A detection model to spot false media as early as possible and thus minimize the impact on the organization. This is particularly relevant when repeated attempts are made to damage the reputation of a person or an organization.
A response strategy to ensure the organization can respond adequately to a deepfake. Under this strategy, everyone in the organization has a role in responding to deepfakes.
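As one hedged illustration of how the detection pillar above might feed the response pillar, per-frame scores from an upstream deepfake classifier (assumed to exist; not implemented here) could be aggregated into a simple triage decision. The function, threshold values, and aggregation rule below are hypothetical, not part of Accenture’s methodology.

```python
from statistics import mean

def triage_clip(frame_fake_probs, threshold=0.6, min_flagged_ratio=0.3):
    """Decide whether a clip should be escalated to human review.

    frame_fake_probs: per-frame 'fake' probabilities in [0, 1], produced by
    some upstream frame-level classifier (a hypothetical component here).
    """
    if not frame_fake_probs:
        return {"flag": False, "reason": "no frames scored"}
    avg = mean(frame_fake_probs)
    # Fraction of individual frames that look suspicious on their own
    flagged_ratio = sum(p > threshold for p in frame_fake_probs) / len(frame_fake_probs)
    flag = avg > threshold or flagged_ratio > min_flagged_ratio
    return {"flag": flag, "avg_score": avg, "flagged_ratio": flagged_ratio}

# A mostly authentic-looking clip with one suspicious frame
print(triage_clip([0.1, 0.2, 0.15, 0.9]))
```

A clip is escalated either when its average score is high or when a sizable fraction of frames looks suspicious, so a single noisy frame does not by itself trigger a false alarm.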
Positive Uses of Deepfakes
On the plus side, at least a few beneficial applications of the technology also exist. The movie industry, for example, can benefit in several ways. “For example, it can help in making digital voices for actors who lost theirs due to disease, or for updating film footage instead of reshooting it,” stated the November 2019 study “The Emergence of Deepfake Technology: A Review” published in the journal Technology Innovation Management Review. “Moviemakers will be able to recreate classic scenes in movies, create new movies starring long-dead actors, make use of computer graphics and advanced face editing in post-production, and improve amateur videos to professional quality.”
Nonetheless, it is clear that in the current era of disinformation in which we all now reside, deepfakes represent a seriously dangerous weapon. And democracies will either have to learn to live with such lies or do their best to act quickly to preserve the truth before it irrevocably fades even further.