Artificial intelligence and the worrying use of the deepfake

As is the case with many technological developments, ‘deepfakes’ (videos in which someone who did not originally appear in a clip is rendered into it using artificial intelligence, or AI) largely started in the world of pornography. Viewers, should they so desire, can now watch videos of their favourite musicians and film stars “in action”, even though those celebrities never appeared in the original footage.

In these cases, increasingly sophisticated tools are used to put the musicians’ and film stars’ faces onto pre-existing pornographic videos. There can obviously be a sinister, non-celebrity side to this too. The recent Sam Bourne novel, To Kill The Truth, features a protagonist, Maggie Costello, who appears in such a video as part of a cruel plot to undermine her.

In a report published in September this year, the cyber security firm Deeptrace, which specialises in this issue, found that 96 per cent of deepfakes were pornographic. The researchers also found that 100 per cent of the deepfake pornography videos they analysed had women as their subjects. By comparison, 61 per cent of the non-pornographic deepfake videos the company analysed featured male subjects. The misogyny at the heart of this issue must not be overlooked.

Perhaps unsurprisingly, deepfakes have started to move from porn to politics. As the general election campaign hots up, and with a US presidential election on the horizon, this is something we are all going to have to be more aware of. We all know about fake news: false content knowingly published for profit and/or political benefit.

But deepfakes are more sinister and harder to spot. For one thing, they look startlingly real. Fake news is one thing: we can at least take it upon ourselves to be more media literate, to research an issue more deeply. Not being able to fully trust what you are seeing with your own eyes is quite another.

As such videos become more widespread, it is crucial that we differentiate actual deepfakes from other types of falsified video. We cannot allow the waters to become muddied in the way they have with fake news. As I outlined in my book on the subject, people now incorrectly dismiss any story containing an explainable error, or even something they disagree with, as fake news.

Similarly, it would be wrong to describe as deepfakes videos that have simply been edited to misrepresent what someone has said or done.

We’ve seen such videos, sometimes referred to as ‘shallowfakes’, start to infiltrate politics already. A famous example was a video of Nancy Pelosi that circulated online. Pelosi’s opponents did not use particularly clever AI technology. Instead, the video was edited fairly simply, slowed down to make it look as though Pelosi, the Democratic Speaker of the House, was unwell.

Another video, of the CNN correspondent Jim Acosta, appeared to show him aggressively handling a White House staffer who was trying to move a microphone away from him as he attempted to question President Trump. Again, the video had been edited, not manipulated with AI. It still had consequences: the Trump White House used the fake video as an excuse to revoke Acosta’s press credentials.

One might reasonably put into the same category the rather crassly edited video released by the Conservatives, which appeared to show Keir Starmer unable to answer a question about Labour’s Brexit policy on television. In fact, he had answered the question perfectly capably.

The misleading elements in these clips were all fairly easy to spot, especially when set next to the originals. That is obviously not the case with deepfakes. In their report, Deeptrace highlighted incidents of political unrest involving deepfakes in Gabon and Malaysia. As people try to use ever more sophisticated methods to influence our political discourse, this is going to be an ever-increasing problem, so watch out.
