Friday 21 June 2019 4:50 am

How worried should you be about deepfakes?

Mark Zuckerberg recently became the latest high-profile subject of a “deepfake” hoax.

A manipulated video circulating on Instagram appears to show him boasting about having “billions of people’s stolen data” and thanking a Sheffield art installation for his success.

Squint, and you could mistake the fake video, produced using artificial intelligence software, for the real deal.

Zuckerberg decided not to remove the video from his platform, having recently faced criticism for refusing to take down a manipulated video of the Speaker of the US House of Representatives, Nancy Pelosi.

But while the Zuckerberg deepfake was part of an art installation and not designed to deceive, the same cannot be said of the Pelosi video. In it, her speech was slowed down to sound incoherent and shared online as evidence of a “neurological condition”.

The attack echoed the "brain damage" smears faced by Hillary Clinton, and more broadly the spread of "fake news" during the 2016 US presidential election.

The episode has fuelled fears that deepfakes are providing ever more sophisticated means for nefarious political ends. Barack Obama recently spoke out about his concerns over the rapidly developing technology.

Deepfakes, so called because they rely on deep learning AI, are made with machine learning techniques that combine and superimpose images and video to create a convincing fake. Anyone can create a deepfake for just $10. Already, political leaders, celebrities, and chief executives have fallen victim to deepfake fraud.

Deepfakes first made headlines in 2017. Even in its infancy, the technology caused controversy as internet users realised that it could be abused to create revenge porn, malicious hoaxes, or fake news.

As always with new technologies, there has been confusion over how the law covers this emerging area, and whether an updated legal framework is required to regulate it. But existing laws mean that creating deepfakes perceived to be damaging to a person's or brand's reputation already comes with consequences, while victims potentially have a number of legal options.

First and most simply, in regard to intellectual property rights, there could be a claim if the victim owns (or can acquire) the copyright in the image used.

Equally, if the image was used to sell a product, this could constitute “passing off” (which Rihanna successfully argued when Topshop featured her photo on T-shirts without her permission). If the photos used have been hacked or are intimate, this could constitute theft and misuse of private information.

Another legal option could be defamation, if members of society think less of a person because of the deepfake (for example, if they think that it is real or officially sanctioned).

Harassment and malicious online communication laws may also come into play, and these can lead to criminal prosecution. The impact could be widespread if the deepfake goes viral, and damages for defamation in the US can be substantial – sometimes millions of dollars.

For those who are considering creating a deepfake, especially of a prominent person, note that this comes with high risks, and exercise extreme caution. If you are a victim, you should contact your lawyer as soon as possible in order to stem the damage.

For everyone else, keep an eye on the future. The current batch of deepfake videos may be just an interesting demo of what the technology can do: a mishmash of poor voice-overs on top of grainy, unconvincing video. Few will be fooled into thinking that this is really Zuckerberg waxing lyrical about an art exhibition. However, as deepfakes become more advanced, it will become harder to discern what is real.

The law is notoriously slow at tackling cutting-edge tech, but in this case legislators will need to catch up quickly in order to quell what could become an insidious phenomenon. Current legal remedies, although relevant, may not quite hit the spot.

City A.M.'s opinion pages are a place for thought-provoking views and debate. These views are not necessarily shared by City A.M.
