The UK financial regulator, the Financial Conduct Authority (FCA), has accused US tech giant Google of not doing enough to stop fraudsters from using its search pages to target victims with scam financial advertisements. Currently, fraudsters and promoters of high-risk schemes are widely able to place fake ads aimed at UK private investors on Google search pages, and the FCA has only the power to ask Google to take them down once they have been spotted. The FCA shares lists of ads and websites it believes to be scams with Google, but scammers easily circumvent this with increasingly professional-looking promotions. At the same time, there is growing concern among banks and fintech groups that “deepfakes” – doctored or realistically edited videos, images and audio – will exacerbate online fraud by impersonating clients and enabling identity-based cyber-attacks. Financial firms including HSBC, Chase and Mastercard are now setting up partnerships with tech companies to tackle deepfake content through biometric identification systems.
Coverage of deepfakes has so far largely concentrated on the potential for manipulated videos of politicians to cause large-scale deception across society and pose a threat to democracy. However, the FCA’s complaints to Google about fraudulent ads, and the move by some banks to invest in anti-deepfake technology, highlight a much more everyday concern: fabricated content targeting individuals. The chances of such fraud succeeding are arguably higher than those of any dystopian society-wide deception. Many would be quick to analyse and call out a fabricated video of a president, but a fake call from a colleague or relative asking for a bank transfer could be far harder for an individual to dismiss. With daily conversations remaining predominantly online during Covid, this vulnerability is greater still. The existing challenges of preventing scams before they appear, as highlighted by the FCA, are a strong signal that industries are hugely underprepared for the onset of deepfake fraud, which will present ever more realistic and believable scams to individuals. The status quo of removing fraudulent content after it appears will not be enough for content that is close to life and personalised. Responsible businesses will act now to get ahead in the fight against deepfakes.
Richard Phillips, Consultant, EMEA