Biometrics: the key to tackling fraud and deepfakes?

The number of fraud cases has skyrocketed during the coronavirus pandemic, with more than 2,500 scams reported to UK investigators by the start of April alone. From social media scams to fake government texts and phishing links in emails, the range of tactics fraudsters use to scam the public continues to grow.

Our dependence on the internet has done nothing to slow this surge. Whether we’re buying groceries online, working in the safety of our homes or booking virtual workout classes, we often overlook the trail of personal information we leave behind. The details we openly disclose are exactly what fraudsters look for, and they make it easier than ever for criminals to take over legitimate accounts.

Fortunately, there are ways to stop fraudsters in their tracks. Using unique identifiers and usage patterns, it is possible to verify a user’s digital identity – making sure they are who they say they are in any online or digital interaction. To protect ourselves from identity theft, we must first look at what makes up a digital identity.

What is digital identity?

A digital identity can be defined as “a body of information about an individual or organisation that exists online.” But the reality is that consumers remain confused about what actually constitutes a digital identity – and they cannot fully protect what they don’t understand. Is it our social media profile? Our credit score or history? Is it contained within a biometric passport?

This confusion means many are also concerned about the level of access a digital identity exposes to potential fraudsters. Once a hacker has our personal details, how much of ‘us’ can they really access? In the US, we found that 76 percent of consumers are extremely or very concerned about having their personal information stolen online when using digital identities, yet 60 percent feel powerless to protect their identity in the digital world.

This is mainly because many still trust old methods and devices for security control – passwords, security questions and digital signatures. But as fraud techniques evolve, these methods can no longer protect us on their own.

More advanced and secure methods of identity verification mirror modern social media habits. Most of us are familiar with taking selfies; now, technology can match that selfie to an ID document such as a driving licence, turning a social behaviour into a verifiable form of digital identification. This simple, secure process gives people access to a variety of e-commerce and digital banking services without a long, friction-filled ‘in-person’ process.
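To make that process concrete, here is a minimal sketch of how a selfie-to-document face match can work in principle, using the open-source face_recognition library. The file names and the 0.6 distance threshold are illustrative assumptions, not a description of any vendor’s product; real verification systems add document authenticity checks and liveness detection on top.

```python
# A minimal sketch of selfie-to-ID face matching (illustrative only).
import face_recognition

# Load the selfie and the photo page of the ID document (file names assumed).
selfie = face_recognition.load_image_file("selfie.jpg")
id_doc = face_recognition.load_image_file("driving_licence.jpg")

# Compute 128-dimensional face embeddings for each image.
selfie_faces = face_recognition.face_encodings(selfie)
id_faces = face_recognition.face_encodings(id_doc)

if not selfie_faces or not id_faces:
    raise ValueError("No face detected in one of the images")

# Smaller Euclidean distance between embeddings means more similar faces.
distance = face_recognition.face_distance([id_faces[0]], selfie_faces[0])[0]

# 0.6 is the library's conventional match threshold; a production system
# would tune it and add liveness checks to defeat photo or replay attacks.
print("match" if distance < 0.6 else "no match", f"(distance={distance:.2f})")
```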

But as technology becomes more sophisticated, so does fraud. One worrying problem is the rise of deepfake technology, with bad actors taking advantage of AI and big data to defraud the public.

The threat of deepfakes

Deepfake technology has been used to create videos impersonating politicians, business magnates and A-listers. During the 2019 UK general election campaign, the BBC reported on a hyper-realistic deepfake of Boris Johnson and Jeremy Corbyn appearing to endorse each other – a clear example of deepfake technology’s capacity to distort and manipulate reality.

As deepfake technology matures, a far more concerning use is emerging: fraudsters are using it to produce nearly flawless falsified digital identities and ID documents, driven by the growing range and quality of the tools available to them. Technology once considered the preserve of a few industry mavericks – using it for the right reasons – has now gone ‘mainstream’.

The good news is that the ability to identify deepfakes will only improve with time. Researchers are experimenting with AI trained to spot even the deepest of fakes, using facial recognition and behavioural biometrics. By recording how a person types and talks, the websites they visit, and even how they hold their phone, researchers can create a unique digital ‘fingerprint’ that verifies a user’s identity and prevents unauthorised access to devices or documents. Using this technique, they aim to crack even the most flawless deepfakes by pitting one AI engine against another, in the hope that the cost of ‘superior AI’ becomes prohibitive for cybercriminals.
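As a toy illustration of that idea – not any vendor’s actual model – the sketch below reduces a session’s behavioural signals to a numeric feature vector and compares it against a user’s enrolled profile with cosine similarity. Every feature name, value and threshold here is invented for illustration; real systems normalise their features, draw on many more signals, and use far more sophisticated models.

```python
# A toy behavioural-biometrics check (all features and values invented).
import numpy as np

def feature_vector(session: dict) -> np.ndarray:
    """Turn raw behavioural signals from one session into a fixed-order vector."""
    return np.array([
        session["mean_keystroke_interval_ms"],  # typing rhythm
        session["mean_touch_pressure"],         # how the phone is held
        session["swipe_speed_px_per_ms"],       # gesture dynamics
        session["typo_correction_rate"],        # editing habits
    ])

def matches_profile(session: dict, profile: np.ndarray,
                    threshold: float = 0.95) -> bool:
    """Compare this session to the enrolled profile via cosine similarity."""
    v = feature_vector(session)
    similarity = (v @ profile) / (np.linalg.norm(v) * np.linalg.norm(profile))
    return similarity >= threshold

# Enrolled profile averaged over a user's past sessions (values invented).
profile = np.array([185.0, 0.42, 1.30, 0.05])

session = {
    "mean_keystroke_interval_ms": 190.0,
    "mean_touch_pressure": 0.40,
    "swipe_speed_px_per_ms": 1.25,
    "typo_correction_rate": 0.06,
}

print(matches_profile(session, profile))  # True -> behaviour looks familiar
```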

To combat the threat of fraud and deepfakes, biometrics will need to keep pace with cutting-edge innovation. Biometric technologies will always be a preventative measure – the only true indicator of who you really are. Overall, adopting biometrics for digital identity verification, or for account and device sign-in, is a reliable security measure.

The pandemic has certainly accelerated the adoption of biometrics, and it won’t be long before we see these technologies keeping our identities more secure, making our lives easier, and curbing the threat of fraudsters and deepfakes.

Joe Bloemendaal is Head of Strategy at Mitek

Joe leads Mitek’s Strategy team in EMEA, helping businesses achieve their digital transformation and customer identity verification goals. He has over 15 years of experience at technology companies specialising in identity verification, such as IDchecker and DataChecker, and has helped businesses and financial firms improve performance, compliance and customer satisfaction.