Face-off: biometric data a growing business risk

As the use of biometrics grows, businesses need to be careful about how they collect, store and use the data

The use of biometric data – such as in fingerprint readers and facial recognition systems – has grown significantly in recent years as organisations look for more secure alternatives to password-based authentication. Whilst biometric technology may represent the future of digital security, organisations must be prepared for the privacy risks associated with the use of this data.

Risks associated with collecting this form of data are nothing new, however – legislation specifically addressing biometric data has been in place since 2008. But the increased uptake of biometric solutions, heightened awareness of privacy exposures, and several high-profile court rulings have brought biometrics into the spotlight in recent months.

Bio hack

In 2019, BioStar 2 – a security platform that stores the fingerprint and facial recognition details used for access control to commercial buildings – was discovered to be publicly accessible, potentially compromising the biometric data of millions of people worldwide working at organisations including banks, governments and even London’s Metropolitan Police.

Cases such as this are a warning to businesses not to rush to make use of biometrics without having rigorous security and governance controls in place. “Biometric data is an emerging exposure for businesses. The technology’s rapid growth means there is more of this sensitive data being collected and the costs associated with any security failure can be significant,” says James Lindsay-Carlin, Cyber Underwriter for Hiscox London Market. 

The human touch

Cyber security firm Norton defines biometrics as “any metrics related to human features” that “can include physiological traits, such as fingerprints and eyes, or behavioural characteristics, such as the unique way you'd complete a security-authentication puzzle.” For access and control purposes, the types of biometrics in use today are wide and varied, ranging from retinal scans to fingerprints, voiceprints, facial geometry and handprints. The arrival of new technology – not least smartphones with built-in fingerprint readers and face scanners – has turbocharged today’s use of biometric data.

In turn, this has created a growing potential corporate liability, particularly when it comes to biometric data regulation. In the US, the first state legislation to focus specifically on biometrics was passed by Illinois in 2008. The Biometric Information Privacy Act (BIPA) was enacted to protect consumers against the unlawful collection and storage of their biometric information, imposing requirements on companies around collection, retention, disclosure, and destruction. Washington and Texas have since followed suit with similar legislation.

A biometric ticket to ride

In 2019, theme park operator Six Flags fell foul of BIPA in the Illinois Supreme Court after collecting park-goers' thumbprints without disclosing its policies or obtaining informed consent. Crucially, the ruling found that plaintiffs need not plead actual harm or injury under BIPA, only that Six Flags violated the statute. “Six Flags was an interesting case because it showed there is no need to prove injury or damage in order to bring a successful legal case against businesses. All a plaintiff needs to do is prove statutory non-compliance with BIPA. The door has been pushed wide open for class actions which have now become commonplace,” says Lindsay-Carlin.

Facebook fail

In January 2020, Facebook agreed to pay $550 million to settle a class action brought by Illinois users who alleged that the social network’s photo-tagging feature used facial recognition without their consent, in violation of BIPA. Following the case, Rebecca Glenberg, Senior Staff Attorney for the ACLU (American Civil Liberties Union) of Illinois, said: “In this case, a federal court of appeals validated BIPA's central insight: When corporations unlawfully collect or store a person's biometric information, it causes real harm, and companies may be held liable for that harm."

It’s not just US legislation that’s baring its teeth. “Under GDPR, biometric data is classed as ‘special category data’, alongside health information, religious beliefs, and sexual orientation. This means data controllers must fulfil several specific requirements including obtaining explicit consent from the data subject and the completion of a data protection impact assessment,” says Lindsay-Carlin. Consequently, the Information Commissioner’s Office (ICO), the UK’s data watchdog, has carried out its first enforcement action involving biometrics since the arrival of GDPR, issuing an enforcement notice against Her Majesty’s Revenue and Customs (HMRC) over a voice authentication service that stored millions of callers’ voice records. The ICO found that HMRC gave “little or no consideration to the data protection principles when rolling out the Voice ID service.”

Financial and reputational challenge

Regulatory damages can be material – BIPA provides statutory damages starting at $1,000 for each negligent violation, rising to $5,000 where a violation is intentional or reckless – but these are only part of the problem. Because damages accrue per violation, a class action covering even a few thousand employees or customers can quickly run into the millions. As with any other data breach, there is the financial cost of managing the breach and the potential reputational fallout that follows. “Businesses should assess what biometric data they hold and make sure it is subject to the same controls as any other sensitive data they are holding. This doesn’t simply stop with technical security controls – as we’ve seen in recent settlements, particular care must be given to the collection of informed consent, communication of a comprehensive and clear written policy, and overall governance procedures,” says Lindsay-Carlin.

A good cyber insurance policy has a key role to play. Where insurable by law, fines and penalties would most likely be covered by a typical cyber policy, as would settlements following a class action, says Lindsay-Carlin. He adds that cyber policies don’t necessarily refer to biometric data in particular; it usually falls under the broader definition of data or personal data. “If the recent settlements and rulings in the US initiate a slew of class actions, insurers might need to consider pricing adequacy and the broad terms currently available in respect to legislative exposures such as these.”

*********************************************************

Facial recognition coming to a street near you

London’s Metropolitan Police has announced that it is to start using live facial recognition (LFR) cameras on the city’s streets, claiming the system is 70% effective at picking out suspects. The Met has promised that images of innocent people captured on camera will be deleted immediately. The move has sparked outrage from civil rights groups including Big Brother Watch, which said: “This decision represents an enormous expansion of the surveillance state and a serious threat to civil liberties in the UK. It flies in the face of the independent review showing the Met’s use of facial recognition was likely unlawful, risked harming public rights and was 81% inaccurate.”

In the US, facial recognition technology is now being used for the first time by Lockport City School District in New York, following a decision to deploy it to monitor property and detect threats to students at its eight schools. Voicing concerns, the New York Civil Liberties Union said: “Now children as young as 5-years-old will have their faces scanned wherever they go. Their images will be captured by a system that is error-prone, discriminatory, and puts students’ safety at risk.”
