Privacy and security issues associated with facial recognition software



The market for facial recognition technology is growing rapidly as organizations use the technology for a variety of reasons, including verifying and/or identifying people to grant them access to online accounts, authorizing payments, tracking and monitoring employee attendance, targeting specific advertisements to shoppers and much more.

In fact, the global facial recognition market size is expected to reach $12.67 billion by 2028, up from $5.01 billion in 2021, according to Insight Partners. This increase is also due to growing demand from governments and law enforcement agencies, which use the technology to aid in criminal investigations, conduct surveillance activities or support other security efforts.

But as with any technology, there are potential downsides to using facial recognition, including privacy and security issues.

Privacy issues related to facial recognition technology

The most significant privacy implication of facial recognition technology is its use to identify individuals without their consent. This includes applications, such as real-time public surveillance or database aggregation, that are not built on a legal basis, said Joey Pritikin, chief product officer at Paravision, a computer vision company specializing in facial recognition technology.

Tracy Hulver, senior director of digital identity products at Aware Inc., agreed that it’s important for organizations to inform users of the biometric data they collect and then obtain their consent.

“You need to make it clear to the user what you’re doing and why you’re doing it,” he said. “And ask them if they consent to it.”

Stephen Ritter, CTO at Mitek Systems Inc., a provider of mobile capture and digital identity verification products, agreed that consumer notification and consent is critically important.

“Whether we are providing an application or user experience directly to a consumer or providing technology to a bank, marketplace or any company providing an application to the end user, we require proper notification, which means that the consumer is very aware of the data we are going to collect and is able to consent to it,” Ritter said.

SEE: Mobile Device Security Policy (TechRepublic Premium)

In surveillance apps, citizens’ top concern is privacy, said Matt Lewis, director of business research at security consultancy NCC Group.

Facial recognition technology in surveillance has improved dramatically in recent years, which means it’s quite easy to track someone as they move around a city, he said. One of the privacy concerns about the power of this technology is who has access to this information and for what purpose.

Ajay Mohan, director of AI and analytics at Capgemini Americas, agreed with this assessment.

“The big problem is that companies already collect a huge amount of personal and financial information about us [for profit-driven applications] that just follows you everywhere, even if you don’t actively approve or allow it,” Mohan said. “I can go from here to the grocery store, and then all of a sudden they have a scan of my face, and they can follow it to see where I’m going.”

Additionally, artificial intelligence (AI) continues to push the performance capabilities of facial recognition systems, while from an attacker’s perspective, emerging research is leveraging AI to create facial “master keys” — that is, AI-generated faces that match many different faces — through the use of what are called generative adversarial network techniques, according to Lewis.

“AI also enables the detection of additional features on faces beyond simple recognition, i.e. being able to determine the mood of a face (happy or sad) and also a good approximation of an individual’s age and gender based solely on their facial imagery,” Lewis said. “These developments certainly exacerbate privacy concerns in this space.”

Overall, facial recognition captures a great deal of information, depending on the amount and sources of data, and that is what organizations will need to worry about going forward, said Doug Barbin, chief executive of Schellman, a global cybersecurity assessment firm.

“If I do a Google image search for myself, does it return images tagged with my name, or are images previously tagged as me recognizable without text or context? This creates privacy issues,” he said. “What about medical records? A huge application of machine learning is being able to identify health issues via scans. But what about the cost of disclosing the health status of an individual?”

Security issues related to facial recognition technology

No biometric, including facial recognition, is private, which also leads to security concerns, Lewis said.

“This is a property rather than a vulnerability, but it basically means biometrics can be copied, and that presents security issues,” he said. “With facial recognition, it may be possible to spoof a system — masquerade as a victim — using images or 3D masks created from images taken of a victim.”

Another property of all biometric data is that the matching process is statistical: a user never presents their face to a camera in exactly the same way, and user characteristics may differ depending on the time of day, use of cosmetics, etc., Lewis said.

Therefore, a facial recognition system must determine the likelihood that a face presented to it is that of an authorized person, he said.

“This means that some people may resemble others enough to be able to authenticate as other people due to similarities in characteristics,” Lewis said. “This is called the false acceptance rate in biometrics.”
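The statistical matching Lewis describes can be sketched in a few lines. Face recognition systems typically compare faces as embedding vectors and accept a match only when a similarity score clears a threshold; the false acceptance rate is the fraction of impostor attempts that clear it anyway. This is a minimal illustration, not any vendor's implementation — the embedding dimension, threshold value and random vectors are all assumptions chosen for the sketch:

```python
import math
import random

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors (higher = more alike)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def matches(probe, enrolled, threshold=0.8):
    """A face 'matches' only if its similarity clears the threshold --
    the decision is statistical, never an exact comparison."""
    return cosine_similarity(probe, enrolled) >= threshold

def random_embedding(dim=128):
    """Stand-in for a real face embedding (illustrative only)."""
    return [random.gauss(0, 1) for _ in range(dim)]

random.seed(42)
enrolled = random_embedding()

# Simulate impostor attempts: each random vector stands in for a stranger's face.
impostor_trials = [random_embedding() for _ in range(10_000)]
false_accepts = sum(matches(p, enrolled) for p in impostor_trials)
far = false_accepts / len(impostor_trials)
print(f"False acceptance rate at threshold 0.8: {far:.4%}")
```

Lowering the threshold makes the system more forgiving of lighting and cosmetics but raises the false acceptance rate; raising it does the opposite. Tuning that trade-off is the core design decision in any biometric matcher.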

Because facial recognition involves the storage of face images or templates (mathematical representations of face images used for matching), its security implications are similar to those of any personally identifiable information: encryption, policies and process safeguards need to be in place, he said.
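As one illustration of the storage safeguards described above, a stored template can at minimum be integrity-protected so that tampering is detectable. This is a hedged sketch using only standard-library primitives, not any vendor's scheme; a real deployment would also encrypt the template with a vetted library (e.g. AES-GCM) and keep the key in a KMS or HSM rather than in application code:

```python
import hashlib
import hmac
import secrets

# Server-side secret; in production this lives in a KMS/HSM, not in source code.
STORAGE_KEY = secrets.token_bytes(32)

def protect_template(template: bytes) -> tuple[bytes, bytes]:
    """Attach an HMAC-SHA256 tag so any modification of the stored
    template is detectable at read time."""
    tag = hmac.new(STORAGE_KEY, template, hashlib.sha256).digest()
    return template, tag

def verify_template(template: bytes, tag: bytes) -> bool:
    """Constant-time check that the template has not been altered."""
    expected = hmac.new(STORAGE_KEY, template, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

stored, tag = protect_template(b"\x01\x02...face-template-bytes...")
print(verify_template(stored, tag))             # untouched template verifies
print(verify_template(stored + b"x", tag))      # tampered template is rejected
```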

SEE: Password Breach: Why Pop Culture and Passwords Don’t Mix (Free PDF) (TechRepublic)

“Furthermore, facial recognition may be subject to what we call ‘presentation attacks,’ or the use of physical or digital impersonations, such as masks or deepfakes, respectively,” Pritikin said. “Appropriate technology to detect these attacks is therefore essential in many use cases.”

People’s faces are key to their identity, said John Ryan, a partner at law firm Hinshaw & Culbertson LLP. People who use facial recognition technology put themselves at risk of identity theft. Unlike a password, people cannot simply change their face. As a result, companies using facial recognition technology are targets for hackers.

As such, companies typically adopt storage and destruction policies to protect this data, Ryan said. Additionally, facial recognition technology typically uses algorithms that cannot be reverse engineered.

“These barriers have been helpful so far,” he said. “However, governments at the state and federal levels are concerned. Some states, like Illinois, have already passed laws to regulate the use of facial recognition technology. There is also pending legislation at the federal level.”

Pritikin said his company uses advanced technologies, such as presentation attack detection, which protects against the use of tampered data.

“We are also currently developing advanced technologies to detect deepfakes and other digital facial manipulation,” he said. “In a world where we rely on faces to confirm identity, whether in person or on a video call, understanding what’s real and what’s fake is a critical aspect of security and privacy, even if facial recognition technology is not used.”
