The IRS/ID.me debacle: A teaching moment for tech

Last year, when the Internal Revenue Service (IRS) signed an $86 million agreement with identity verification provider ID.me to provide biometric identification services, it was a big vote of confidence for the technology. Under the deal, taxpayers could verify their identities online using facial biometrics, better protecting how their federal tax matters are handled.

However, following strong opposition from privacy groups and from bipartisan legislators with privacy concerns, the IRS abandoned its plan in February. Critics also objected to taxpayers submitting their biometrics, in the form of selfies, as part of the new identity verification program. Since then, the IRS and ID.me have added options that give taxpayers the choice to use the ID.me service or to verify their identity through a live, virtual video interview with an agent. While the move may appease those who raised concerns – including Sen. Jeff Merkley (D-OR), who introduced the No Facial Recognition at the IRS Act (S. 3668) at the height of the debate – the very public misunderstanding of the IRS’s deal with ID.me has hurt public opinion of biometrics and raises big questions for authentication technology and the broader cybersecurity industry.

Although the IRS has since agreed to continue offering ID.me’s facial-matching biometric technology as an identity verification option for taxpayers, with the ability to opt out, confusion persists. High-profile complaints about the IRS deal have, at least for now, needlessly undermined public confidence in biometric authentication technology and given fraudsters a welcome reprieve. Still, as the ID.me debacle fades in the rearview mirror, it offers lessons worth considering for both government agencies and technology providers.

Do not underestimate the political value of controversy

This recent controversy highlights the need for better education and understanding of the nuances of biometric technology: the difference between facial matching and facial recognition, the use cases for each, the potential privacy issues these techniques raise, and the regulations needed to better protect consumer rights and interests.

For example, biometrics can be used for a single, one-time purpose – with the consent of a clearly informed user and to that user’s benefit, as in identity verification and authentication that protect the user from fraud – as opposed to scraping biometric data wholesale and using it without consent for purposes such as surveillance or marketing. Most consumers do not realize that images of their faces posted on social media or other internet sites can be scraped into a biometric database without their explicit consent. Where platforms like Facebook or Instagram do communicate such activity explicitly, the disclosure is buried in a privacy policy written in terms incomprehensible to the average user. Companies implementing this kind of “scraping” technology should be required to educate users and obtain explicit, informed consent for the use case they are enabling.

In other cases, different biometric technologies that seem to do the same thing are not created equal. Benchmarks such as the NIST Face Recognition Vendor Test (FRVT) provide a rigorous evaluation of biometric matching technologies and a standardized way to compare their performance, including their ability to avoid problematic demographic bias across attributes such as skin tone, age or gender. Biometric technology companies should be held accountable not only for the ethical use of biometrics, but for offering equitable biometrics that serve the entire population well.
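To make that concrete, here is a minimal sketch of how a demographic performance comparison might be computed. It is illustrative only: the trial data, score scale, threshold and group labels are hypothetical stand-ins, not the actual FRVT protocol.

```python
from collections import defaultdict

def false_non_match_rate(trials, threshold):
    """Fraction of genuine (same-person) comparisons scored below the threshold."""
    genuine = [t for t in trials if t["same_person"]]
    misses = [t for t in genuine if t["score"] < threshold]
    return len(misses) / len(genuine) if genuine else 0.0

def fnmr_by_group(trials, threshold):
    """Compute the false non-match rate separately for each demographic group."""
    groups = defaultdict(list)
    for t in trials:
        groups[t["group"]].append(t)
    return {g: false_non_match_rate(ts, threshold) for g, ts in groups.items()}

# Hypothetical comparison trials: a match score plus a demographic label.
trials = [
    {"score": 0.91, "same_person": True, "group": "group_a"},
    {"score": 0.52, "same_person": True, "group": "group_b"},
    {"score": 0.88, "same_person": True, "group": "group_b"},
]

# A large gap between groups at the same threshold signals demographic bias.
print(fnmr_by_group(trials, threshold=0.60))  # {'group_a': 0.0, 'group_b': 0.5}
```

An equitable system keeps these per-group error rates close to one another; benchmarks like FRVT report exactly this kind of differential at scale.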

Politicians and privacy activists hold biometric technology providers to a high standard. And they should – the stakes are high, and privacy matters. As such, these companies need to be transparent, clear and – most importantly – proactive in communicating the nuances of their technology to their audiences. A misinformed, fiery speech by a politician trying to win hearts on the campaign trail can undo otherwise relevant and focused consumer education efforts. Sen. Ron Wyden (D-OR), a member of the Senate Finance Committee, declared that “no one should be forced to submit to facial recognition to access critical government services.” In doing so, he mislabeled facial matching as facial recognition, and the damage was done.

Perhaps Sen. Wyden did not realize that millions of Americans submit to facial recognition every day while using critical services – at airports, at government facilities and in many workplaces. But by failing to correct this misunderstanding, ID.me and the IRS allowed the public to misconstrue the agency’s use of facial matching as something unusual and repugnant.

Honesty is a business requirement

In the face of a flood of third-party misinformation, ID.me’s response was slow and confusing, if not misleading. In January, CEO Blake Hall stated that ID.me does not use 1:many facial recognition – the comparison of one face against others stored in a central repository. Less than a week later, in the latest of a string of inconsistencies, Hall backtracked: ID.me does use 1:many facial recognition during enrollment, but only once. An ID.me engineer referred to that inconsistency in a recent Slack channel post:

“We can disable the 1:many face search, but then lose a valuable anti-fraud tool. Or we can change our public stance on using 1:many face recognition. But it seems we can’t keep doing one thing and saying another, because that will land us in hot water.”
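The distinction the company’s statements kept blurring is mechanical and easy to state in code. Below is a minimal sketch, assuming faces have already been reduced to embedding vectors by some model; the vectors, names and threshold are illustrative assumptions, not ID.me’s implementation.

```python
import numpy as np

def similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_one_to_one(probe, enrolled, threshold=0.8):
    """Facial matching (1:1): compare the probe against the single template
    the user enrolled, answering 'is this the same person?'"""
    return similarity(probe, enrolled) >= threshold

def identify_one_to_many(probe, gallery, threshold=0.8):
    """Facial recognition (1:many): search the probe against an entire
    repository, answering 'who is this person?'"""
    best_id, best_score = None, threshold
    for person_id, template in gallery.items():
        score = similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id  # None if no one in the gallery clears the threshold

# Hypothetical embeddings standing in for a real face encoder's output.
alice = np.array([0.90, 0.10, 0.30])
probe = np.array([0.88, 0.12, 0.31])
print(verify_one_to_one(probe, alice))                # 1:1 verification
print(identify_one_to_many(probe, {"alice": alice}))  # 1:many search
```

The privacy stakes differ because 1:many requires maintaining a searchable repository of faces; that design choice is what critics were reacting to.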

Transparent and consistent communication with the public and key influencers, using print and digital media as well as other creative channels, will help combat misinformation and reinforce the legitimacy of facial biometric technology as one of the more secure options available when used, with explicit informed consent, to protect consumers.

Be prepared for regulation

Massive cybercrime has encouraged more aggressive state and federal legislation, and policymakers now find themselves at the center of a tug-of-war between privacy and security from which they must act. Agency heads may claim that their legislative efforts are fueled by a commitment to the safety, security and privacy of their constituents, but it is Congress and the White House that must decide what comprehensive rules will protect all Americans from the current cyberthreat landscape.

There is no shortage of regulatory examples for reference. The California Consumer Privacy Act (CCPA) and its landmark European cousin, the General Data Protection Regulation (GDPR), model how to ensure users know what kinds of data organizations collect from them, how that data is being used, what steps they can take to monitor and manage it, and how to opt out of data collection. To date, officials in Washington have left the data protection framework to the states. The Biometric Information Privacy Act (BIPA) in Illinois, along with similar laws in Texas and Washington, regulates the collection and use of biometric data. These rules stipulate that organizations must obtain consent before collecting or disclosing a person’s biometric identifiers, and that they must store biometric data securely and destroy it in a timely manner. BIPA imposes fines for violating these rules.
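As an illustration of the record-keeping such rules imply, here is a minimal sketch in Python. The field names, helper functions and the three-year retention window (modeled loosely on BIPA’s outer bound) are assumptions for demonstration, not legal guidance.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class BiometricRecord:
    subject_id: str
    purpose: str         # the disclosed use case the subject agreed to
    consent_given: date  # explicit consent must precede collection
    collected: date

    def destruction_deadline(self) -> date:
        # Illustrative retention window, loosely modeled on BIPA's
        # three-year outer bound; satisfying the stated purpose can
        # trigger destruction even earlier.
        return self.collected + timedelta(days=3 * 365)

def may_collect(consent_given: Optional[date], collection_date: date) -> bool:
    """Collection is permissible only after explicit, informed consent."""
    return consent_given is not None and consent_given <= collection_date

def must_destroy(record: BiometricRecord, today: date) -> bool:
    """Flag records past their retention deadline for secure deletion."""
    return today >= record.destruction_deadline()

# Hypothetical record: consented in 2019, so it is now overdue for deletion.
rec = BiometricRecord("user-123", "identity verification",
                      date(2019, 1, 2), date(2019, 1, 3))
print(may_collect(rec.consent_given, rec.collected))  # True
print(must_destroy(rec, date.today()))                # True (past 3 years)
```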

If legislators draft and pass legislation combining the principles of the CCPA and GDPR with the biometric-specific rules outlined in BIPA, greater trust can be established in the security and convenience of biometric authentication technology.

The future of biometrics

Biometric authentication providers, and the government agencies that adopt their technology, need to be good shepherds of what they offer – especially when it comes to educating the public. Some hide behind the supposed danger of giving cybercriminals too much information about how the technology works. But the fate of these companies rests on the success of their deployments, and wherever communication and transparency are lacking, opportunistic critics will be eager to publicly misrepresent biometric facial matching technology to advance their own agendas.

While multiple legislators have painted facial recognition and biometrics companies as bad actors, they have missed the opportunity to expose the real culprits: cybercriminals and identity fraudsters.

Tom Thimot is the CEO of authID.ai.

