Federal agencies are rapidly increasing their use of facial recognition

Despite growing opposition, the U.S. government is on track to increase its use of controversial facial recognition technology

Why computing experts say no

The Association for Computing Machinery’s U.S. Technology Policy Committee, which issued the call for a moratorium, includes computing professionals from academia, industry and government, a number of whom were actively involved in the development or analysis of the technology. As chair of the committee at the time the statement was issued and as a computer science researcher, I can explain what prompted our committee to recommend this ban and, perhaps more significantly, what it would take for the committee to rescind its call.

If your cellphone doesn’t recognize your face and makes you type in your passcode, or if the photo-sorting software you’re using misidentifies a family member, no real harm is done. On the other hand, if you are subject to arrest or denied entry to a facility because the recognition algorithms are imperfect, the impact can be drastic.

The statement we wrote outlines principles for the use of facial recognition technologies in these consequential applications. The first and most critical of these is the need to understand the accuracy of these systems. One of the key problems with these algorithms is that they perform differently for different ethnic groups.

An evaluation of facial recognition vendors by the U.S. National Institute of Standards and Technology found that the majority of the systems tested had clear differences in their ability to match two images of the same person when one ethnic group was compared with another. Another study found the algorithms are more accurate for lighter-skinned males than for darker-skinned females. Researchers are also exploring how other features, such as age, disease and disability status, affect these systems. These studies are also turning up disparities.
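To make this kind of disparity concrete, here is a minimal sketch of how an audit might compare a system's match accuracy across demographic groups. The data records and group labels below are entirely hypothetical; real evaluations such as NIST's vendor tests work from large labeled sets of image pairs per group.

```python
# Illustrative sketch: per-group accuracy of a face-matching system.
# Each record is (group, algorithm_said_match, images_were_same_person).
from collections import defaultdict

def per_group_accuracy(results):
    """Return {group: fraction of correct match decisions}."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in results:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical audit records for two groups (not real data).
records = [
    ("group_a", True, True), ("group_a", True, True), ("group_a", False, True),
    ("group_b", True, True), ("group_b", False, True), ("group_b", False, True),
]
print(per_group_accuracy(records))
```

A large gap between the per-group numbers this produces is exactly the kind of disparity the NIST evaluation measured; auditors typically break the errors down further into false matches and false non-matches, since the two have very different consequences.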

A number of other features affect the performance of these algorithms. Consider the difference between how you might look in a nice family photo you have shared on social media versus a picture of you taken by a grainy security camera, or a moving police car, late on a misty night. Would a system trained on the former perform well in the latter context? How lighting, weather, camera angle and other factors affect these algorithms is still an open question.

In the past, systems that matched fingerprints or DNA traces had to be formally evaluated, and standards set, before they were trusted for use by the police and others. Until facial recognition algorithms can meet similar standards – and researchers and regulators truly understand how the context in which the technology is used affects its accuracy – the systems shouldn’t be used in applications that can have serious consequences for people’s lives.

Transparency and accountability

It’s also important that organizations using facial recognition provide some form of meaningful advance and ongoing public notice. If a system can result in your losing your liberty or your life, you should know it is being used. In the U.S., this has been a principle for the use of many potentially harmful technologies, from speed cameras to video surveillance, and the USTPC’s position is that facial recognition systems should be held to the same standard.

To get transparency, there also must be rules that govern the collection and use of the personal information that underlies the training of facial recognition systems. The company Clearview AI, which now has software in use by police agencies around the world, is a case in point. The company collected its data – photos of individuals’ faces – with no notification.

Clearview AI collected data from many different applications, vendors and systems, taking advantage of the lax laws controlling such collection. Kids who post videos of themselves on TikTok, users who tag friends in photos on Facebook, consumers who make purchases with Venmo, people who upload videos to YouTube and many others all create images that can be linked to their names and scraped from these applications by companies like Clearview AI.

Are you in the dataset Clearview uses? You have no way to know. The ACM’s position is that you should have a right to know, and that governments should put limits on how this data is collected, stored and used.

In 2017, the Association for Computing Machinery U.S. Technology Policy Committee and its European counterpart released a joint statement on algorithms for automated decision-making about individuals that can result in harmful discrimination. In short, we called for policymakers to hold institutions using analytics to the same standards as for institutions where humans have traditionally made decisions, whether it be traffic enforcement or criminal prosecution.

This includes understanding the trade-offs between the risks and benefits of powerful computational technologies when they are put into practice and having clear principles about who is liable when harms occur. Facial recognition technologies are in this category, and it’s important to understand how to measure their risks and benefits and who is responsible when they fail.

Protecting the public

One of the primary roles of governments is to manage technology risks and protect their populations. The principles the Association for Computing Machinery’s USTPC has outlined have been used in regulating transportation systems, medical and pharmaceutical products, food safety practices and many other aspects of society. The Association for Computing Machinery’s USTPC is, in short, asking that governments recognize the potential for facial recognition systems to cause significant harm to many people, through errors and bias.

These systems are still in an early stage of maturity, and there is much that researchers, government and industry don’t understand about them. Until facial recognition technologies are better understood and properly regulated, their use in consequential applications should be halted.

Article by James Hendler, Professor of Computer, Web and Cognitive Sciences, Rensselaer Polytechnic Institute

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Story by The Conversation

An independent news and commentary website produced by academics and journalists.
