Mainstream adoption of facial recognition can have sinister consequences

Discrimination, privacy violations, and racism are just some of the concerns

Everyday face scans

Open-source algorithms that detect facial features make face analysis or recognition an easy add-on for app developers. We already use facial recognition to unlock our phones or pay for goods. Video cameras incorporated into smart homes use facial recognition to identify visitors, as well as to personalize screen displays and audio reminders. The auto-focus feature on cellphone cameras includes face detection and tracking, while cloud photo storage generates albums and themed slideshows by matching and grouping the faces it recognizes in the images we take.
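To illustrate just how low that barrier is, here is a minimal face-detection sketch in Python using the open-source OpenCV library and one of its bundled Haar cascade classifiers. The image path is a placeholder, and the sketch is illustrative rather than the pipeline any particular app uses.

import cv2

# Load a pre-trained frontal-face detector that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

# Read an image (placeholder path) and convert it to grayscale,
# which is what the detector expects.
image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces: returns one (x, y, width, height) box per face found.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Draw a rectangle around each detected face and save an annotated copy.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("photo_faces.jpg", image)
print(f"Detected {len(faces)} face(s)")

That locating every face in a photo takes barely a dozen lines of freely available code is precisely what makes face scanning such an easy feature to bolt on.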

Many apps use face analysis, including social media filters and accessories that produce effects like artificially aging faces or animating facial features. Self-improvement and forecasting apps for beauty, horoscopes, or ethnicity detection also generate advice and conclusions based on facial scans.

But using face analysis systems for horoscopes, selfies, or identifying who’s on our front steps can have long-term societal consequences: they can facilitate large-scale surveillance and tracking while sustaining systemic social inequality.

Casual risks

When repeated over time, such low-stakes and quick-reward uses can inure us to face-scanning more generally, opening the door to more expansive systems across differing contexts. We have no control over — and little insight into — who runs those systems and how the data is used.

If we already subject our faces to automated scrutiny, not only with our consent but also with our active participation, then being subjected to similar scans and analysis as we move through public spaces or access services might not seem particularly intrusive.

In addition, our personal use of face analysis technologies contributes directly to the development and implementation of larger systems meant for tracking populations, ranking clients, or developing suspect pools for investigations. Companies can collect and share data that connects our images to our identities, or fold it into larger data sets used to train AI systems for face or emotion recognition.

Even if the platform we use restricts such uses, partner products may not abide by the same restrictions. The development of new databases of private individuals can be lucrative, especially when these comprise multiple face images of each user or associate images with identifying information, such as account names.

Pseudoscientific digital profiling

But perhaps most troubling, our growing embrace of facial analysis technologies feeds systems that claim to determine not only an individual’s identity but also their background, character, and social value.

Many predictive and diagnostic apps that scan our faces to determine our ethnicity, beauty, wellness, emotions, and even our potential earning power build on the disturbing historical pseudosciences of phrenology, physiognomy, and eugenics. These interrelated systems depended to varying degrees on face analysis to justify racial hierarchies, colonization, chattel slavery, forced sterilization, and preventative incarceration.

Our use of face analysis technologies can perpetuate these beliefs and biases, implying they have a legitimate place in society. This complicity can then justify similar automated face analysis systems for uses such as screening job applicants or determining criminality.

Building better habits

Regulating how facial recognition systems collect, interpret, and distribute biometric data has not kept pace with our everyday use of face-scanning and analysis. There has been some policy progress in Europe and parts of the United States, but greater regulation is needed.

In addition, we need to confront our own habits and assumptions. How might we be putting ourselves and others, especially marginalized populations, at risk by making such machine-based scrutiny commonplace?

A few simple adjustments may help us address the creeping assimilation of facial analysis systems in our everyday lives. A good start is to change app and device settings to minimize scanning and sharing. Before downloading apps, research them and read the terms of use.

Resist the short-lived thrill of the latest social media face-effect fad — do we really need to know how we’d look as Pixar characters? Reconsider smart devices equipped with facial recognition technologies. Be aware of the rights of those whose image might be captured on a smart home device — you should always get explicit consent from anyone passing before the lens.

These small changes, if multiplied across users, products, and platforms, can protect our data and buy time for greater reflection on the risks, benefits, and fair deployment of facial recognition technologies.

Article by Stephen Monteiro, Assistant Professor of Communication Studies, Concordia University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

