Our casual use of facial analysis tools can lead to more sinister applications.


Facial recognition technologies have become more widespread thanks to increasingly sophisticated devices and popular apps. The casual use of facial scanning and analysis features has long-term social impacts.

On December 14, the governments of British Columbia, Alberta and Quebec ordered the facial recognition company Clearview AI to stop collecting, and to delete, images of people obtained without their consent. Discussions of the risks of facial recognition systems that rely on automated facial analysis technologies tend to focus on corporations, national governments and law enforcement. But what deserves just as much concern are the ways in which facial recognition and analysis have become embedded in our daily lives.

Amazon, Microsoft and IBM have stopped providing facial recognition systems to police services after studies showed algorithmic bias causing these systems to disproportionately misidentify people of color, especially Black people.

Facebook and Clearview AI have faced lawsuits and settlements over building databases of billions of face templates without people's consent.

Police in the United Kingdom have come under scrutiny for their use of real-time facial recognition in public spaces. The Chinese government tracks its Uyghur minority population through facial scanning technologies.

And yet, to grasp the scope and consequences of these systems, we must also pay attention to the casual practices of everyday users who apply face scans and analysis in routine ways that contribute to the erosion of privacy and heighten social discrimination and racism.

As a researcher who studies the visual practices of mobile media and their historical links to social inequality, I regularly explore how user actions can create or change norms around issues such as privacy and identity. In this respect, the adoption and use of facial analysis systems and products in our daily lives may be approaching a dangerous tipping point.

Daily face scans

Open source algorithms that detect facial features make facial scanning or recognition an easy add-on for app developers. We already use facial recognition to unlock our phones or pay for goods. Video cameras built into smart home devices use facial recognition to identify visitors, as well as to personalize screen displays and audio reminders. The autofocus feature on cellphone cameras includes face detection and tracking, while cloud photo storage generates themed albums and slideshows by matching and grouping the faces it recognizes in the images we create.
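To see how low that barrier has become, consider the following illustrative sketch. It is not drawn from any particular product; it simply assumes the freely available opencv-python library, its bundled Haar cascade face model and a hypothetical image file named photo.jpg, to show that basic face detection takes only a few lines of code.

    # Illustrative sketch only: minimal face detection with OpenCV's bundled
    # Haar cascade. Assumes opencv-python is installed and photo.jpg exists.
    import cv2

    # Load the pre-trained frontal-face cascade that ships with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    image = cv2.imread("photo.jpg")  # hypothetical input image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # detectMultiScale returns a bounding box (x, y, w, h) for each face found.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Draw a rectangle around each detected face and save the result.
    for (x, y, w, h) in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite("photo_faces.jpg", image)
    print(f"Detected {len(faces)} face(s)")

Recognizing a specific individual, rather than merely detecting a face, requires additional models and training data, but open source libraries lower that barrier in much the same way.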

Facial analysis appears in many applications, including social media filters and accessories that produce effects such as artificially aging faces or animating facial features. Self-improvement and prediction apps for beauty, horoscopes or ethnicity detection also generate advice and conclusions based on facial scans.

But using facial analysis systems for horoscopes, quick selfies or identifying who is on our doorstep can have long-term societal consequences: they can facilitate large-scale surveillance and tracking while sustaining systemic social inequality.

Casual risks

When repeated over time, these low-stakes, quick-reward uses can accustom us to facial scanning more broadly, opening the door to larger-scale systems across different contexts. We have no control over, and little insight into, who runs those systems and how the data is used.

If we are already submitting our faces to automated scrutiny, not only with our consent but with our active participation, then being subjected to similar scans and analysis as we move through public spaces or access services may not seem particularly intrusive.

In addition, our personal use of facial analysis technologies contributes directly to the development and deployment of larger systems intended to track populations, rank clients or build suspect pools for investigations. Companies can collect and share data that links our images to our identities, or fold it into larger datasets used to train AI systems to recognize faces or emotions.

Even if the platform we use restricts such uses, partner products may not abide by the same restrictions. Building new databases of personal data can be lucrative, especially when those databases contain multiple face images of each user or can associate images with identifying information, such as account names.

Pseudo-scientific digital profiling

But perhaps most disturbing of all, our growing embrace of facial analysis technologies feeds into how they are used to determine not only an individual's identity, but also their background, character and social worth.

Many prediction and diagnostic apps that scan our faces to determine our ethnicity, beauty, wellness, emotions and even earning potential draw on the ominous historical pseudosciences of phrenology, physiognomy and eugenics.

These interdependent belief systems relied to varying degrees on facial analysis to justify racial hierarchies, colonization, slavery, forced sterilization and preventive incarceration.

Our use of facial analysis technologies can perpetuate these beliefs and prejudices, implying that they have a legitimate place in society. This complicity can then justify similar automated facial analysis systems for uses such as screening job applicants or determining criminality.

Build better habits

Regulation of how facial recognition systems collect, interpret and distribute biometric data has not kept pace with our everyday use of facial scanning and analysis. There has been some policy progress in Europe and parts of the United States, but stricter regulation is needed.

In addition, we must confront our own habits and assumptions. How might we be putting ourselves and others, especially marginalized populations, at risk by treating such automated scrutiny as trivial?

A few simple adjustments can help us come to grips with the creeping assimilation of facial analysis systems into our daily lives. A good start is to change app and device settings to minimize scanning and sharing. Before downloading apps, research them and read the terms of use.

Resist the short-lived thrill of the latest face-effect fad on social media: do we really need to know what we would look like as Pixar characters? Reconsider smart devices equipped with facial recognition technologies. Be aware of the rights of those whose images might be captured by a smart home device; you should always obtain the express consent of anyone passing in front of the lens.

These small changes, multiplied across users, products and platforms, can protect our data and buy time for further reflection on the risks, benefits and fair deployment of facial recognition technologies.

Stephen Monteiro does not work, consult, own stock or receive funding from any company or organization that would benefit from this article, and has not disclosed any relevant affiliation beyond his academic position.
