Facial Recognition Is Not Just an Invasion of Privacy; It Is Racist, Too
Originally published by Illumination on Medium
Facial recognition, along with its accompanying mission, is not a new human endeavor. It has been attempted, with many failures, since as early as 1964, when Woodrow Bledsoe, along with Helen Chan and Charles Bisson, tried using the computer to recognize the human face.
According to historians, they were the pioneers of automated face recognition. Bledsoe, in particular, was proud of his work. Still, since a covert intelligence agency financed the project, it received little publicity, and little was publicly published. Based on the limited data available, Bledsoe's initial approach involved manually marking several landmarks on the face, such as the centers of the eyes and the mouth. The markers were then mathematically rotated by the computer to compensate for variation in pose and facial expression.
The distances between reference points on the face were then automatically computed and compared between photographs to determine identity. Given an extensive database of images, however, the obstacle was to extract a small set of candidate records such that one of them matched the probe picture. The significant difficulty, according to Bledsoe, was the considerable variability introduced by head position, facial expression, and aging. A scheme of simple correlation (or pattern matching) of unprocessed optical data, which some researchers often used, is sure to fail where such variability is prominent. Notably, the correlation between two portraits of the same person taken at two different head rotations is very low.
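A minimal sketch of this landmark-distance idea, using hypothetical landmark choices and NumPy in place of Bledsoe's original tooling (his system long predates modern libraries):

```python
import numpy as np

# Hypothetical landmark set; Bledsoe's markers included points such as
# the eye centers and the mouth.
LANDMARKS = ["left_eye", "right_eye", "nose_tip", "mouth_center"]

def normalize(points):
    """Translate landmarks to their centroid and rotate so the eye line is
    horizontal, a 2-D stand-in for the pose correction described above."""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)                  # remove translation
    eye_vec = pts[1] - pts[0]                     # right eye minus left eye
    angle = np.arctan2(eye_vec[1], eye_vec[0])
    c, s = np.cos(-angle), np.sin(-angle)
    return pts @ np.array([[c, -s], [s, c]]).T    # rotate eye line onto x-axis

def pairwise_distances(points):
    """All inter-landmark distances: the feature vector compared between photos."""
    pts = normalize(points)
    n = len(pts)
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i in range(n) for j in range(i + 1, n)])

def similarity(face_a, face_b):
    """Lower score = more similar; identity is decided by the nearest record."""
    return float(np.abs(pairwise_distances(face_a) - pairwise_distances(face_b)).sum())
```

Because the feature vector is built from distances, a rotated or shifted photograph of the same landmarks scores near zero, illustrating why this representation tolerates in-plane pose changes while still failing, as Bledsoe noted, for out-of-plane head rotations.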
The challenges continued until recent decades, when silos of big data and computers with immense processing power became available and were put to work.
Today, the resolution of that technical problem has given way to two major issues: first, individual privacy, and second, discrimination and racism, which has recently gained overwhelming attention. The indiscriminate use of facial recognition for high-stakes profiling is at the center of courtrooms across many countries. For example, not long ago, South Wales Police in the United Kingdom was sued over the discriminatory use of a facial recognition system its administration had given the green light.
Racial Profiling and Facial Recognition
Facial recognition software may be racially biased. Of course, no technology is inherently racist; however, because of how facial recognition algorithms are trained, they are typically more accurate at identifying white faces than the faces of people of African or Asian descent. To be more precise, as mentioned earlier, some of the most notable historical failures to create mathematical formulas that would accurately do the job were merely a matter of the tactical versus strategic alignment of their creators. For instance, according to The Atlantic, recent research suggests that advancing accuracy rates are not distributed equally within a given community. Many current algorithms reveal troubling disparities in precision across race, gender, and other demographics.
A 2011 study by one of the organizers of NIST's vendor tests found that algorithms developed in Asian countries such as South Korea, Japan, and China recognized East Asian faces far better than Caucasian ones. Similarly, but in reverse, algorithms developed in France, Germany, and the United States were significantly better at recognizing Caucasian facial attributes.
The conditions under which an algorithm is constructed, especially the racial makeup of its developers and its test photo databases, most likely have a significant influence on the accuracy of its output. That is why, to overcome such a barrier, it may be strategically feasible for a developer to take a shortcut and create separate database profiles based on race, gender, and ethnicity.
Process of Facial Recognition in Layperson's Terms
To understand how facial recognition functions and how it relates to data security and discrimination, we must first familiarize ourselves with its historical development and fundamentals.
Essentially, face recognition is achieved in two steps.
First, the features of the target subject are extracted and selected.
Second, the extracted image data is classified. Historically, some of the distinct techniques used to perform these tasks include the following:
Some face recognition algorithms identify facial features by extracting landmarks, or topographies, from an image of the person's face. These include the relative position, size, and shape of the eyes, nose, cheekbones, and jaw; the result is then used to search for other images with matching features.
Other algorithms normalize a gallery of face images and then compress the data, saving only the information in each image that is useful for face recognition; a probe image is then compared against the compressed face data.
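Both styles reduce to the same two-step pipeline: extract a feature vector, then classify it against stored records. A minimal sketch of the matching step, with a hypothetical gallery format and an invented distance threshold:

```python
import numpy as np

def identify(probe, gallery, threshold=0.5):
    """Compare the probe's feature vector (step 1 output) against a gallery of
    stored feature vectors, and classify it (step 2) as the closest identity,
    or as unknown (None) when no record is close enough.

    gallery: dict mapping identity name -> feature vector (hypothetical format).
    """
    best_name, best_dist = None, float("inf")
    for name, features in gallery.items():
        dist = float(np.linalg.norm(np.asarray(probe) - np.asarray(features)))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None
```

The threshold is the operational knob discussed later in this article: loosening it produces more wrong matches, tightening it produces more missed ones.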
Recognition algorithms can be divided into two fundamental approaches: geometric and photometric.
The geometric approach looks at distinguishing features, whereas the photometric approach is statistical: it distills an image into values and compares those values with templates to eliminate variance. Algorithms can be further sub-categorized into holistic and feature-based models. The former tries to recognize the face in its totality, while the latter subdivides the face into components, analyzing each feature as well as its spatial location relative to the others.
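A toy illustration of the photometric idea: score two images by normalized correlation after distilling each into zero-mean pixel values. This is a deliberately simplified stand-in for real template matching, not any vendor's method:

```python
import numpy as np

def normalized_correlation(image, template):
    """Photometric-style matching: flatten both images into zero-mean value
    arrays and score their statistical agreement. A score of 1.0 means
    identical up to brightness; lower or negative scores mean disagreement."""
    a = np.asarray(image, dtype=float).ravel()
    b = np.asarray(template, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```

Note how subtracting the mean makes the score ignore uniform brightness changes, which is one reason photometric methods "eliminate variance" as described above.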
The Concept of 3-Dimensional Recognition
A three-dimensional face recognition procedure uses 3-D sensors to capture data about facial contours. The captured data is then used to identify distinctive details of the facial surface, such as the outline of the eye sockets, nose, and chin. One benefit of 3-D face recognition is that, unlike other techniques, it is not affected by changes in lighting, and it can identify a face from a range of viewing angles, including a profile view.
Skin Texture Analysis
Skin texture analysis turns the unique lines, patterns, and spots of a person's skin into a mathematical formula and works much the same way as facial recognition. Adding skin texture analysis can improve face recognition performance by 20 to 25 percent.
Facial Recognition: Combining Different Techniques
As every technique has its strengths and flaws, technology companies have combined traditional 3-D face recognition and skin texture analysis to form recognition systems with higher success rates.
Combined methods have an advantage over other systems in that they are relatively insensitive to changes in expression, including blinking, frowning, or smiling. They can also compensate for mustache or beard growth and the appearance of eyeglasses. Such systems are also more uniform with respect to race and gender.
Incorporation of Thermal Imaging in Facial Recognition Technology
A different form of data extraction for face recognition is the use of thermal or infrared cameras. This procedure lets cameras detect the shape of the head while ignoring accessories such as glasses, hats, or makeup. It can also capture facial imagery in low-light and nighttime conditions without a flash and without exposing the camera's position. However, because of their low sensitivity to detail, thermal cameras are almost always coupled with the other technologies described earlier.
So, Where Does Facial Recognition Become Discriminatory?
As is noticeable from the methodologies described above, there is always a level of profiling involved in the design of every facial recognition algorithm, and it is unavoidable without careful ethical consideration. I foresee that we can avoid the profiling pattern by implementing proper, hence unbiased, deep learning algorithms; however, some entities may bypass that (at least for the time being) because of fiscal restraints and the opportunity to gain a competitive edge. Therefore, not surprisingly, most current facial recognition technologies are flagged for discrimination and discriminatory profiling practices, irrespective of the techniques they use.
For instance, according to a recent report, arrest and incarceration rates across Los Angeles, California, have surged, and the hotlists driving them disproportionately contain subjects of African descent. Further investigation suggests that the algorithms behind those facial recognition technologies may work poorly on Black faces. Moreover, as facial recognition technology is rolled out by law enforcement across the country, profiling and incarceration of law-abiding citizens increase in parallel, with little effort by legislatures to explore and correct such prejudice.
Reportedly, businesses that market facial recognition technology claim that their products are highly efficient and accurate, with reliability of over 95%. In reality, such claims are almost impossible to substantiate, because the facial recognition algorithms adopted by police are generally not required to undergo public or independent examination, nor are they tested for accuracy or bias before being used on ordinary citizens. More bothersome still, the limited testing that has been performed on the most popular facial recognition systems has exposed a pattern of racial bias.
Racial profiling is not coincidental, particularly in public surveillance. That further reinforces the question of why entities like the police, and the vendors they use, have been exempted from disclosing their proprietary algorithms.
Racial profiling is a discriminatory practice often employed by law enforcement officials worldwide to target individuals for suspicion of crime based on the individual’s race, ethnicity, religion, or national origin. Another pattern of racial profiling, ongoing since the September 11th attacks, is the targeting of Muslims, Arabs, and South Asians for detention on minor immigration violations with no connection to the attacks on the World Trade Center. In reality, racial profiling is a longstanding and deeply troubling national problem, despite claims that the United States has entered a “post-racial era.”
Common Utility of Facial Recognition Technology and its Pitfalls
Facial recognition technology has many uses, from preventing retail crime and finding missing persons to tracking school attendance. The market is growing exponentially: according to research, the U.S. facial recognition market is expected to surge from $3.2 billion in 2019 to $7.0 billion by 2024.
The most important uses for the technology are surveillance and marketing, which raises concerns for many people. The leading cause of concern among civilians is the lack of appropriate federal statutes governing facial recognition technology. One issue, for instance, is that studies have shown the technology to be inaccurate at identifying people of color, especially Black women.
With the increasing number of anxieties and privacy concerns surrounding facial recognition software and its applications, cities around the U.S. will face additional dilemmas as they attempt to address these concerns.
As with any other technology, false-negative and false-positive errors are real issues that facial recognition systems need to address. A false negative occurs when the system fails to match a person’s face to an image that is, in fact, in the database; the query erroneously returns zero results. A false positive occurs when the system does return a match for a person’s face, but the match is wrong: when a police officer submits a suspect’s photo, the system mistakenly alerts the officer to a match with someone else’s record.
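To make the distinction concrete, here is a tiny sketch that computes both error rates from similarity scores; all numbers and the 0.8 threshold are invented for illustration:

```python
def match(score, threshold=0.8):
    """A system declares a match when the similarity score clears the threshold."""
    return score >= threshold

def error_rates(genuine_scores, impostor_scores, threshold=0.8):
    """False-negative rate: fraction of genuine pairs (same person) the system
    misses. False-positive rate: fraction of impostor pairs (different people)
    the system wrongly declares a match."""
    fn = sum(not match(s, threshold) for s in genuine_scores) / len(genuine_scores)
    fp = sum(match(s, threshold) for s in impostor_scores) / len(impostor_scores)
    return fn, fp
```

Moving the threshold trades one error for the other, which is why a vendor quoting a single accuracy number, as discussed above, tells you very little on its own.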
A Facial Recognition Algorithm Determines Its Task
Facial recognition technologies are only as good, and as bad, as their algorithms. In other words, like any technology, they do what they are given. For instance, inventors of facial recognition technology are struggling to adapt to a world in which people routinely cover their faces to avoid spreading disease, as we see today with the coronavirus pandemic.
Facial recognition has grown more popular and accurate in recent years as an artificial intelligence technique called deep learning has made computers much better at interpreting images. But some experts say current facial recognition algorithms are generally less reliable when a face is veiled, whether by an obstacle, a camera angle, or a mask, simply because there is less data available for comparative analysis.
Facial Recognition is all about Profiling
At its core, facial recognition is about comparison and matching. That requires weighing common similarities; therefore, irrespective of intention, there will always be a point where profiling must be considered. Nonetheless, it need not come at the expense of individual and civil liberty. That is why, amid overwhelming criticism, some manufacturers, particularly those with fewer partnerships with government agencies, have at least temporarily abandoned their facial recognition projects. For instance, IBM recently quit the facial recognition market over concerns about racial profiling by police, calling on the U.S. Congress for a “national dialogue” about the technology's use in law enforcement. Likewise, Microsoft’s president, Brad Smith, told the Guardian that the company was willingly withholding its facial recognition technology from governments that would use it for mass surveillance.
Facial Recognition Algorithms are Biased
It is the prevailing theory that most facial recognition solutions are biased. It is hard to argue otherwise, since most of these technologies are used in law enforcement and public settings yet are exempt from proper validation and disclosure.
Congressional Democrats are currently probing the FBI and other federal agencies to determine whether surveillance software has been deployed against Black Lives Matter demonstrators, while states including California and New York are weighing laws to ban police use of the technology. Concurrently, major tech corporations are edging away from their artificial intelligence products. For instance, Amazon, after years of pressure from civil rights advocates, recently announced a one-year moratorium on police use of its controversial facial recognition product, Rekognition. IBM, once again, announced its intention to vacate facial recognition research altogether, citing concerns about the human rights implications.
Face surveillance is by far one of the most widespread and dangerous technologies accessible to law enforcement, because, as it stands today, it is discriminatory in a variety of ways. First, the technology itself can be racially biased. Second, police in many U.S. jurisdictions use mugshot databases to classify people with face recognition algorithms. Using mugshot databases for face recognition, however, recycles the racial bias of past policing, supercharging it with 21st-century surveillance technology.
Research has shown that algorithms can be racist.
A 2018 study by Buolamwini and Gebru showed that some facial analysis algorithms misclassified Black women nearly 35 percent of the time while getting it right almost invariably for white men. A subsequent study by Buolamwini and Raji at the Massachusetts Institute of Technology confirmed that these problems persisted with Amazon’s software.
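The kind of audit behind those findings can be sketched, in spirit, as a per-group error tally; the record format and group labels here are hypothetical, not the studies' actual data:

```python
from collections import defaultdict

def audit_by_group(predictions):
    """Per-group misclassification rates, in the spirit of a demographic audit.

    predictions: list of (group, correct) tuples, where `group` is a
    demographic label and `correct` is whether the classifier was right
    (hypothetical evaluation records)."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in predictions:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}
```

An overall accuracy figure would hide exactly the disparity this breakdown exposes, which is why aggregate vendor claims of "95% accuracy" say nothing about performance on any one demographic.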
Corporate Giants Are Reluctant to Be Transparent about Their Facial Recognition Algorithms
The recent rebuff in the U.K. against NEC (the provider of the facial recognition technology) could bolster U.S. activists’ movements and, consequently, spread globally. NEC, which has more than 1,000 contracts around the globe, is one of the principal targets, if not the principal one. NEC’s response to the lawsuit against the South Wales Police has lacked detail; the company simply refused to disclose what data is used to train its algorithms to differentiate one face from another. Allegedly, a test of NEC’s technology in 2018 had a 98% failure rate, and a 2019 audit found an 81% false-positive rate.
A 2019 report by researchers from the Human Rights, Big Data & Technology Project, based at the University of Essex Human Rights Centre, identified notable flaws in the way live facial recognition technology was being used in London by the Metropolitan Police Service. Additionally, they found that Black and minority ethnic people are being falsely flagged and taken in for questioning because police have failed to test how adequately their systems deal with non-white faces.
Amid the developing turmoil around facial recognition technologies, as pointed out earlier, some tech companies are opting out of the market, at least for the time being.
It is my impression that, since various law enforcement procedures are profile-driven, agencies may explicitly demand technology enhanced with secondary input data on human traits. Facial recognition bias and discriminatory behavior may well result from the desire for convenience and lower cost. For example, if a police department already screens via a manual profile of, let’s say, Black people, that profile will potentially carry over into its operational and technical requirements. Law enforcement agencies have used profiling techniques for centuries; thus, it should not be surprising that they request the same pattern of practice from their facial recognition tools.
Once we piece together the business relationship between NEC and the South Wales Police, it becomes much clearer why the company is reluctant to disclose its hidden algorithm. NEC today has more than 1,000 public biometrics deployments across the world, including operations in 20 U.S. states, and the company presumably has many non-disclosure clauses in its agreements. Alternatively, corporations such as IBM abandon their facial recognition projects for the sake of avoiding future questions.
Facial Recognition Is Racist and Will Covertly Target What Its Technical Requirements Dictate
Despite the overwhelming negative publicity, facial recognition technology is a valuable asset to many industries. However, just like any other tool, it can be misused and strategically pivoted by its architects to fulfill different tasks at any time. And when they do so, they will expectedly do whatever is in their power to disguise it from the public. Keeping this in mind, it is also convenient to profile people based on given traits, such as color, sex, race, and deformities. Although that may be attractive to law enforcement, it comes at the expense of those who hold the trait but have done nothing wrong, yet become targets of humiliation and surveillance. It is simply not fair!
Facial recognition is an instrument that follows the mathematically driven commands of its engineers, written to the requirements of the law enforcement agency. Therefore, if an algorithm is found to be racially biased, one must question every player in its development chain, from business requirements to validation and use.
“Facial recognition is as racist as its developers and users”; thus, it is unethical and prejudiced, and it should also be illegal.