A collage of a woman wearing a headscarf taking a selfie and a man wearing glasses looking at a screen; both their images are distorted in reflection

No facing away: Why India’s facial recognition system is bad news for minorities

Facial recognition technology is severely limited by racial bias. As India seeks to build one of the world’s largest face databases, activists fear its impact on minorities beyond black and white.

On a hot May 2021 afternoon in India, when Hyderabad activist SQ Masood stepped out to get some paperwork done, he didn’t expect the police to stop and question him. Nor did he expect them to photograph him without consent.


At the time of the incident, Masood was passing through Shahran Market, an area home to many Muslim residents in Hyderabad, the capital of the south Indian state of Telangana. His father-in-law was with him.

Profiled Picture

“There were 8 to 10 police officers there randomly stopping people and questioning them. Quite a few people saw them and turned back. I knew I wasn’t doing anything wrong and wasn’t afraid. So I didn’t turn back,” Masood recalled.


A constable stopped Masood and asked him to take his mask off in order to photograph him. Masood wasn’t violating lockdown restrictions; he was out well within the 6 am to 3 pm window during which Hyderabad authorities had eased the lockdown. So he found this odd.

“I said I won’t take off my mask until you tell me why. There were two of them. They moved back a bit, and photographed me with my bike and my mask on. I think they were asking other people as well. I was in a rush to get back so I’m not sure,” said Masood. “When I returned, I realized why I’d probably been photographed.”

In 2020, the Indian government approved a plan to build the National Automated Facial Recognition System (NAFRS or AFRS). Led by the National Crime Records Bureau (NCRB), the system was originally intended to extract facial biometrics from video and CCTV footage and match them against photographs in existing databases.

Aiding and abetting rights violations

Across the world, facial recognition technology has played a role in wrongful arrests, intrusive surveillance and crackdowns on protests. Its use is now outlawed in 13 US cities, including San Francisco and Boston. Regulators in Europe are also rethinking the indiscriminate use of facial recognition systems in public spaces. However, India is moving ahead.


Most privacy activists argue that the use of this technology violates human and digital rights.


Anushka Jain, associate counsel (surveillance and transparency) at Internet Freedom Foundation (IFF), an independent organization that advocates for digital privacy and rights in India, told Unbias The News:

“Research by people who have worked in these areas has shown that the use of facial recognition technology can never be reconciled with human rights.”

The Central government of India appears to be leaning heavily on facial recognition for law enforcement and other purposes, in the absence of a data privacy law.


In December 2019, hundreds of thousands of Indian citizens came out to protest the Citizenship Amendment Act, a new citizenship law believed to be discriminatory towards Muslims and other marginalized communities. During the riots that followed (dubbed a targeted pogrom against the Muslim minority), Delhi Police used a facial recognition system in over 100 of the 1,818 arrests they made.


At another protest against agricultural reforms (accused of favouring corporations over farmers) at Delhi’s historic Red Fort, facial recognition technology was used along with CCTV footage to arrest over 200 protestors.

Erroneous exclusion, coerced inclusion

Other instances include the use of facial recognition to authenticate identity for public food grain distribution systems and other welfare programs. Access to subsidized food rations, fertilizers, cooking gas, cash transfers and other social welfare benefits is also governed through a digital, biometric identification system known as UID (unique identification) or Aadhaar.


Internet Freedom Foundation’s Rohin Garg pointed out how this can lead to exclusion from state-funded welfare programs:

“If you're a migrant worker who works in construction, and you go to the ration shop, you have to press your thumb there to authenticate identity. Your finger might be so calloused that your fingerprint might have worn off from use, and you will be denied ration. So that is an issue.”

For Garg, “coerced inclusion” is also an issue raised by this tech. “What if I don’t want to provide my face to the government? A lot of airports are using facial recognition technology to verify passengers instead of the usual flight tickets they used to hand out. What if you want to opt out of something like that? There’s no alternative available, and I would call this coerced inclusion,” he concluded.

Historically racist

There is strong evidence of various facial recognition systems displaying racial and gender bias, leading to false matches and exclusion.


As Masood put it: “Usually, these practices [of stopping or questioning citizens] have been happening in Old Hyderabad, where you will find more poor citizens, Muslims, and Dalits. This sort of thing doesn’t happen in the new city [of Hyderabad]. They wouldn’t dare ask the people there to take off their masks.”


As MIT’s Gender Shades project showed, facial recognition systems developed by IBM, Microsoft and Face++ displayed relatively high accuracy overall but faltered on certain genders and skin tones. The systems, with overall accuracy of 87.9% to 93.7%, identified male faces 8.1% to 20.6% more accurately than female faces, and lighter-skinned faces 11.8% to 19.2% more accurately than darker-skinned faces. IBM has since worked on improving its system, after digital activist Joy Buolamwini pointed out the bias.
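The disparity Gender Shades reported is essentially the output of a disaggregated accuracy audit: the same predictions scored separately for each demographic group. Below is a minimal sketch of what such an audit computes; the records are entirely made up for illustration and stand in for a real labelled benchmark of face images.

```python
# A toy bias audit in the spirit of Gender Shades: compare a classifier's
# accuracy across demographic subgroups. All records below are invented;
# a real audit would use a labelled benchmark of face images.

from collections import defaultdict

# Each record: (subgroup label, ground truth, model prediction)
results = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned female", "female", "female"),
    ("darker-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),   # a misclassification
    ("darker-skinned female", "female", "female"),
    # ... thousands more records in a real benchmark
]

totals = defaultdict(int)
correct = defaultdict(int)
for group, truth, prediction in results:
    totals[group] += 1
    correct[group] += int(truth == prediction)

accuracy = {g: correct[g] / totals[g] for g in totals}
for group, acc in sorted(accuracy.items(), key=lambda kv: kv[1]):
    print(f"{group}: {acc:.1%} accurate on {totals[group]} samples")

# The headline finding is the *gap* between the best- and worst-served groups,
# not the overall average, which can look high even when one group fares badly.
print(f"accuracy gap: {max(accuracy.values()) - min(accuracy.values()):.1%}")
```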

Basis of the bias

“It is important to look at how this bias develops,” said IFF’s Anushka Jain.


“This bias develops because the data sets on which these algorithms are developed are themselves biased. So if you get a technology that has been developed on data sets that are predominantly white, it will not be able to identify people of color in India either.”


She added:

“And even if these systems are being developed in India, not everyone in India has a similar skin tone. There are differences in color, which have to be taken into account. Just because we don’t have skin tones as disparate as in the US does not mean that facial recognition technology is automatically more accurate in India.”

“Possible, not positive” matches

Not everyone agrees, however, that facial recognition systems are biased. Michael Furia, a law enforcement detective and analyst certified in Adobe Photoshop, told Unbias The News, “To say facial recognition technology is biased is ridiculous to me. I would never claim to have a positive match on someone if I wasn’t positive and, even then, we say it is possible, not positive.”

“The idea that Asians or women are more likely to be wrong is not accurate. It is just more difficult to distinguish facial identifiers between Asians and women because of make-up and beauty alterations in mugshots such as eyebrows tattooed on, unusual haircuts, and so on. Let’s face it: you don’t see blonde Asians with freckles and sunburn too often. You don’t see the majority of women sporting beards and trimmed facial hair as often as men,” he said.


Law enforcement worldwide has largely resisted the idea of regulating or banning facial recognition technology.

Targeting the vulnerable

Global investigations, reports and documentaries on technology indicate minority groups are the worst affected by invasive tech in the hands of majoritarian governments.


Talking about the time when police photographed him on the street, Masood said, “My apprehension is that if they didn’t use my photograph to generate a challan [a document issued by the police upon a violation of traffic rules], why did they take a photograph at all? Have they deleted it or stored it in some database? If they’ve stored it, what will they do with it? Which other databases will they link it to, and who will they share it with? For what purpose will they use it?”

Building a database is critical to the National Crime Records Bureau’s plan to deploy the Automated Facial Recognition System (AFRS) across the country.


In a 172-page Request for Proposal released by the NCRB in 2019 (and since removed but obtainable in cache), the originally desired specifications for such a system are laid out in detail: “The system shall be able to broadly match a suspect/criminal photograph with database created using photograph images available with Passport, CCTNS, ICJS and Prisons, Ministry of women and child development (KhoyaPaya) State or National Automated Fingerprint Identification System or any other image database available with police/other entity. Match suspected criminal face from pre-recorded video feeds obtained from CCTVs deployed in various critical identified locations, or with the video feeds received from private or other public organization’s video feeds.”

An excerpt from the original 2019 “Request for Proposal to Procure National Automated Facial Recognition System (AFRS)” from the NCRB
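For a sense of what the RFP is describing, here is a minimal sketch of a 1:N face search, assuming face embeddings have already been extracted by some model. The embeddings below are random stand-ins, and the gallery size, record names and similarity threshold are all hypothetical; this is not the NCRB’s actual system, only the general shape of such a search.

```python
# Sketch of a 1:N face search: a probe photograph is compared against a
# gallery of enrolled images and the closest faces are returned as
# *candidate* matches, not identifications. Embeddings here are random
# stand-ins for the output of a real face-recognition model.

import numpy as np

rng = np.random.default_rng(0)

def embed(image) -> np.ndarray:
    """Stand-in for a face-embedding model: returns a unit-length vector."""
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

# Gallery of enrolled faces (e.g. passport or prison photographs).
gallery = {f"record_{i:04d}": embed(None) for i in range(1000)}

def search(probe_embedding: np.ndarray, threshold: float = 0.6, top_k: int = 5):
    """Return up to top_k gallery records whose cosine similarity to the
    probe exceeds the threshold, ranked from most to least similar."""
    scores = {rid: float(probe_embedding @ emb) for rid, emb in gallery.items()}
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(rid, score) for rid, score in ranked[:top_k] if score >= threshold]

candidates = search(embed(None))
print(candidates or "no candidate above threshold")
```

Everything turns on the threshold and on what is done with the ranked list: a low threshold produces many “possible” matches for police to chase, a high one misses people, and a biased underlying model distributes both kinds of error unevenly across groups. Whether a hit on such a list is treated as a lead or as grounds for action is a policy choice, not a property of the code.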

In response to legal demands by IFF, the original RFP was removed and replaced with one that excludes references to using facial images from CCTV video feeds. However, the vagueness of the new language still leaves the door open for video footage to be used, according to IFF.

The potentially broad scope of the program is chilling, particularly in a country where law enforcement has openly committed atrocities against Muslims, and has repeatedly displayed a communal and casteist attitude.


Not surprisingly, Masood expressed concern over where his photograph could end up: “Telangana police have developed an app called TSCOP that every constable has, and they’ve linked different databases to it. Databases of prisoners, history sheeters and others I don’t know of. If my photograph ends up in that database, what category will it fall under? I don’t know that and it worries me, especially since I’m Muslim and being a Muslim in India . . . well, you know how it is.”

Investigative tool or surveillance weapon?

Civil and digital rights activists globally have been calling for a complete ban on facial recognition systems. Anushka Jain from India’s Internet Freedom Foundation told Unbias The News that while the organization too calls for a complete ban, it understands why that might not be feasible in India, and is willing to work with the government on building safeguards and regulating the use of this technology.


But not everyone wants a ban.

Michael Furia told Unbias The News, “I feel banning facial recognition is similar to telling a detective not to use a computer to conduct an investigation. Like a computer, facial recognition is not a must-have tool but it saves days, weeks, or months of investigating.”

“Facial Recognition is never telling or giving you permission to lock someone up. It is merely a lead telling you that the person of interest looks almost identical (in most cases) to the suspect of the crime and you may want to speak to that person or keep an eye on him until you develop probable cause. I can’t see how that is a violation of anyone’s rights and am very angered to think the public wouldn’t want to have such a valuable tool at law enforcement’s hands when there are so many shootings and even terroristic acts in the modern world,” he added.

Law enforcement and governments remain at loggerheads with privacy and digital rights advocates over the use of facial recognition systems, but repeated human rights violations across the world show that this technology comes at a steep cost.


The question is: can we afford it?

