Western Academia Helps Build China’s Automated Racism

Charles Rollet:

Last summer, a respected U.S. academic journal about data mining published a study titled “Facial feature discovery for ethnicity recognition”, authored by four professors in China and one in Australia. The study found that an effective way for facial recognition systems to automatically predict the ethnicity of minorities in China was to focus on specific, T-shaped regions of their faces. To reach this conclusion, the researchers took over 7,000 photographs of 300 Uyghur, Tibetan, and Korean students at Dalian Minzu University in northeastern China.

The study, which received funding from Chinese government foundations, attracted little attention when it was published, but went viral at the end of May when PhD student Os Keyes tweeted out its abstract, writing: “TIL [today I learned] there’s a shitton of computer vision literature in 2017-2018 that COINCIDENTALLY tries to build facial recognition for Uyghur people. How. Curious.” Keyes’ post was retweeted over 500 times.

The study sparked concern for good reason. China’s government is waging a well-documented mass surveillance and internment campaign against the Uyghurs, a predominantly Muslim people in the country’s far western region of Xinjiang, where around one million of them have been detained in “re-education” camps. From facial recognition cameras in mosques to mass DNA collection and iris scans, biometrics are being deployed in Xinjiang to track Uyghurs and other minorities on an unprecedented scale. Most of China’s billion-dollar facial recognition startups now sell ethnicity analytics software for police to automatically distinguish Uyghurs from others.

Despite this, academic papers that refine facial recognition techniques to identify Uyghurs are being published in U.S. and European academic journals and presented at international computer science conferences. China’s largest biometrics research conference, last held in Xinjiang in 2018, included prominent U.S. artificial intelligence (AI) researchers as keynote speakers, including one from Microsoft. One paper at the conference, co-authored by local police, discussed ways to find “terrorism” and “extreme religion” content in Uyghur script.

Separately, Imperial College London is hosting an open facial recognition competition sponsored in part by DeepGlint, a Chinese AI startup that advertises its Uyghur ethnicity recognition capabilities to police on its Chinese website, where it boasts of several Xinjiang security projects. The competition’s organizer stated that he was not aware of DeepGlint’s role in tracking Uyghurs and said he would not accept funding from DeepGlint in the future.