LGBTQ groups are not happy with the news that researchers have come up with a facial recognition AI (artificial intelligence) that can supposedly identify a person’s sexual orientation.
Then again, the LGBTQ community can’t be blamed for being wary. After all, when science gets LGBTQ people wrong, it gets them horribly, horribly wrong. Think of homosexuality being classified as a mental illness, conversion therapy, and the like.
Facial recognition AI: How it works
Researchers from Stanford University came up with a study on how AI could tag people’s sexual orientation based on their faces.
The researchers used more than 35,000 pictures of self-identified gay and straight people from a public dating website.
Using these pictures, they developed an algorithm that picked up on subtle differences in people’s facial features.
Afterwards, they used the software to guess whether the people were gay or straight based on randomly selected face pictures.
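To make that pipeline concrete, here is a minimal sketch of this kind of system, not the authors’ actual code: it assumes each face photo has already been reduced to a fixed-length numeric embedding (stubbed out with random numbers below) and trains a plain logistic-regression classifier on the labeled examples.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for ~35,000 face photos reduced to fixed-length embeddings.
# Each row is one face; each label is the person's self-reported
# orientation (0 = straight, 1 = gay). A real pipeline would get the
# embeddings from a pretrained face model, not random numbers.
n_faces, n_features = 35_000, 128
X = rng.normal(size=(n_faces, n_features))
y = rng.integers(0, 2, size=n_faces)

# Hold out faces the model never saw during training, mirroring the
# study's step of guessing on randomly selected pictures.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# A simple binary classifier over the face embeddings.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# On random data this hovers around 0.5 (chance level); the study's
# claim is that real face features push it well above that.
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```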
According to the study, the AI was able to distinguish gay and heterosexual men 81 percent of the time. For gay and heterosexual women, it got it right 71 percent of the time.
Human judges, by comparison, got it right only 61 percent of the time for men and 54 percent for women.
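Accuracy figures of this kind are often reported as a pairwise ranking score (the ROC AUC): the model is shown one gay and one straight face and must pick which is which. Whether that is exactly how these percentages were computed is an assumption here; the sketch below, using made-up classifier scores, just shows that the pairwise reading and the AUC are the same quantity.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical classifier output for 1,000 held-out faces: a label
# (1 = gay, 0 = straight) and a score where higher means "more likely gay".
y_true = rng.integers(0, 2, size=1_000)
scores = y_true * 1.0 + rng.normal(size=1_000)  # noisy, partly separable

pos, neg = scores[y_true == 1], scores[y_true == 0]

# Fraction of (gay, straight) pairs where the gay face gets the higher
# score -- "told the two apart X percent of the time" ...
pairwise = (pos[:, None] > neg[None, :]).mean()

# ... which matches the ROC AUC (ignoring ties).
print(f"pairwise accuracy: {pairwise:.3f}")
print(f"ROC AUC:           {roc_auc_score(y_true, scores):.3f}")
```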
The study, which was done by Michal Kosinski and Yilun Wang, was first reported in The Economist and published in the Journal of Personality and Social Psychology.
Their research noted that facial morphology, expressions, and grooming styles were reliable predictors of whether a person was straight or gay.
This is because, the researchers said, gender-atypical features, such as narrower jaws in gay men and larger jaws in lesbians, could be linked to levels of hormone exposure in the womb.
Facial recognition AI: Junk science?
Two LGBTQ advocacy groups have denounced the study, saying it could be used as a weapon against LGBTQ people and to inaccurately out people as gay.
The two groups, GLAAD and the Human Rights Campaign, issued a joint statement saying the research was “dangerous” and that the findings were being used out of context.
“Imagine for a moment the potential consequences if this flawed research were used to support a brutal regime’s efforts to identify and/or persecute people they believed to be gay,” said Ashland Johnson, HRC director of public education and research.
“Stanford should distance itself from such junk science,” Johnson added.
“Technology cannot identify someone’s sexual orientation,” GLAAD Chief Digital Officer Jim Halloran said.
“This research isn’t science or news, but it’s a description of beauty standards on dating sites that ignores huge segments of the LGBTQ community,” Halloran added.
The groups noted that the limitations of the study undermine its conclusions.
One example they cited was that the researchers didn’t take into consideration non-white people. Another was that they examined “superficial characteristics” like weight, hairstyle, and facial expression.
The groups further said they had raised their concerns with the researchers, but to no avail.
Defending facial recognition AI
Kosinski and Wang defended their study, calling the groups’ reaction “knee-jerk.” They said GLAAD and HRC did not seem to have read the study in full or understood the science behind it.
“It really saddens us that the LGBTQ rights groups, HRC and GLAAD, who strived for so many years to protect the rights of the oppressed, are now engaged in a smear campaign against us with a real gusto,” read a statement from the researchers.
The researchers also said the groups’ news release was “full of counterfactual statements,” adding that the groups’ concerns had already been addressed in the study itself.
Kosinski, co-author of the study and an assistant professor at Stanford, said: “One of my obligations as a scientist is that if I know something that can potentially protect people from falling prey to such risks, I should publish it.”
“Rejecting the results because you don’t agree with them on an ideological level, you might be harming the very people that you care about,” Kosinski added.