GLAAD Blasts AI ‘Gaydar’ Study, Says Purportedly Accurate Algorithm Is ‘Dangerous And Flawed’

A study suggests that there is such a thing as artificial intelligence “gaydar,” and that it is surprisingly accurate at guessing a person’s sexual orientation. Not surprisingly, it has caused a huge uproar in the LGBTQ community, with the advocacy group GLAAD issuing a statement calling on “responsible media” to debunk the study’s findings.

In a new study published earlier in the week, researchers from Stanford University used a computer algorithm to analyze photos of men and women and determine whether the people in the images were gay or straight. According to The Advocate, the algorithm was 81 percent accurate when analyzing single photos of men and 74 percent accurate when reading women’s photos. But when the so-called AI gaydar was fed five photos each of the same person, it was 91 percent accurate with men and 83 percent accurate with women.

Based on the numbers, those are impressive batting averages, especially compared to the accuracy of the human judges who took part in the study. The Stanford researchers noted that humans were only 61 percent accurate when guessing a man’s sexual orientation and 54 percent accurate with female subjects.

According to the researchers, genetic variables might play a role in shaping the appearance of straight and gay individuals, as the AI gaydar based its guesses on the subjects’ facial features and grooming.

“Consistent with the prenatal hormone theory of sexual orientation, gay men and women tended to have gender-atypical facial morphology, expression, and grooming styles,” the researchers wrote.

There were, however, some limitations to the study, according to a report from The Guardian. Chief among them was the fact that the researchers only used photos of Caucasians — no people of color were included among the images. The possibility that subjects might be transgender or bisexual also wasn’t taken into account. Additionally, the researchers warned that their algorithm could be abused, as social media photos and other publicly available images could be used to determine a person’s sexual orientation without their permission.

These abuses could include people using the technology to confirm their suspicions about potentially closeted partners, or teenagers using the AI gaydar to see if their peers are gay or not. Worse, prejudiced governments could use the technology to identify and target LGBT people for persecution, The Guardian added.

Since the Stanford scientists published their study, LGBTQ advocacy groups have reacted very negatively to the research, citing the limitations mentioned above. GLAAD and the Human Rights Campaign issued a joint statement Friday, noting that multiple media outlets had “wrongfully suggested” that there is such a thing as AI gaydar, and asking those publications to duly note the “myriad flaws” in the study’s methodology, including inaccurate assumptions, the lack of non-white subjects, and the fact that the paper has not been peer-reviewed.

“Technology cannot identify someone’s sexual orientation,” said GLAAD chief digital officer Jim Halloran.

“What their technology can recognize is a pattern that found a small subset of out white gay and lesbian people on dating sites who look similar. Those two findings should not be conflated.”

Halloran also warned that the so-called AI gaydar could be weaponized against actual LGBTQ individuals, as well as against heterosexuals wrongfully outed in countries where being gay is still frowned upon.

“At a time where minority groups are being targeted, these reckless findings could serve as weapon to harm both heterosexuals who are inaccurately outed, as well as gay and lesbian people who are in situations where coming out is dangerous.”

[Featured Image by Lisa-Lisa/Shutterstock]