AI Gaydar Study Authors Issue Response To ‘Irresponsible’ GLAAD Press Release

The Stanford University researchers who penned the controversial AI “gaydar” study have responded to GLAAD’s statement that dismissed their findings as a dangerous form of “junk science.”

As previously reported by the Inquisitr, the Stanford researchers developed a computer algorithm that analyzes photographs of people's faces to predict whether they are gay or straight. When fed a single picture, the algorithm correctly predicted sexual orientation 81 percent of the time for men and 74 percent of the time for women, but when five pictures of the same person were used, the AI gaydar's accuracy rose to 91 percent for men and 83 percent for women.
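The jump in accuracy from one photo to five reflects a familiar statistical effect: averaging several noisy per-image scores for the same person washes out random error. The sketch below is purely illustrative and is not the Stanford researchers' code; the classifier, its per-image signal strength, and the noise level are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def classify_single_image(true_label: int) -> float:
    """Hypothetical per-image classifier: returns a noisy probability
    that the pictured person belongs to class 1. Stands in for any
    face-based model; the signal and noise values are assumed."""
    signal = 0.65 if true_label == 1 else 0.35  # assumed per-image signal
    return float(np.clip(signal + rng.normal(0, 0.2), 0.0, 1.0))

def classify_person(true_label: int, n_images: int) -> int:
    """Average the per-image scores for one person, then threshold at 0.5."""
    scores = [classify_single_image(true_label) for _ in range(n_images)]
    return int(np.mean(scores) >= 0.5)

def accuracy(n_images: int, trials: int = 20_000) -> float:
    """Estimate accuracy over many simulated people."""
    correct = 0
    for _ in range(trials):
        label = int(rng.integers(0, 2))
        correct += classify_person(label, n_images) == label
    return correct / trials

print(f"1 image : {accuracy(1):.1%}")  # single-image regime
print(f"5 images: {accuracy(5):.1%}")  # averaging reduces per-image noise
```

Running the simulation shows accuracy climbing as more images per person are averaged, which is the same qualitative pattern the study reports, though the specific numbers here depend entirely on the assumed noise model.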

With these findings going viral earlier in the week, activists raised concerns about the study, warning that the algorithm could end up in the wrong hands and be used as a tool to persecute LGBTQ individuals. GLAAD and the Human Rights Campaign teamed up to release a statement in which they referred to the Stanford study as an example of “junk science” and pointed to several issues they said went unaddressed, chiefly the lack of non-white subjects, the lack of verification of the subjects’ age and sexual orientation, and the assumption that people can only be gay or straight, not bisexual.

With GLAAD and the HRC having said their piece on the AI gaydar study, the researchers fired back yesterday, issuing their own response via Google Docs and calling the GLAAD/HRC joint statement “irresponsible” and based on the “poorly-researched opinions of non-scientists.” The researchers dissected each of the advocacy groups’ allegations in turn and added that the groups might have missed one of the key points of the study, which was to warn that people can misuse, and probably already have misused, artificial intelligence technology.

“We think that this shows premature judgment by the individuals behind this press release. They do a great disservice to the LGBTQ community by dismissing our results outright without properly assessing the science behind it, and hurt the mission of the great organizations that they represent.”

Regarding the allegation that the AI gaydar study was not peer-reviewed, the researchers debunked this claim, stating that the paper was reviewed and accepted for publication in the Journal of Personality and Social Psychology. They also stressed that great efforts were made to verify the validity of the data they gathered, adding that there were specific reasons why they focused on only two sexual orientations in their research.

“We did not make any claims related to how many sexual orientations there are. Our study focuses on just two—straight and gay—which were best represented in our dataset.”

The researchers likewise acknowledged that there weren’t enough non-white people in their dataset, which is why they chose to focus on white subjects in the AI gaydar paper.

Speaking to The Guardian, study co-author Michal Kosinski, an assistant professor at Stanford, said that he was “perplexed” by the comments made by GLAAD and the HRC, again stressing that the research had a broader goal of warning people about how AI could be abused and why tighter privacy regulations are needed. He added that his critics’ efforts to discredit his work could prove counterproductive, as they could convince people to ignore that broader goal.

“Rejecting the results because you don’t agree with them on an ideological level … you might be harming the very people that you care about,” said Kosinski.

For the time being, Kosinski remains fiercely protective of the AI gaydar algorithm; he has yet to release a version to the public and reportedly declined The Guardian’s request to test it. He also told the publication that he was initially unsure whether to publish the study at all. As Kosinski related, he expected that people might get upset by the findings and was concerned that the study could “give some bad guys some ideas,” but in the end he chose to push forward, knowing that private companies and governments already use similar software for their own purposes.

[Featured Image by Marc Bruxelle/Shutterstock]