In 2017, Stanford researchers Michal Kosinski and Yilun Wang published a study claiming that artificial intelligence could predict a person’s sexual orientation with high accuracy using facial images.
The system was trained on over 35,000 online dating profile photos, mostly of white individuals who identified as gay or straight, and the authors reported accuracy of 91% for men and 83% for women when analysing five images per person. Sounds amazing, right? Read on.
The authors claimed their intention was to highlight the potential misuse of such technology, but there was quite the ethical backlash. Rightly so.
So what is the big deal?
At first glance, the study might seem like a provocative but harmless academic exercise. In reality, there could be deeply problematic consequences:
- It violates consent and privacy principles
- It risks reinforcing harmful stereotypes about what it “looks like” to be gay
- It creates a potential tool that could be used in countries where homosexuality is illegal — leading to persecution, imprisonment or violence
Let’s dig in a bit deeper here.
1. No Consent, No Ethics
Using publicly available dating profile images without explicit permission ignores a fundamental ethical principle: informed consent. These individuals never agreed to be part of AI research, let alone a study predicting sexual orientation. As part of my doctoral studies I have to sign up to rigorous ethics standards, and informed consent sits right at the top of the list. I am astounded this study did not have it.
2. Reviving Pseudoscience
The claim that AI can “see” sexual orientation echoes discredited pseudosciences like physiognomy. Such approaches historically justified racism, sexism, and homophobia, and this study risked reviving those ideas under the banner of machine learning. Whether intended or not, this risk should have been considered.
3. Real-World Harm
In over 60 countries, homosexuality is criminalised. In some, it’s punishable by death. The idea that AI could be used to “detect” someone’s sexuality from a photograph is not just speculative — it is a genuine threat to the safety and lives of LGBTQ+ people.
4. Bad, Biased Data
The model was trained primarily on white individuals who identified as either gay or straight. This not only ignores the vast diversity within LGBTQ+ communities, but also reinforces a false idea of universal predictability. It erases bisexual, trans, and non-binary people, and people of colour, altogether. I mean, this is approaching negligence on an epic scale.
5. Inaccurate Interpretation of “Accuracy”
Later analyses suggested the AI was picking up on grooming styles, facial expressions, or head angles rather than innate features. In some cases the model still performed above chance on deliberately blurred images, implying it had learned superficial social cues, not anything meaningful or biological. The headline figures also came from a pairwise test (picking which of two faces, one gay and one straight, was which), which says very little about how reliably any individual would be labelled in the real world, where base rates matter enormously.
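To make that point about “accuracy” concrete, here is a minimal base-rate sketch in Python. The numbers are assumptions for illustration only: it pretends the headline 91% figure works as an ordinary true-positive and true-negative rate (the published figure was a pairwise score, so it does not), and it assumes a screened population in which roughly 7% of men are gay.

```python
# Illustrative base-rate arithmetic; the figures below are assumptions, not
# numbers taken from the study itself.

def positive_predictive_value(sensitivity: float, specificity: float, base_rate: float) -> float:
    """Chance that someone flagged as positive really is positive (Bayes' rule)."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# Assumptions for illustration only: treat 91% as both the true-positive and
# true-negative rate, and assume 7% of the screened population is gay.
ppv = positive_predictive_value(sensitivity=0.91, specificity=0.91, base_rate=0.07)
print(f"Chance a flagged person actually is gay: {ppv:.0%}")  # about 43%
```

Under those assumptions, only around 43% of the men the system flags would actually be gay; most flags would be wrong, which is exactly the “dead wrong” scenario Keenan warns about below.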
Despite their warnings about misuse, the Stanford team appeared to have several serious blind spots of their own, which makes me wonder about their biases.
- Cultural presentation vs innate traits: The model may have simply picked up on cultural patterns, such as how people style their hair or smile in photos.
- Objectivity illusion: Presenting the work as neutral ignored the bias embedded in their dataset and methodology.
- Oversimplification of identity: Sexuality was treated as binary, ignoring its fluidity and spectrum.
- Consent and dignity: LGBTQ+ people, often at greater risk of surveillance, were not given a voice or choice in how their data was used.
- Global consequences: In places where being gay is a death sentence, this kind of research is not just careless — it is potentially deadly.
AI Ain’t Neutral: Garbage In, Garbage Out
This study highlights a broader truth: AI systems inherit the assumptions and biases of their creators. When deployed without due care, they amplify those biases, especially when trained on non-representative or ethically compromised data. As AI becomes more powerful and more embedded in daily life, we must ask not only what can be done, but what should not be done. This study should not have been done. But AI itself isn’t the problem here. Guns don’t kill people, people do; likewise, the problem lies with the people who build and use AI, not the technology itself.
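To show what “garbage in, garbage out” looks like in practice, here is a small illustrative sketch using synthetic data, not the study’s dataset or method. A classifier is trained on data in which an invented “presentation cue” happens to track the label, scores impressively on a test set collected the same biased way, and drops to a coin flip once that artefact is gone.

```python
# Synthetic demonstration: a model can look "accurate" by latching onto a
# spurious cue baked into a biased dataset. Every number here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n: int, cue_tracks_label: bool):
    """Two features: pure noise, plus a 'presentation cue' that may mirror the label."""
    labels = rng.integers(0, 2, size=n)
    noise = rng.normal(size=n)                        # carries no real signal
    if cue_tracks_label:
        cue = labels + rng.normal(scale=0.4, size=n)  # artefact of how the data was collected
    else:
        cue = rng.normal(size=n)                      # cue unrelated to the label
    return np.column_stack([noise, cue]), labels

X_train, y_train = make_data(5000, cue_tracks_label=True)  # biased collection process
X_same, y_same = make_data(1000, cue_tracks_label=True)    # test set with the same bias
X_real, y_real = make_data(1000, cue_tracks_label=False)   # population without the artefact

model = LogisticRegression().fit(X_train, y_train)
print("Accuracy on similarly biased data:", model.score(X_same, y_same))  # roughly 0.9
print("Accuracy without the artefact:   ", model.score(X_real, y_real))   # roughly 0.5
```

Swap the invented “presentation cue” for grooming, glasses, or camera angle and the parallel is clear: a model can appear to “detect” something deep while really memorising how one narrow group happened to present itself online.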
“You’re going down a very slippery slope, if one in 20 or one in a hundred times … you’re going to be dead wrong.”
Thomas Keenan
The Michal Kosinski and Yilun Wang AI sexuality study serves as a critical reminder that technological capability must never come at the expense of human beings. We need to think about the Human in the Loop here.
Rather than asking whether AI can predict personal traits from appearance, we should be asking why we would want it to — and who benefits from those predictions. In many cases, the answers reveal more about our cultural assumptions than about the people we claim to study.
When it comes to identity, appearance, and safety — especially in contexts where being different can get you killed — the stakes are simply too high for careless science, or pseudoscience.
James Stewart