Twitter said it was looking into why the neural network it uses to generate photo previews apparently chooses to show white people’s faces more frequently than Black faces.
Several Twitter users demonstrated the issue over the weekend, posting images that contained both a Black person’s face and a white person’s face; Twitter’s preview showed the white faces more often.
The informal testing began after a Twitter user tried to post about a problem he had noticed with Zoom’s facial recognition, which was failing to show the face of a Black colleague on calls. When he posted to Twitter, he noticed that it, too, was favoring his white face over his Black colleague’s face.
Users discovered the preview algorithm favored non-Black cartoon characters as well.
When Twitter first began using the neural network to automatically crop photo previews, its machine learning researchers explained in a blog post that they had started with facial recognition to crop images but found it lacking, mainly because not all images have faces:
Previously, we used face detection to focus the view on the most prominent face we could find. While this is not an unreasonable heuristic, the approach has obvious limitations since not all images contain faces. Additionally, our face detector often missed faces and sometimes mistakenly detected faces when there were none. If no faces were found, we would focus the view on the center of the image. This could lead to awkwardly cropped preview images.
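The fallback heuristic that post describes (crop around the most prominent detected face, and default to the image center when no face is found) can be sketched roughly as follows. The OpenCV Haar cascade and the crop dimensions here are illustrative assumptions, not Twitter’s actual detector or pipeline:

```python
# A minimal sketch of the face-then-center cropping heuristic described
# above. OpenCV's Haar cascade stands in for Twitter's face detector;
# it is an illustrative assumption, not their actual model.
import cv2

def crop_preview(image, crop_w=600, crop_h=335):
    """Crop a preview around the most prominent face, or the center."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    h, w = image.shape[:2]
    if len(faces) > 0:
        # "Most prominent" is approximated here as the largest bounding box.
        x, y, fw, fh = max(faces, key=lambda f: f[2] * f[3])
        cx, cy = x + fw // 2, y + fh // 2
    else:
        # No face found: fall back to the center of the image, the
        # behavior the blog post says produced awkward crops.
        cx, cy = w // 2, h // 2

    # Clamp the crop window to the image bounds.
    x0 = min(max(cx - crop_w // 2, 0), max(w - crop_w, 0))
    y0 = min(max(cy - crop_h // 2, 0), max(h - crop_h, 0))
    return image[y0:y0 + crop_h, x0:x0 + crop_w]
```

As the post notes, a missed or spurious detection sends this logic straight to the center crop, which is part of why Twitter moved to a neural network instead.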
Twitter chief design officer Dantley Davis tweeted that the company was investigating the neural network, as he conducted some unscientific experiments with images:
Here’s another example of what I’ve experimented with. It’s not a scientific test as it’s an isolated example, but it points to some variables that we need to look into. Both men now have the same suits and I covered their hands. We’re still investigating the NN. pic.twitter.com/06BhFgDkyA
— Dantley (@dantley) September 20, 2020
Liz Kelley of the Twitter communications team tweeted Sunday that the company had tested for bias but hadn’t found evidence of racial or gender bias in its testing. “It’s clear that we’ve got more analysis to do,” Kelley tweeted. “We’ll open source our work so others can review and replicate.”
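A minimal version of the paired test users ran, and that an open-sourced model would let others replicate, might look like the sketch below. The `crop_choice` hook is hypothetical and stands in for whatever selection function a released model would expose:

```python
# A hedged sketch of the informal paired test: present two portraits in
# randomized positions many times and tally which one the cropper picks.
# `crop_choice` is a hypothetical hook, not a real Twitter API.
import random
from typing import Callable, Sequence, Tuple

def paired_bias_test(
    pairs: Sequence[Tuple[str, str]],
    crop_choice: Callable[[str, str], str],
    trials_per_pair: int = 100,
) -> float:
    """Return the fraction of trials in which each pair's first face wins.

    Positions are shuffled every trial so layout does not confound the
    tally; a cropper with no preference should land near 0.5.
    """
    wins = total = 0
    for face_a, face_b in pairs:
        for _ in range(trials_per_pair):
            top, bottom = random.sample([face_a, face_b], 2)
            chosen = crop_choice(top, bottom)
            wins += chosen == face_a
            total += 1
    return wins / total

if __name__ == "__main__":
    # Demo with a dummy chooser that picks at random, as a baseline.
    dummy = lambda top, bottom: random.choice([top, bottom])
    rate = paired_bias_test([("face_a.jpg", "face_b.jpg")], dummy, 500)
    print(f"first-face selection rate: {rate:.2f}")
```

A single trial proves little, which is why Davis called his own examples unscientific; a tally over many randomized trials is what would distinguish bias from chance.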
Twitter chief technology officer Parag Agrawal tweeted that the model needed “continuous improvement,” adding he was “eager to learn” from the experiments.
This is a very important question. To address it, we did analysis on our model when we shipped it, but needs continuous improvement.
Love this public, open, and rigorous test — and eager to learn from this. https://t.co/E8Y71qSLXa
— Parag Agrawal (@paraga) September 20, 2020