Twitter is starting a new initiative, Responsible Machine Learning, to assess any “unintentional harms” caused by its algorithms. A team of engineers, researchers, and data scientists across the company will study how Twitter’s use of machine learning can lead to algorithmic biases that negatively impact users.
One of the first tasks is an assessment of racial and gender bias in Twitter’s image cropping algorithm. Twitter users have pointed out that its auto-cropped photo previews seem to favor white faces over Black faces. Last month, the company began testing the display of full images rather than cropped previews.
The team will also look at how timeline recommendations differ across racial subgroups and analyze content recommendations across political ideologies in different countries. Twitter says it will “work closely” with third-party academic researchers, share the results of its analyses, and ask the public for feedback.
It’s not clear how much impact the findings will have. Twitter says they “may not always translate into visible product changes,” instead simply leading to “heightened awareness and important discussions” about how the company uses machine learning.
Twitter’s decision to analyze its own algorithms for bias follows similar moves by other social networks; Facebook, for instance, formed comparable teams in 2020. There’s also ongoing pressure from lawmakers to keep companies’ algorithmic bias in check.
Twitter is also in the early stages of exploring “algorithmic choice,” which could give people more input into what content is served to them. CEO Jack Dorsey said in February that he envisions an “app-store-like view of ranking algorithms,” from which people will be able to choose which algorithms control their feeds.