One of the scariest concerns about social media is the idea of algorithmic misinformation — that the recommendation systems built into platforms like Facebook and YouTube could be quietly elevating the most harmful and disruptive content on their networks. But at a Senate hearing Tuesday titled “Algorithms and Amplification: How Social Media Platforms’ Design Choices Shape Our Discourse and Our Minds,” lawmakers didn’t appear any closer to a solution for the real-world violence that platforms like Facebook have internally acknowledged they can cause.
At the top of the hearing, Sen. Chris Coons (D-DE) said “there’s nothing inherently wrong” with how the testifying companies (Facebook, Twitter, and YouTube) use algorithms to keep users engaged on their platforms. He said the committee wasn’t weighing any actual legislation and that the hearing was meant to serve as a listening session between lawmakers and the companies.
It’s a marked contrast with more focused issues like Facebook’s acquisition practices or Apple’s App Store fees, which have drawn swift action from courts and regulators, often set off by congressional fact-finding. But the algorithm issue is thornier and harder to address — and increasingly, there doesn’t seem to be much appetite among lawmakers to take it on.
Congress’ slow walk was particularly notable next to the urgency of the expert panelists, who presented algorithmic disinformation as an existential threat to our system of government. “The biggest problem facing our nation is misinformation-at-scale,” Joan Donovan, research director at Harvard’s Shorenstein Center on Media, Politics and Public Policy, said Tuesday. “The cost of doing nothing is democracy’s end.”
Donovan expressed frustration about the hearing on Twitter afterward, saying lawmakers should have pressed the platforms for more information about the specific mechanisms used to rank content. “The companies should have been answering questions about how they determine what content to distribute and what criteria is used to moderate,” Donovan said. “We could have also gone deeper into the role that political advertising and source hacking plays on our democracy, or the need for curatorial models for information integrity.”
Still, the overall impression was that the federal government’s approach to algorithms hasn’t developed much since 2016, and it doesn’t seem likely to move faster anytime soon. “We are still talking about conversations that we’ve had four years ago about spread of misinformation,” said Tristan Harris, co-founder and president of the Center for Humane Technology.
WHAT IT MEANS
Congress first started hearing arguments about how algorithms amplify extremist content after the 2016 presidential election and the rise of the Trump wing of the Republican Party. Over the last few years, lawmakers have held hearings. They’ve sent letters. But unlike the testy exchanges and deluge of bills we’ve seen around antitrust and content moderation, they’ve steered clear of regulating algorithms, largely opting to politely nudge these companies in the right direction.
It’s something Coons, who chaired Tuesday’s hearing, acknowledged in an interview with Politico last month. “Congress regulates the internet and social media rarely, and when it does, whatever it puts in statute often has ended up being stuck or set in place for years,” he said. “And in an area where technology is moving fairly quickly, sometimes oversight hearings, letters [and] conversations with the leaders of major social media companies can result in those companies demonstrably changing their practices faster than we can legislate.”
This theme was apparent throughout Tuesday’s hearing, and it remains unclear how the committee plans to move forward on regulating social media algorithms.
The closest Congress has come in recent weeks is a bill introduced in the House.
In March, Reps. Anna Eshoo (D-CA) and Tom Malinowski (D-NJ) reintroduced the “Protecting Americans from Dangerous Algorithms Act.” The bill would amend Section 230 of the Communications Decency Act to remove a platform’s liability immunity if its algorithm amplified content related to a civil rights violation. (Critics argue it may do more harm than good.) While the bill has garnered some support in the House following the Capitol riot, largely along party lines, the Senate hasn’t taken it up or offered a solution of its own.
WHAT’S NEXT?
Most likely, more talking. “I’m encouraged to see that these are topics that are broadly of interest and where I believe there could be a broadly bipartisan solution,” Coons said at Tuesday’s hearing. “But I also am conscious of the fact that we don’t want to needlessly constrain some of the most innovative, fastest-growing businesses in the West. Striking that balance is going to require more conversation.”