A group of House Democrats plans to introduce legislation that would remove some of the liability protections for tech platforms, on the heels of a Facebook whistleblower’s testimony last week that was highly critical of the social giant’s practices.
The new legislation would target Section 230, a provision of a 1996 law that has given platforms immunity from liability for third-party content.
The bill, to be introduced on Friday, would expose an online platform to liability when it “knowingly or recklessly uses an algorithm or other technology to recommend content that materially contributes to physical or severe emotional injury,” according to the House Energy and Commerce Committee.
Its chairman, Rep. Frank Pallone (D-NJ), said in a statement that platforms like Facebook “continue to actively amplify content that endangers our families, promotes conspiracy theories, and incites extremism to generate more clicks and ad dollars. These platforms are not passive bystanders – they are knowingly choosing profits over people, and our country is paying the price.”
The bill is called the Justice Against Malicious Algorithms Act. It does not apply to search features or algorithms that do not rely on personalization, nor does it cover web hosting or data storage and transfer. Small online platforms with fewer than five million unique monthly visitors or users also would continue to have the liability protection.
The House Energy and Commerce Committee has held a number of hearings on the spread of harmful content on social media and other platforms, including one in March on disinformation. It featured Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai and Twitter CEO Jack Dorsey. In his prepared testimony, Zuckerberg proposed making liability protection “for certain types of unlawful content conditional on companies’ ability to meet best practices to combat the spread of this content.” That would mean platforms would be required to show that they have systems in place for identifying unlawful content and removing it.
That’s a far different proposal from the one offered by the lawmakers: under Zuckerberg’s approach, the threshold would be that platforms have “adequate systems in place,” with a third-party body setting that definition. Zuckerberg argued that platforms “should not be held liable if a particular piece of content evades its detection—that would be impractical for platforms with billions of posts per day—but they should be required to have adequate systems in place to address unlawful content.”
The Facebook whistleblower, Frances Haugen, testified last week before the Senate Commerce Committee, and zeroed in on the role that algorithms play in promoting harmful or extremist content. She has provided a trove of documents to the committee, which has yet to propose its own legislation.
Rep. Anna Eshoo (D-CA), chair of the Energy and Commerce health subcommittee, said in a statement that Haugen showed that “Facebook is knowingly amplifying harmful content and abusing the immunity of Section 230 well beyond congressional intent.”
After Haugen’s testimony, Zuckerberg objected to what he called a “false picture” of the company.
“Many of the claims don’t make any sense. If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place?” he said in response.
The House legislation is co-sponsored by a number of other top Energy and Commerce Committee Democrats. While Republicans have attacked Section 230 as well, their complaint is that platforms have been too aggressive in removing content, which they say has come at the expense of conservative voices. Platforms have pushed back against claims of political bias, and posts from right-leaning figures continue to rank among Facebook’s top-performing content.