The U.S. Supreme Court, hearing a case that could reshape the internet, considered on Tuesday whether Google bears liability for user-generated content when its algorithms recommend videos to users.
In the case, Gonzalez v. Google, the family of a terrorist attack victim contends that YouTube violated the federal Anti-Terrorism Act because its algorithm recommended ISIS videos to users, helping to spread the group's message. Nohemi Gonzalez was an American student killed in a 2015 ISIS attack in Paris, and her family's lawsuit challenges the broad legal immunity that tech platforms enjoy for third-party content posted on their sites.
Section 230 of the Communications Decency Act, passed in 1996, protects platforms from legal action over user-generated content, and it also protects them if they choose to remove content. Section 230 has withstood court challenges for nearly three decades, even as the internet exploded.
The attorney for Gonzalez’s family argued that YouTube’s recommendations fall outside the scope of Section 230 because it is the platform’s algorithms, not the third party, that actively pick and choose where and how to present content. In this case, the attorney said, those choices amplified the ISIS message.
“Third parties that post on YouTube don’t direct their videos to specific users,” said the Gonzalez family’s attorney, Eric Schnapper. Instead, he said, those are choices made by the platform.
Justice Neil Gorsuch said he was “not sure any algorithm is neutral. Most these days are designed to maximize profit.”
He and justices on both the right and the left acknowledged the importance of the case, but they also said they found it confusing (most used that exact word) and would prefer that Congress, which wrote the law, be the one to address changing it.
Justice Elena Kagan said all other sectors, including publishers, have rules, and wondered why the internet gets “a pass.” But, she added, “We are a court. We really don’t know about these things. We are not the nine biggest experts on the internet. Isn’t this a case for Congress, not the court?”
Congress has held hearings and repeatedly made noises about Section 230, which has become increasingly controversial as platforms and their power to influence society have grown exponentially. Despite calls to alter or eliminate Section 230, legislation has gone nowhere.
Internet firms swear that removing or limiting 230 protections would destroy the medium.
Would it? Chief Justice John Roberts asked Google’s attorney, Lisa Blatt: “Would Google collapse and the internet be destroyed if Google was prevented from posting what it knows is defamatory?”
“Not Google,” she said, but other, smaller websites, yes.
She said that if the plaintiffs were victorious, the internet would become a zone of extremes: either like The Truman Show, where things are moderated into nothing, or “a horror show,” where nothing is.
Blatt said some kind of curation and targeting has been intrinsic to the internet since its early days in the 1990s, when people first started signing up for subject-specific chat groups. Even then, “the internet was a mess. You had to organize it because it was massive.” Amazon has been targeting for years, she said, telling e-shoppers, “if you bought this, you might also like that.”
European regulators have shown that it is possible to regulate the internet to some extent. Congress also has carved out exceptions. In 2018, it passed a law removing immunity from internet platforms for content dealing with sex trafficking. That content swiftly disappeared, and the web is still standing.
Justice Ketanji Brown Jackson got into it with Blatt about the Section 230 “Good Samaritan” provision that shields internet providers from lawsuits if they remove offensive content. “Doesn’t that suggest Congress wanted internet companies to block offensive content? … The statute is like, ‘We want you to take these things down.’”
“I think a lot of things are offensive that other people think are entertainment,” said Blatt.
SCOTUS is set to hear a separate but similar case involving Twitter on Wednesday.
What content an individual chooses to view is a personal responsibility. However, messages containing heightened violence should be flagged. Obvious intent to commit terrorism should also be flagged. For example, messages from identified terrorist groups should have no presence on the internet.
Time for the Supreme Court not to punt on this case. This is not about profit or monetizing viewer engagement. This is about cruel and hateful ways to incite acts of violence. If kicking this case back to Congress is the opinion of the court, then it will only propagate more hate!
A little accountability in the world of web superpowers would be welcome.
With great power comes great responsibility. It’s not just a line from Spider-Man, it’s true wisdom.
This looks to me like some people want to blame an algorithm for their bad choices. I’d toss the case out and let Congress figure out what to do.
I would not shed a single tear were Google to go “poof.” Good riddance, I say.
If you read the article, you’d realize that Google would be left standing while many other smaller sites would be unable to handle the changes.
According to Google’s attorney, that is. Big corporations always claim that if their monopoly isn’t upheld, all the small players will go bankrupt, which is the big trickle-down lie people somehow still believe.
They don’t say, however, why smaller sites would be unable to deal with it. It seems they said so simply to divert attention.