Google has warned that a ruling against it in an ongoing Supreme Court (SC) case could put the entire internet at risk by removing a key protection against lawsuits over content moderation decisions that involve artificial intelligence (AI).

Section 230 of the Communications Decency Act of 1996 currently offers a blanket ‘liability shield’ with regard to how companies moderate content on their platforms. However, as reported by CNN, Google wrote in a legal filing that, should the SC rule in favour of the plaintiff in the case of Gonzalez v. Google, which revolves around YouTube’s algorithms recommending pro-ISIS content to users, the internet could become overrun with dangerous, offensive, and extremist content.

Automation in moderation

Being part of an almost 27-year-old law, already targeted for reform by US President Joe Biden, Section 230 isn’t equipped to legislate on modern developments such as artificially intelligent algorithms, and that’s where the problems start.

The crux of Google’s argument is that the internet has grown so much since 1996 that incorporating artificial intelligence into content moderation solutions has become a necessity. “Virtually no modern website would function if users had to sort through content themselves,” it said in the filing. “An abundance of content” means that tech companies have to use algorithms to present it to users in a manageable way, from search engine results to flight deals to job recommendations on employment websites.

Google also acknowledged that, under existing law, tech companies can legally avoid liability simply by refusing to moderate their platforms at all, but warned that this puts the internet at risk of becoming a “virtual cesspool”.
The tech giant also pointed out that YouTube’s community guidelines expressly disavow terrorism, adult content, violence and “other dangerous or offensive content”, and that it continually tweaks its algorithms to pre-emptively block prohibited content. It also claimed that “approximately” 95% of videos violating YouTube’s ‘Violent Extremism policy’ were automatically detected in Q2 2022.

Nevertheless, the petitioners in the case maintain that YouTube has failed to remove all ISIS-related content, and in doing so has assisted “the rise of ISIS” to prominence.