YouTube has a history of unfairly treating LGBTQ+ content creators in its moderation process, and its new AI-powered technology is being seen as a worrying next step for the tech giant.
Last week, the video platform announced plans on the official YouTube blog to launch new automated software that will “more consistently apply age restrictions” to videos deemed inappropriate for younger viewers.
Prompted by recent concerns about children on the app, the new system relies on machine-learning artificial intelligence software, shifting moderation away from human reviewers toward a more automated process. The problem? YouTube’s automated systems have been accused of singling out LGBTQ+ content and creators simply for existing.
“Machine learning is informed and created by humans, and it’s possible that those biases are inherent in the machine or learned by the machine,” YouTuber Rowan Ellis said in a phone interview with Lifewire. “The bias around [LGBTQ+] content has been clear in the past experiences of [LGBTQ+] YouTubers, and I’ve seen no evidence that anything has been done to stop it.”