A Boston company is using machine learning to create what it claims is the world's first voice-activated moderation service that can tell the difference between what's being said and what's being meant.
ToxMod is an attempt to solve a paradox of moderating public spaces on the internet: there aren’t enough human moderators to meet the demand, but algorithms, filters, and reporting systems don’t understand the nuances of speech.
ToxMod’s database allows it to track factors in players’ speech, such as emotion and volume, helping it distinguish between a momentary lapse and a pattern of behavior. It was recently announced as an addition to the 7v7 American football game Gridiron, currently in Steam Early Access.
“Everyone knows that harassment, hate speech, toxicity in voice chat and in gaming is a huge problem. It’s common knowledge,” Carter Huffman, Modulate’s chief technology officer and co-founder, said in a Google Meet interview with Lifewire. “We were able to take the features that we were extracting from this variety of machine learning systems and merge them into a system that took into account all of this expert knowledge that we were learning from the community.”