Riot Games and Ubisoft, two of the biggest multiplayer game developers, have joined forces to tackle the rising problem of toxic behavior in video games with AI.
According to their official statement, both organizations are working on an AI capable of detecting toxicity in video games and acting on it. The system is expected to recognize abusive voice and text chat and moderate it autonomously, without a human in the loop. This should curb toxicity far faster than relying on human moderators, which is a much more time-consuming process.
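To illustrate the general idea, here is a toy sketch of fully automated chat moderation. Everything here is invented for illustration: the keyword list, the scoring function, and the threshold. The actual system Riot and Ubisoft describe would rely on machine-learning models trained on large in-game datasets, not keyword matching.

```python
# Toy sketch of automated chat moderation (illustrative only).
# A real system would use a trained ML classifier, not a keyword list.

TOXIC_TERMS = {"idiot", "trash", "uninstall"}  # hypothetical examples

def toxicity_score(message: str) -> float:
    """Fraction of words in the message that match known toxic terms."""
    words = message.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in TOXIC_TERMS)
    return hits / len(words)

def moderate(message: str, threshold: float = 0.25) -> str:
    """Redact messages whose score crosses the threshold,
    with no human in the loop."""
    if toxicity_score(message) >= threshold:
        return "[message removed by automated moderation]"
    return message

print(moderate("gg well played"))            # passes through unchanged
print(moderate("you are trash, uninstall"))  # gets redacted
```

The appeal of this kind of pipeline is exactly what the article describes: every message is scored and handled instantly, instead of waiting in a queue for a human moderator.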
“That’s why Riot and Ubisoft are teaming up on a tech partnership to develop a database gathering in-game data to better train AI-based preemptive moderation tools that detect and mitigate disruptive behavior in-game. Any data gathered that will be able to identify a person will be removed before sharing.” read Riot’s official statement.
Between them, Riot and Ubisoft have developed a wide range of multiplayer games where toxicity remains prevalent to this day. Despite the moderation tools already in place, toxicity is increasing at an alarming pace; human moderation teams are effective but slow, and they often come up short.
Companies as big as Riot and Ubisoft hold a wealth of in-game data that could make an AI highly efficient at detecting toxic behavior. If they succeed, we may see other organizations replace their moderation teams with similarly trained, high-functioning AI.
There is no timeline yet for a launch, or even a preview, but given the alarming rate at which toxicity is rising, it will likely arrive soon. This would be a big step toward making games free of toxicity, and players who mistreat their teammates may think twice before lashing out.