Call of Duty Uses AI to Crack Down on “Toxic Speech” in Online Gaming
Call of Duty, the popular shooter video game published by Activision, is taking a stand against “toxic speech” and disruptive behavior in online matches. In a recent blog post, Activision announced that it is using artificial intelligence (AI) to monitor and flag offensive language, hate speech, and harassment in real time.
“Call of Duty’s new voice chat moderation system utilizes ToxMod, the AI-Powered voice chat moderation technology from Modulate, to identify in real-time and enforce against toxic speech—including hate speech, discriminatory language, harassment and more,” the company said.
This move comes as online gaming is becoming a new frontier for censorship. With around 60 million monthly players, mostly men, Call of Duty aims to create a more inclusive and respectful gaming environment.
Activision’s speech policing algorithms will continuously monitor and record player conversations, with no option for players to turn the system off. Violators of the company’s online speech rules, which prohibit derogatory comments based on race, sexual orientation, or gender identity, face penalties ranging from temporary suspensions to permanent bans and stat resets.
The AI-powered speech enforcer is currently in beta testing in North America with Call of Duty: Modern Warfare II and Call of Duty: Warzone. It will be fully implemented worldwide alongside the release of Call of Duty: Modern Warfare III on Nov. 10.
Cracking Down on Toxicity
Activision has been committed to maintaining a respectful gaming environment. Its Code of Conduct explicitly prohibits bullying, harassment, and offensive language based on various factors such as race, gender identity, sexual orientation, and more.
The company’s enhanced speech policing features have already made an impact: since their introduction, more than 1 million accounts have had voice and/or text chat restricted. According to Activision’s data, approximately 20 percent of players did not re-offend after receiving a first warning.
Activision’s decision to use AI to combat “toxic speech” aligns with a broader trend in corporate America to suppress offensive expression. It also reflects a growing appetite among Americans for government restrictions on false information online, as recent polling suggests.
Some 55 percent of Americans in 2023 said the U.S. government should take steps to restrict false information online, even if doing so limits freedom of information, according to a July 20 Pew Research Center survey.
By leveraging AI technology, Call of Duty aims to create a more inclusive and respectful gaming community, where players can enjoy their favorite game without encountering toxic behavior.