FACEIT, the makers of Minerva, the first artificial intelligence powered admin, can reveal her upgraded capabilities in the fight against toxic behaviour in competitive games. Minerva can now hear and understand voice messages in all languages, detecting toxic behaviour in voice chat, including verbal toxicity and mic spam.
- Last year FACEIT embarked on a journey to reduce toxic behaviour in online multiplayer games by releasing Minerva, an artificial intelligence powered admin, able to detect and take action against toxic messages at infinite scale. According to surveys of FACEIT users, anti-toxicity solutions were one of the most requested features.
- Since launch she has:
- Analysed more than 1.4bn chat messages
- Detected more than 1.9 million toxic messages and issued warnings
- Banned over 100k players
- As a result, the FACEIT platform has seen:
- 21% fewer toxic messages sent
- 62% reduction in seriously offensive messages being sent
- The first iteration of Minerva began by analysing thousands of text chat messages to detect abusive language and toxic behaviour, but there were other forms of toxicity relating to gameplay she was unable to detect, including players acting as obstacles for teammates, staying AFK, deliberate friendly fire, blocking, griefing, and potentially offensive and abusive language via voice channels.
- To detect and take action against negative behaviours in gameplay, FACEIT launched Justice, a community-driven portal that, thanks to the endless effort of many members of the community, has resolved more than 60,000 cases and issued almost 30,000 punishments since its launch in April.
- The FACEIT community also fed back that text detection didn’t go far enough, prompting the development of Minerva’s ability to listen. She is now able to analyse audio data and detect two different types of potential abuse: toxic verbal messages and microphone spamming.
- Minerva is able to analyse full conversations among players during a match, an important aspect that makes the overall process more reliable and less prone to the misleading interpretations that can arise from analysing single, isolated messages.
- In addition to detecting potentially toxic voice messages, Minerva is able to detect and sanction other repetitive, annoying behaviours and sounds that can disrupt teammates’ concentration and make the overall experience less enjoyable, such as screaming or excessively loud music. To achieve this, Minerva reviews the audio of the match and, by applying a CNN (Convolutional Neural Network), recognises a series of negative behaviours and issues either a warning or a ban if the player has been caught multiple times (a rough sketch of this kind of pipeline follows the list).
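FACEIT has not published Minerva’s actual architecture, so the sketch below is only an illustration of the general approach described above. It assumes a PyTorch setup in which match audio is split into spectrogram windows, a small CNN scores each window against made-up classes (verbal toxicity, mic spam), and action is taken only after repeated high-confidence detections across the match rather than from a single isolated clip. All class names, thresholds and layer sizes are placeholder assumptions.

```python
# Illustrative sketch only: not Minerva's real model or thresholds.
import torch
import torch.nn as nn

TOXIC_CLASSES = ("clean", "verbal_toxicity", "mic_spam")  # placeholder labels

class ToxicAudioCNN(nn.Module):
    """Small convolutional classifier over spectrogram windows of match audio."""
    def __init__(self, n_classes: int = len(TOXIC_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> fixed-size feature vector
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec shape: (batch, 1, mel_bins, time_frames)
        return self.classifier(self.features(spec).flatten(1))

def sanction(window_probs: torch.Tensor, threshold: float = 0.8,
             warn_at: int = 2, ban_at: int = 5) -> str:
    """Decide on a whole match's worth of windows, not a single clip:
    repeated high-confidence detections escalate from warning to ban."""
    toxic_hits = int((window_probs[:, 1:] > threshold).any(dim=1).sum())
    if toxic_hits >= ban_at:
        return "ban"
    if toxic_hits >= warn_at:
        return "warning"
    return "no action"

if __name__ == "__main__":
    model = ToxicAudioCNN()
    windows = torch.randn(10, 1, 64, 256)   # 10 placeholder audio windows
    probs = model(windows).softmax(dim=-1)  # per-window class probabilities
    print(sanction(probs))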
To find out more about Minerva please head to https://fce.gg/minerva-gets-ears