Misogyny is still widespread, as recent events have unfortunately shown time and again. Not only are sexism and similar attitudes deeply rooted in many successful video game companies; women also frequently have to read or listen to sexist comments from other users in online forums and video game chats.
Women are also frequently exposed to sexist comments on social media, which can change the way they communicate and even drive some of them to turn their backs on these platforms entirely. This was the finding of a study from 2020. In addition, more than half of women across 22 countries have experienced online harassment at some point.
In a report published this January, Pew Research says that men are, on average, more likely to be victims of online harassment. However, the consequences for women are much more serious: they are far more likely to be victims of stalking or sexual harassment and are more than twice as likely to be very angry after such an encounter. Half of the women surveyed also reported being harassed because of their gender.
To counteract this, social media companies are already using artificial intelligence to identify and remove posts that harass women, threaten them with violence, or the like. This turns out to be quite a difficult undertaking, however, as there are no standardized rules on what exactly counts as a sexist or misogynist comment. One recently published paper identified four different categories, while another reportedly came up with as many as 23. Most research on the topic is also conducted in English, which creates an additional barrier.
Last year, Nina Nørgaard met regularly with seven other people to talk about sexism and the offensive language women face on social media. The group examined thousands of posts on Facebook, Reddit, and Twitter to determine whether or not they constituted sexist attacks, and discussed particularly hard-to-judge cases in its various meetings, with Nørgaard acting as moderator.
Scientists from Denmark hired Nina Nørgaard and her group to review and flag the posts. The group is made up of people of different ages, nationalities, and political views, so that its judgments are not skewed by a single shared worldview. Its members include, for example, a software designer, a climate activist, an actress, and a health care worker. Nørgaard's job was to bring the group to a consensus.
“The great thing is that they often don’t agree. We don’t want tunnel vision, we don’t want everyone to think the same,” explains Nørgaard. Her aim was simply to get those involved talking so that they could “find the answers themselves”. Over time she got to know the individual members better and learned who tends to speak up and who is more reserved, and she tried to make sure that no single person dominated the conversation, since it was meant to be a discussion, not a debate.
Posts containing irony, sarcasm, or jokes turned out to be the hardest to judge. Over time, however, the meetings became shorter and disagreements rarer, which Nørgaard took as a good sign.
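The labeling process described above can be sketched in code. The following is a minimal, hypothetical illustration — the function name and labels are assumptions, not the researchers' actual tooling — of how a group's per-post verdicts might be reduced to a consensus label, with posts lacking a clear majority flagged for discussion:

```python
from collections import Counter

def consensus_label(annotations):
    """Reduce one post's annotator verdicts to a consensus.

    annotations: list of labels such as "sexist" / "not_sexist",
    one per annotator. Returns (label, unanimous); label is None
    when there is no strict majority, i.e. the post needs discussion.
    """
    counts = Counter(annotations)
    top, top_n = counts.most_common(1)[0]
    if top_n * 2 <= len(annotations):  # no strict majority
        return None, False
    return top, top_n == len(annotations)

# Example: eight annotators judging one post
votes = ["sexist"] * 6 + ["not_sexist"] * 2
label, unanimous = consensus_label(votes)
print(label, unanimous)  # sexist False
```

In this sketch, a 6–2 split still yields a consensus label but is marked as non-unanimous; a 4–4 split would be sent back to the group, mirroring how the hardest cases were resolved in the meetings.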
The scientists behind the project consider it a complete success. With the newly acquired data, they were able to train an AI algorithm that correctly recognizes misogyny on popular social media platforms 85 percent of the time. About a year ago, the best algorithms managed only 75 percent.
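The 85 percent figure refers to how often the model's verdict matches the human labels. As a purely illustrative sketch — the data here is invented and this is not the researchers' model — accuracy against the annotators' consensus labels can be computed like this:

```python
def accuracy(predictions, gold_labels):
    """Fraction of posts where the model's verdict matches the human label."""
    assert len(predictions) == len(gold_labels)
    hits = sum(p == g for p, g in zip(predictions, gold_labels))
    return hits / len(gold_labels)

# Hypothetical example: model verdicts vs. human labels on 20 posts.
# The model misses three of the five posts labeled sexist.
gold = ["sexist"] * 5 + ["not_sexist"] * 15
preds = ["sexist"] * 2 + ["not_sexist"] * 18
print(accuracy(preds, gold))  # 0.85
```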
Although the project only looked at social media posts, the approach could be extremely useful in other areas as well. Companies are already using AI to check things like press releases for sexist statements. If women withdraw from online discussions to avoid sexist comments, it could damage the democratic process, explains Leon Derczynski, a co-author of the study:
“If you ignore aggression and threats against half the world’s population, you won’t have good online democratic spaces.”
The results of this study could also prove extremely useful in the gaming sector, where there is still a lot to be done before games can become a safe place for women.
You can find more information about the new AI algorithm, and how grammar and the like factor into it, in this in-depth article from WIRED.