This New Way to Train AI Could Curb Online Harassment

Misogyny on the internet too often slips through the filters of content moderators. A new method hopes to inject more nuance into the process.

For about six months last year, Nina Nørgaard met weekly for an hour with seven people to talk about sexism and violent language used to target women in social media. Nørgaard, a PhD candidate at IT University of Copenhagen, and her discussion group were taking part in an unusual effort to better identify misogyny online. Researchers paid the seven to examine thousands of Facebook, Reddit, and Twitter posts and decide whether they evidenced sexism, stereotypes, or harassment. Once a week, the researchers brought the group together…

→ Continue reading at WIRED
