It's no secret that the internet has a problem with hate speech. But automated attempts to clamp down on it have mostly failed, because the concept is too slippery to define for computers. Now, a new way of identifying the subtle linguistic fingerprints of hate speech, and separating it from benign uses of similar words, could finally help people crack down on the worst offenders.

Neither human nor automatic hate-speech detection has been effective. Earlier this year, Google tried assigning comments a "toxic" score on the basis of how similar they were to phrases people had previously deemed offensive. However, the shortcomings overwhelmed the positive effects.
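To see why similarity-based scoring struggles, consider a minimal sketch of the general idea: score a comment by how close it sits to phrases previously flagged as offensive. This is an illustrative toy, not Google's actual system; the bag-of-words representation, the cosine metric, and all function names here are assumptions for demonstration.

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words token counts for a lowercased text (toy representation)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(count * b[token] for token, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def toxicity_score(comment, flagged_phrases):
    """Score a comment by its closest match among previously flagged phrases."""
    v = vectorize(comment)
    return max((cosine(v, vectorize(p)) for p in flagged_phrases), default=0.0)

flagged = ["you are an idiot"]
print(toxicity_score("you are such an idiot", flagged))  # high: word overlap
print(toxicity_score("have a nice day", flagged))        # low: no overlap
```

The weakness the article alludes to is visible even in this sketch: a comment that quotes or discusses an offensive phrase scores just as high as one that uses it as an attack, because surface similarity carries no information about intent or context.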