Online hate speech is a major problem in our society. While many automatic hate speech detection models exist, some achieving state-of-the-art performance, their decisions are usually hard to explain. Therefore, a recent study on arXiv.org suggests improving model explainability by learning both the decision and the reasons behind it.


Image credit: MikeRenpening | Free image via Pixabay

The created dataset consists of 20K posts from Twitter and Gab, manually classified as hate, offensive, or normal speech. Annotators also selected the target communities mentioned in each post and the parts of the text that justify their decision. The study shows that models which perform well in classification cannot always provide rationales for their decisions, and that including the human rationales during training improves performance and reduces unintended bias against target communities.
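For illustration, a single annotated record could be represented roughly as below. The field names are hypothetical and are chosen only to mirror the three annotation layers described above (class label, target communities, and token-level rationales); they are not the dataset's actual schema.

```python
# Hypothetical sketch of one HateXplain-style record; field names are
# illustrative, not the dataset's actual schema.
example_post = {
    "post_tokens": ["this", "is", "an", "example", "post"],
    # Class label: "hatespeech", "offensive", or "normal"
    "label": "offensive",
    # Communities the annotators marked as targeted in the post
    "targets": ["None"],
    # Token-level rationale mask: 1 where a token was highlighted as
    # justification for the label, 0 elsewhere
    "rationale_mask": [0, 0, 0, 1, 1],
}
```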

Hate speech is a challenging issue plaguing online social media. While better models for hate speech detection are continuously being developed, there is little research on the bias and interpretability aspects of hate speech. In this paper, we introduce HateXplain, the first benchmark hate speech dataset covering multiple aspects of the issue. Each post in our dataset is annotated from three different perspectives: the basic, commonly used 3-class classification (i.e., hate, offensive, or normal), the target community (i.e., the community that has been the victim of hate speech/offensive speech in the post), and the rationales, i.e., the portions of the post on which the labelling decision (as hate, offensive, or normal) is based. We utilize existing state-of-the-art models and observe that even models that perform very well in classification do not score high on explainability metrics like model plausibility and faithfulness. We also observe that models which utilize the human rationales for training perform better in reducing unintended bias towards target communities. We have made our code and dataset public at this https URL
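As a rough illustration of how human rationales could enter training, the sketch below adds an auxiliary term that pushes a model's token-level attention toward the annotated rationale mask, alongside the usual classification loss. This is a minimal, hypothetical sketch (the function name, tensor layout, and the weighting factor `lam` are assumptions), not the authors' exact training procedure.

```python
import torch
import torch.nn.functional as F

def rationale_supervised_loss(logits, attention, labels, rationale_mask, lam=1.0):
    """Classification loss plus an auxiliary attention-supervision term.

    logits:         [batch, num_classes] class scores from the model
    attention:      [batch, seq_len] token attention weights (summing to 1 per post)
    labels:         [batch] gold class indices (hate / offensive / normal)
    rationale_mask: [batch, seq_len] human rationale annotations (0/1 per token)
    lam:            weight of the rationale term (an assumed hyperparameter)
    """
    # Standard cross-entropy for the 3-class decision
    cls_loss = F.cross_entropy(logits, labels)

    # Normalize the human rationale mask into a target attention distribution;
    # posts with no highlighted tokens fall back to a uniform target.
    target = rationale_mask.float()
    target = target / target.sum(dim=-1, keepdim=True).clamp(min=1e-8)
    no_rationale = rationale_mask.sum(dim=-1, keepdim=True) == 0
    uniform = torch.full_like(target, 1.0 / target.size(-1))
    target = torch.where(no_rationale, uniform, target)

    # Cross-entropy between the model's attention and the rationale distribution
    attn_loss = -(target * torch.log(attention.clamp(min=1e-8))).sum(dim=-1).mean()

    return cls_loss + lam * attn_loss
```

In this sketch the rationale term simply encourages the model to attend to the same tokens the annotators highlighted; other ways of using the rationales (e.g., supervising a separate extraction layer) are equally possible.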

Link: https://arxiv.org/abs/2012.10289

