Bad grammar can fool hate speech-detecting AIs, says study

Modern natural language processing techniques can classify text based on individual characters, words or sentences, but when faced with textual data that differs from that used in their training, they fumble.

Published: 16th September 2018 06:46 PM  |   Last Updated: 16th September 2018 06:46 PM


By IANS

LONDON: Machine learning detectors deployed by major social media and online platforms to track hate speech are "brittle and easy to deceive", a study claims.

The study, led by researchers from Aalto University in Finland, found that bad grammar and awkward spelling -- intentional or not -- might make toxic social media comments harder for artificial intelligence (AI) detectors to spot.

Modern natural language processing (NLP) techniques can classify text based on individual characters, words or sentences. When faced with textual data that differs from that used in their training, they begin to fumble, the researchers said.

"We inserted typos, changed word boundaries or added neutral words to the original hate speech. Removing spaces between words was the most powerful attack, and a combination of these methods was effective even against Google's comment-ranking system Perspective," said Tommi Grondahl, a doctoral student at the university.

The team put seven state-of-the-art hate speech detectors to the test for the study. All of them failed.

Among them was Google's Perspective. It ranks the "toxicity" of comments using text analysis methods.

Earlier, it was found that "Perspective" can be fooled by introducing simple typos.

But, Grondahl's team discovered that although "Perspective" has since become resilient to simple typos, it can still be fooled by other modifications such as removing spaces or adding innocuous words like "love".

A sentence like "I hate you" slipped through the sieve and became non-hateful when modified into "Ihateyou love".
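The transformations the researchers describe can be sketched in a few lines. The following is a minimal illustration, not the study's actual code; the function names and the character-swap typo are assumptions chosen to mirror the attacks reported above.

```python
import random

def remove_spaces(text: str) -> str:
    # The word-boundary attack the study found most powerful:
    # "I hate you" -> "Ihateyou"
    return text.replace(" ", "")

def insert_typo(text: str, rng: random.Random) -> str:
    # One simple way to introduce a typo: swap two adjacent characters
    # at a random position.
    if len(text) < 2:
        return text
    i = rng.randrange(len(text) - 1)
    return text[:i] + text[i + 1] + text[i] + text[i + 2:]

def add_neutral_word(text: str, word: str = "love") -> str:
    # Append an innocuous word to dilute the detector's toxicity score.
    return f"{text} {word}"

def evade(text: str) -> str:
    # Combine transformations, as the study did against Perspective.
    return add_neutral_word(remove_spaces(text))

print(evade("I hate you"))  # -> "Ihateyou love"
```

Each transformation preserves the message for a human reader while shifting the character and word statistics away from anything the detector saw in training.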

Hate speech is subjective and context-specific, which renders text analysis techniques insufficient as stand-alone solutions, the researchers said.

They recommend that more attention be paid to the quality of data sets used to train machine learning models -- rather than refining the model design.

The results will be presented at the forthcoming ACM AISec workshop in Toronto.
