You’ve probably seen AI systems write articles or converse with humans. Judging by their performance on comprehension tasks, you might assume that AI understands language almost as well as humans do.
However, new research from a team at Auburn University and Adobe Research shows otherwise. It appears that many AI systems don’t really understand language at all, because of the way natural language processing (NLP) systems are trained.
As part of their study, the researchers tried to get an NLP system to explain its own behavior, for instance why it claimed that different sentences had the same meaning. After many tests, they realized that shuffling the words in a sentence made no difference to the explanations.
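To make the idea concrete, here is a minimal sketch, in Python, of that kind of shuffle test. The `predict` callable stands in for whatever model is under examination; the function names and defaults are illustrative assumptions, not the researchers’ actual code.

```python
import random

def shuffle_words(sentence: str, seed: int = 0) -> str:
    """Return the sentence with its words in a random order."""
    words = sentence.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

def order_sensitivity(predict, sentence: str, n_shuffles: int = 20) -> float:
    """Fraction of shuffles that change the model's prediction.

    `predict` is any callable mapping a sentence to a label.
    A value near 0.0 means the model is effectively ignoring word order.
    """
    original = predict(sentence)
    changed = sum(
        predict(shuffle_words(sentence, seed=i)) != original
        for i in range(n_shuffles)
    )
    return changed / n_shuffles
```

A score near zero across many sentences would reproduce the study’s finding: predictions that barely move even when word order is destroyed.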
The team didn’t look at just one system. They tested several NLP systems built on BERT, a language model developed by Google, which consistently score better than humans on language-comprehension benchmarks.
While working on the study, they found that the systems couldn’t tell when words had been jumbled up, even when the reshuffling produced a completely different meaning.
For example, the systems decided that two opposite sentences – “Does marijuana cause cancer?” and “Does cancer cause marijuana?” – were asking the same question.
After more digging, it appeared that the models only pick up key words in a sentence. They couldn’t care less about order.
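That intuition is easy to reproduce: any representation that reduces a sentence to an unordered bag of words sees the two questions above as identical. The quick check below uses only the standard library and is an illustration of the idea, not the study’s method.

```python
import re
from collections import Counter

def bag_of_words(sentence: str) -> Counter:
    """Lowercase, strip punctuation, and count words, ignoring order."""
    return Counter(re.findall(r"\w+", sentence.lower()))

q1 = "Does marijuana cause cancer?"
q2 = "Does cancer cause marijuana?"

# Both questions contain exactly the same words, so any purely
# order-insensitive comparison treats them as identical.
print(bag_of_words(q1) == bag_of_words(q2))  # True
```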
Depending on the application, this may not be a big deal, and the lapse appears to have a relatively easy fix. One remedy is to force a model to focus on word order, for example by training it to spot grammatical errors, a task where order and syntax matter. Overall, the researchers say the systems need only a few tweaks and they’re good to go once again.
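As a sketch of what that remedy could look like, the snippet below assumes a BERT checkpoint fine-tuned on an acceptability task such as CoLA (the Corpus of Linguistic Acceptability from the GLUE benchmark); the model name is a placeholder, not a published checkpoint. A model trained this way cannot ignore word order, because a shuffled sentence usually stops being grammatical.

```python
from transformers import pipeline

# Placeholder checkpoint: substitute any BERT model fine-tuned on a
# grammatical-acceptability task such as CoLA.
judge = pipeline("text-classification", model="your-org/bert-finetuned-cola")

# A model that spots grammatical errors cannot ignore word order:
print(judge("Does marijuana cause cancer?"))   # expected: acceptable
print(judge("Marijuana cancer does cause?"))   # expected: unacceptable
```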