Overcoming Poor Word Embeddings with Word Definitions

Publication Date: March 8, 2021

Event: arXiv

Reference: https://arxiv.org/abs/2103.03842

Authors: Christopher Malon (NEC Laboratories America, Inc.)

Abstract: Modern natural language understanding models depend on pretrained subword embeddings, but applications may need to reason about words that were never or rarely seen during pretraining. We show that examples that depend critically on a rarer word are more challenging for natural language inference models. Then we explore how a model could learn to use definitions, provided in natural text, to overcome this handicap. Our model’s understanding of a definition is usually weaker than a well modeled word embedding, but it recovers most of the performance gap from using a completely untrained word.
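The core idea of the abstract — supplying a word's definition as ordinary natural text so an NLI model can reason about a rare or unseen word — can be illustrated with a small sketch. This is not the paper's exact setup: the off-the-shelf model name, the "definition prepended to the premise" input format, and the example sentences below are assumptions for illustration only.

```python
# Sketch: feed a word definition to a pretrained NLI model as plain text,
# so a rare word (poorly covered by subword embeddings) gets usable context.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"  # assumed off-the-shelf NLI model, not the paper's
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def nli_with_definition(premise, hypothesis, definition=None):
    # Prepend the definition to the premise so the model reads it as
    # ordinary context; one possible way to supply a definition in text.
    if definition:
        premise = f"{definition} {premise}"
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    labels = ["contradiction", "neutral", "entailment"]  # roberta-large-mnli label order
    return labels[logits.argmax(dim=-1).item()]

# "quokka" is the kind of rarer word whose embedding may be poorly modeled.
print(nli_with_definition(
    premise="A quokka hopped across the trail.",
    hypothesis="A small animal crossed the trail.",
    definition="A quokka is a small wallaby native to Western Australia.",
))
```

As the abstract notes, understanding gained from such a textual definition is usually weaker than a well-modeled embedding, but it recovers most of the gap relative to leaving the word's embedding untrained.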

Publication Link: https://arxiv.org/pdf/2103.03842.pdf