Evaluating vector-space models of analogy
2017
Vector-space representations provide geometric tools for reasoning about the similarity of a set of objects and their relationships. Recent machine learning methods for deriving vector-space embeddings of words (e.g., word2vec) have achieved considerable success in natural language processing. These vector spaces have also been shown to exhibit a surprising capacity to capture verbal analogies, with similar results for natural images, giving new life to a classic model of analogies as parallelograms that was first proposed by cognitive scientists. We evaluate the parallelogram model of analogy as applied to modern word embeddings, providing a detailed analysis of the extent to which this approach captures human relational similarity judgments in a large benchmark dataset. We find that some semantic relationships are better captured than others. We then provide evidence for deeper limitations of the parallelogram model based on the intrinsic geometric constraints of vector spaces, paralleling classic results for first-order similarity.
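The parallelogram model answers an analogy "a is to b as c is to ?" by completing a parallelogram in the embedding space: it takes the offset b − a, adds it to c, and returns the vocabulary word whose vector is closest to the result. Below is a minimal sketch of this idea using hand-made toy vectors (not trained embeddings); the dictionary `emb` and the helper names are illustrative assumptions, not part of the paper.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy "embedding": hand-made 3-d vectors chosen so that the
# gender and royalty directions are separate axes. Purely illustrative.
emb = {
    "man":   np.array([1.0, 0.0, 0.2]),
    "woman": np.array([1.0, 1.0, 0.2]),
    "king":  np.array([1.0, 0.0, 1.0]),
    "queen": np.array([1.0, 1.0, 1.0]),
    "apple": np.array([0.1, 0.2, 0.0]),
}

def parallelogram(a, b, c, emb):
    """Answer 'a is to b as c is to ?' by the vector-offset rule:
    return the word (other than a, b, c) closest to emb[b] - emb[a] + emb[c]."""
    target = emb[b] - emb[a] + emb[c]
    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(emb[w], target))

print(parallelogram("man", "woman", "king", emb))  # prints "queen"
```

With real trained embeddings the same offset rule is applied over the full vocabulary, typically excluding the three query words, as done in the word2vec analogy evaluations the paper examines.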