We use word embeddings to evaluate the interchangeability of words and how meaning evolves over time. Word embeddings are approximations of a word's meaning derived from millions of guesses made by a neural network. Because these approximations are represented numerically, we can calculate the similarity between words and track how a word's meaning changes over time, across different authors, and across different print locations. The plots visualize these calculations. Search for any word in our vocabulary to see a plot of that word's change. Hover over the word at each decade to see its overall Similarity Score, as well as the nearest other terms in that decade. The word is also shown in context below the graphs.
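To illustrate how numeric representations make word comparison possible, here is a minimal sketch of the standard similarity measure for embeddings, cosine similarity. The tiny four-dimensional vectors and the words chosen are purely hypothetical examples, not data from our models; real embeddings typically have hundreds of dimensions.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors.

    Returns a value between -1 and 1; higher means the words appear
    in more similar contexts, i.e. are more interchangeable.
    """
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical toy embeddings for illustration only.
railway = np.array([0.8, 0.1, 0.3, 0.5])
train   = np.array([0.7, 0.2, 0.4, 0.5])
apple   = np.array([0.1, 0.9, 0.0, 0.2])

print(cosine_similarity(railway, train))   # high: similar contexts
print(cosine_similarity(railway, apple))   # low: dissimilar contexts
```

Tracking change over time works the same way: train a separate model per decade, then compare a word's vector (or its list of nearest neighbors) from one decade's model against the next.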