JeSemE Help

JeSemE is described in detail in our ACL paper "Exploring Diachronic Lexical Semantics with JeSemE"; the following is a simplified overview. Note that the new emotion metric will be described in a forthcoming paper.


VAD Emotions

The Valence–Arousal–Dominance (VAD) model of emotion assumes that affective states can be described along three emotional dimensions: Valence (corresponding to the concept of polarity, see above), Arousal (the degree of excitement or calmness), and Dominance (the feeling of being in or out of control of a social situation). Emotion values for historical texts are calculated by combining word embeddings with contemporary emotion lexicons; see "The Course of Emotion in Three Centuries of German Text—A Methodological Framework" for details. The following illustration shows the three-dimensional VAD space with the positions of several emotion words.
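The combination of embeddings and emotion lexicons can be illustrated with a simplified sketch (the method in the cited paper is more elaborate): a word missing from the lexicon receives the similarity-weighted average of the VAD values of lexicon words that are close to it in embedding space. All vectors and lexicon entries below are toy data.

```python
import numpy as np

# Toy word embeddings; real models have hundreds of dimensions.
embeddings = {
    "joy":     np.array([0.90, 0.10, 0.20]),
    "fear":    np.array([-0.80, 0.30, 0.10]),
    "delight": np.array([0.85, 0.15, 0.25]),  # target word, not in the lexicon
}

# Toy contemporary VAD lexicon: word -> (Valence, Arousal, Dominance).
vad_lexicon = {
    "joy":  (8.2, 5.5, 6.9),
    "fear": (2.8, 6.9, 3.2),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def estimate_vad(word):
    """Similarity-weighted average of lexicon VAD values (negative
    similarities are clipped to zero so weights stay non-negative)."""
    target = embeddings[word]
    weights = {w: max(cosine(target, embeddings[w]), 0.0) for w in vad_lexicon}
    total = sum(weights.values())
    return tuple(
        sum(weights[w] * vad_lexicon[w][dim] for w in weights) / total
        for dim in range(3)
    )

valence, arousal, dominance = estimate_vad("delight")
```

Here "delight" inherits values close to those of "joy", its only positively similar neighbor in the toy lexicon.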

VAD emotions

PPMI and χ2

Both Positive Pointwise Mutual Information (PPMI) and Pearson's χ2 measure how specific a combination of two words is. Both use the frequency of a word \(i\) or a context word \(j\) to calculate the probability of encountering it, i.e., \(P(i)\) and \(P(j)\), respectively. They then compare the expected probability of encountering both words together, \(P(i)P(j)\), with the observed probability \(P(i,j)\). Due to storage constraints, JeSemE provides only values above 0.01.

PPMI favors infrequent context words and can be calculated with: $$PPMI(i,j) := \max\left(\log\frac{P(i,j)}{P(i)P(j)},\, 0\right)$$

χ2 is regarded as more balanced; we use a normalized version calculated with: $$\chi^2(i,j) := \max\left(\log\frac{(P(i,j) - P(i)P(j))^2}{P(i)P(j)},\, 0\right)$$
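Both measures can be computed directly from a word–context co-occurrence matrix. The sketch below follows the two formulas above on toy counts (plain NumPy, not JeSemE's actual implementation):

```python
import numpy as np

# Toy co-occurrence counts: rows = words i, columns = context words j.
counts = np.array([[10.0, 2.0],
                   [1.0, 12.0]])

total = counts.sum()
p_ij = counts / total                  # observed joint probability P(i,j)
p_i = p_ij.sum(axis=1, keepdims=True)  # marginal P(i)
p_j = p_ij.sum(axis=0, keepdims=True)  # marginal P(j)
expected = p_i * p_j                   # expected probability P(i)P(j)

# PPMI(i,j) = max(log(P(i,j) / (P(i)P(j))), 0)
with np.errstate(divide="ignore"):
    ppmi = np.maximum(np.log(p_ij / expected), 0.0)

# Normalized chi^2 = max(log((P(i,j) - P(i)P(j))^2 / (P(i)P(j))), 0)
with np.errstate(divide="ignore"):
    chi2 = np.maximum(np.log((p_ij - expected) ** 2 / expected), 0.0)
```

Cells where the two words co-occur more often than expected get positive scores; all other cells are clipped to zero.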


SVDPPMI

SVDPPMI uses singular value decomposition (SVD) to reduce the dimensionality of a matrix storing PPMI data. It produces word embeddings of similar quality to word2vec embeddings; see Levy et al. (2015) for details. In contrast to word2vec, it is not affected by reliability problems caused by random initialization; see our papers "Bad Company—Neighborhoods in Neural Embedding Spaces Considered Harmful" and "Don't Get Fooled by Word Embeddings—Better Watch their Neighborhood" for details.
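A minimal sketch of the dimensionality reduction, not JeSemE's actual implementation: truncate the SVD of a PPMI matrix and take the left singular vectors, weighted by the square root of the singular values (a common symmetric variant discussed by Levy et al.). The matrix below is toy data.

```python
import numpy as np

# Toy PPMI matrix: rows = words, columns = context words.
ppmi = np.array([[1.2, 0.0, 0.5],
                 [0.0, 0.9, 0.3],
                 [0.4, 0.2, 1.1]])

d = 2  # target embedding dimensionality

# SVD: ppmi = U @ diag(s) @ Vt, singular values sorted in descending order.
u, s, vt = np.linalg.svd(ppmi)

# Keep the d largest components; weight U by sqrt(s).
embeddings = u[:, :d] * np.sqrt(s[:d])
```

Each row of `embeddings` is now a dense d-dimensional word vector that preserves the dominant structure of the sparse PPMI matrix.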



Corpus of Historical American English

The Corpus of Historical American English (COHA) is representative and balanced. It was lowercased during preprocessing. See here for more information.


Deutsches Textarchiv

The Deutsches Textarchiv ('German Text Archive') is a representative (yet only vaguely balanced) corpus of German texts from ca. 1600–1900. It was lemmatized during preprocessing. See here for more information (in German).

Google Books

The Google Books Ngram corpus covers about 6% of all books. We use its English Fiction and German subcorpora. It is unbalanced and known for sampling bias. The English Fiction subcorpus was lowercased during preprocessing; the German subcorpus was lemmatized and lowercased. See here for Google's visualization.

Royal Society Corpus

The Royal Society Corpus (RSC) contains the first two centuries of the Philosophical Transactions of the Royal Society of London. The corpus was lemmatized and lowercased during preprocessing. See here for its homepage.


API

JeSemE provides several APIs for GET requests returning JSON. All of them preprocess query words to match the queried corpus (i.e., lowercasing or lemmatizing them as described above).

Similar Words

Expects corpus, word1 and word2 as parameters.
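A request URL can be assembled as follows; note that the endpoint path `api/similarity` is an assumption for illustration, since only the parameter names are documented above:

```python
from urllib.parse import urlencode

# NOTE: the path "api/similarity" is hypothetical; the documented
# parameters are corpus, word1 and word2.
params = {"corpus": "coha", "word1": "day", "word2": "night"}
url = "http://jeseme.org/api/similarity?" + urlencode(params)
```

The resulting URL can then be fetched with any HTTP client to obtain the JSON response.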

Word Emotion

Expects corpus and word as parameters.
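An example request, again with a hypothetical endpoint path (`api/emotion`), since the text above documents only the parameter names:

```python
from urllib.parse import urlencode

# NOTE: the path "api/emotion" is hypothetical; the documented
# parameters are corpus and word.
params = {"corpus": "dta", "word": "Angst"}
url = "http://jeseme.org/api/emotion?" + urlencode(params)
```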

Typical Context

Expects corpus and word as parameters; contexts can be requested for either PPMI or χ2.
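An example request; both the endpoint path `api/typicalcontext` and the parameter name `method` for selecting the association measure are assumptions, as the text above documents only corpus and word:

```python
from urllib.parse import urlencode

# NOTE: the path "api/typicalcontext" and the parameter "method"
# (PPMI vs. chi^2) are hypothetical; documented parameters are
# corpus and word.
params = {"corpus": "coha", "word": "day", "method": "chi"}
url = "http://jeseme.org/api/typicalcontext?" + urlencode(params)
```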

Relative Frequency

Expects corpus and word as parameters.
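An example request, with the endpoint path `api/frequency` again being a hypothetical placeholder:

```python
from urllib.parse import urlencode

# NOTE: the path "api/frequency" is hypothetical; the documented
# parameters are corpus and word.
params = {"corpus": "rsc", "word": "electricity"}
url = "http://jeseme.org/api/frequency?" + urlencode(params)
```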

Download Models

The models used by JeSemE are available for download. Each ZIP contains CSVs mapping words to IDs and IDs to decade-dependent statistics/word embeddings.
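Reading such a word-to-ID mapping can be sketched with the standard library; the file layout and column names below are hypothetical, as the actual CSVs in the ZIPs may be structured differently:

```python
import csv
import io

# Hypothetical word-to-ID CSV as it might appear in a downloaded model;
# the real column names and layout may differ.
word_csv = io.StringIO("word,id\nday,1\nnight,2\n")

word2id = {row["word"]: int(row["id"]) for row in csv.DictReader(word_csv)}
```

A second lookup table keyed by ID would then link each word to its per-decade statistics or embedding vectors.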