Tuesday, May 12, 2026

Learning Word Vectors for Sentiment Analysis: A Python Reproduction


We automated the evaluation and made the code available on GitHub.

The idea for this article came to me when I tried to reproduce the paper “Learning Word Vectors for Sentiment Analysis” by Maas et al. (2011).

At the time, I was still in my final year of engineering school. The goal was to reproduce the paper, challenge the authors’ methods, and, if possible, compare them with other word representations, including LLM-based approaches.

What struck me was how simple and elegant the method was. In a way, it reminded me of logistic regression in credit scoring: simple, interpretable, and still powerful when used correctly.

I enjoyed reading this paper so much that I decided to share what I learned from it.

I strongly recommend reading the original paper. It will help you understand what is at stake in word representation, especially how to analyze the proximity between two words from both a semantic perspective and a sentiment polarity perspective, given the specific contexts in which those words are used.

At first, the model seems simple: build a vocabulary, learn word vectors, incorporate sentiment information, and evaluate the results on IMDb reviews.

But when I started implementing it, I realized that several details matter a lot: how the vocabulary is built, how document vectors are represented, how the semantic objective is optimized, and how the sentiment signal is injected into the word vectors.

In this article, we will reproduce the main ideas of the paper using Python.

We will first explain the intuition behind the model. Then we will present the structure of the data used in the article, construct the vocabulary, implement the semantic component, add the sentiment objective, and finally evaluate the learned representations using a linear SVM classifier.

The SVM will allow us to measure classification accuracy and compare our results with those reported in the paper.

What problem does the paper solve?

Traditional Bag of Words models are useful for classification, but they do not learn meaningful relationships between words. For example, the words wonderful and amazing should be close because they express similar meaning and similar sentiment. In contrast, wonderful and terrible may appear in similar movie review contexts, but they express opposite sentiments.

The goal of the paper is to learn word vectors that capture both semantic similarity and sentiment orientation.

Data structure

The dataset contains:

  • 25,000 labeled training reviews or documents
  • 50,000 unlabeled training reviews
  • 25,000 labeled test reviews

The labeled reviews are polarized:

  • Negative reviews have ratings from 1 to 4
  • Positive reviews have ratings from 7 to 10

The ratings are linearly mapped to the interval [0, 1], which allows the model to treat sentiment as a continuous probability of positive polarity.
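As a minimal sketch of this mapping (the paper only states that the mapping is linear; sending 1 star to 0 and 10 stars to 1 is our own assumption):

def star_to_sentiment(stars: int) -> float:
    """Linearly map a star rating in [1, 10] to a sentiment score in [0, 1]."""
    return (stars - 1) / 9.0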

aclImdb/
├── train/
│   ├── pos/    "0_10.txt"   -> review #0, 10 stars, very positive
│   │           "1_7.txt"    -> review #1, 7 stars, positive
│   ├── neg/    "10_2.txt"   -> review #10, 2 stars, very negative
│   │           "25_4.txt"   -> review #25, 4 stars, negative
│   └── unsup/  "938_0.txt"  -> review #938, 0 stars, unlabeled
└── test/
    ├── pos/    positive reviews, never seen during training
    └── neg/    negative reviews, never seen during training

We can therefore store each document in a Review class with the following attributes: text, stars, label, and bucket.

Of course, it does not have to be a class specifically named Review. Any object can be used as long as it provides at least these attributes.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Review:
    text: str              # raw review text
    stars: int             # star rating parsed from the filename (0 if unlabeled)
    label: Optional[str]   # "pos", "neg", or None for unsupervised reviews
    bucket: str            # "train", "test", or "unsup"
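To fill these objects, here is a minimal loader sketch. The helper name load_reviews is our own; only the directory layout comes from the dataset:

from pathlib import Path

def load_reviews(root: str = "aclImdb") -> list[Review]:
    """Walk the aclImdb directory tree and build one Review per file."""
    reviews = []
    for bucket in ("train", "test"):
        for label in ("pos", "neg", "unsup"):
            folder = Path(root) / bucket / label
            if not folder.exists():  # test/ has no unsup/ folder
                continue
            for path in folder.glob("*.txt"):
                # filenames look like "<id>_<stars>.txt", e.g. "0_10.txt"
                stars = int(path.stem.split("_")[1])
                reviews.append(Review(
                    text=path.read_text(encoding="utf-8"),
                    stars=stars,
                    label=None if label == "unsup" else label,
                    bucket=bucket,
                ))
    return reviews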

Vocabulary construction

The paper builds a fixed vocabulary by first ignoring the 50 most frequent words, then keeping the next 5,000 most frequent tokens.

No stemming is applied. No standard stopword removal is used. This is important because some stopwords, especially negations, can carry sentiment information.

Before building this vocabulary, we first need to look at the raw data.

We noticed that the reviews are not fully cleaned. Some documents contain HTML tags, so we remove them during the data loading step. We also remove punctuation attached to words, such as ".", ",", "!", or "?".

This is a slight difference from the original paper. The authors keep some non-word tokens because they may help capture sentiment. For example, "!" or ":-)" can carry emotional information. In our implementation, we choose to remove this punctuation and later evaluate how much this decision affects the final model performance.
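Here is a minimal cleaning sketch under these choices; the regular expressions are our own, not the authors’ exact preprocessing:

import re

def clean_text(text: str) -> str:
    """Strip HTML tags and punctuation, then lowercase."""
    text = re.sub(r"<[^>]+>", " ", text)       # drop HTML tags such as <br />
    text = re.sub(r"[.,!?;:\"()]", " ", text)  # drop punctuation marks
    return text.lower()

def tokenize(text: str) -> list[str]:
    """Split cleaned text on whitespace."""
    return clean_text(text).split()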

When working with text data, the next question is always the same:

How should we represent documents and words numerically?

The authors start by gathering all tokens from the training set, including both labeled and unlabeled reviews. We can think of this as putting all the words from the training documents into one large basket.

Then, to represent words in a space where we can train a model, they build a set of words called the vocabulary.

The authors build a dictionary that maps each token, which we will loosely call a word, to its frequency. This frequency is simply the number of times the token appears in the full training set, including both labeled and unlabeled reviews.

Then they select the 5,000 most frequent words, after removing the 50 most frequent ones.

These 5,000 words form the vocabulary V.
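With the tokenizer above, this step fits in a few lines (the function name and signature are our own):

from collections import Counter

def build_vocabulary(reviews: list[Review], n_skip: int = 50, n_keep: int = 5000) -> dict[str, int]:
    """Map each kept word to its column index in R."""
    counts = Counter()
    for review in reviews:
        if review.bucket == "train":  # labeled + unlabeled training reviews
            counts.update(tokenize(review.text))
    # drop the n_skip most frequent words, keep the next n_keep
    kept = [w for w, _ in counts.most_common(n_skip + n_keep)][n_skip:]
    return {word: idx for idx, word in enumerate(kept)}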

Each word in V will correspond to one column of the representation matrix R. The authors choose to represent each word in a 50-dimensional space. Therefore, the matrix R has the following shape:

$$R \in \mathbb{R}^{\beta \times |V|}, \quad \beta = 50, \quad |V| = 5000$$

Each column of R is the vector representation of one word:

$$\phi_w = R_w$$

where $R_w$ denotes the column of R associated with word w.

The goal of the model is to learn this matrix R so that the word vectors capture two things at the same time:

  • Semantic information, meaning words used in similar contexts should be close;
  • Sentiment information, meaning words carrying similar polarity should also be close.

This is the central idea of the paper.

Once the data is loaded and cleaned and the vocabulary is built, we can move on to the construction of the model itself.

The first part of the model is unsupervised. It learns semantic word representations from both labeled and unlabeled reviews.

Then, the second part adds supervision by using the star ratings to inject sentiment into the same vector space.

Semantic component

The semantic component defines a probabilistic model of a document.

Each document is associated with a latent vector θ. This vector represents the semantic direction of the document.

Each word has a vector representation $\phi_w$, stored as a column of the matrix R.

The probability of observing a word w in a document is given by a softmax model:

$$p(w \mid \theta; R, b) = \frac{\exp(\theta^\top \phi_w + b_w)}{\sum_{w' \in V} \exp(\theta^\top \phi_{w'} + b_{w'})}$$

Intuitively, a word becomes likely when its vector $\phi_w$ is well aligned with the document vector θ.
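In code, this distribution is a single softmax over the vocabulary (a minimal sketch; the function name is ours):

import numpy as np

def word_probabilities(theta: np.ndarray, R: np.ndarray, b: np.ndarray) -> np.ndarray:
    """p(w | theta; R, b) for every word in the vocabulary.

    theta : (beta,)      document vector
    R     : (beta, |V|)  word representation matrix
    b     : (|V|,)       per-word bias
    """
    logits = R.T @ theta + b
    logits -= logits.max()  # subtract the max for numerical stability
    p = np.exp(logits)
    return p / p.sum()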

MAP estimation of theta

The model alternates between two steps.

First, it fixes R and b and estimates one θ vector for each document.

Then, it fixes the θ vectors and updates R and b.

The θ vectors are not kept as final parameters. They are temporary document-specific variables used to update the word representations.

To estimate the parameters of the model, the authors use maximum likelihood.

The idea is simple: we want to find the parameters R and b that make the observed documents as likely as possible under the model.

Starting from the probabilistic formulation of a document, they introduce a MAP estimate $\hat{\theta}_k$ for each document $d_k$. Then, by taking the logarithm of the likelihood and adding regularization terms, they obtain the objective function used to learn the word representation matrix R and the bias vector b:

$$\nu \lVert R \rVert_F^2 + \sum_{d_k \in D} \left[ \lambda \lVert \hat{\theta}_k \rVert_2^2 + \sum_{i=1}^{N_k} \log p(w_i \mid \hat{\theta}_k; R, b) \right]$$

which is maximized with respect to R and b. The hyperparameters of the model are the regularization weights (λ and ν) and the word vector dimensionality β.

In this step, we learn the semantic representation matrix. This matrix captures how words relate to each other based on the contexts in which they appear.
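To make the alternation concrete, here is a sketch of its first step: estimating θ for a single document by gradient ascent, with R and b held fixed, reusing word_probabilities from above. The learning rate and iteration count are our own choices, not values from the paper:

import numpy as np

def estimate_theta(counts: np.ndarray, R: np.ndarray, b: np.ndarray,
                   lam: float = 1.0, lr: float = 0.01, n_iters: int = 50) -> np.ndarray:
    """MAP estimate of the document vector theta, with R and b held fixed.

    counts : (|V|,) word counts of one document over the vocabulary
    """
    theta = np.zeros(R.shape[0])
    n_tokens = counts.sum()
    for _ in range(n_iters):
        p = word_probabilities(theta, R, b)  # softmax over the vocabulary
        # gradient of the log-likelihood plus the Gaussian prior on theta:
        # sum of observed word vectors, minus expected word vectors, minus 2*lam*theta
        grad = R @ counts - n_tokens * (R @ p) - 2.0 * lam * theta
        theta += lr * grad  # gradient ascent step
    return theta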

Sentiment component

The semantic model alone can learn that words occur in similar contexts. But this is not enough to capture sentiment.

For example, wonderful and terrible may both occur in movie reviews, but they express opposite opinions.

To solve this, the paper adds a supervised sentiment objective:

$$p(s = 1 \mid w; R, \psi) = \sigma(\psi^\top \phi_w + b_c)$$

The vector ψ defines a sentiment direction in the word vector space. Here, only the labeled data are used.

If a word vector lies on one side of the hyperplane, it is considered positive. If it lies on the other side, it is considered negative.
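This objective is simply a logistic regression on the word vectors. A direct translation (a sketch; the function name is ours):

import numpy as np

def sentiment_probability(phi_w: np.ndarray, psi: np.ndarray, b_c: float) -> float:
    """P(s = 1 | w): probability that word w carries positive sentiment,
    given its vector phi_w, the sentiment direction psi and bias b_c."""
    return float(1.0 / (1.0 + np.exp(-(psi @ phi_w + b_c))))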

They combined the sentiment objective and the semantic part to build the final, full learning objective:

$$\nu \lVert R \rVert_F^2 + \sum_{k=1}^{|D|} \left[ \lambda \lVert \hat{\theta}_k \rVert_2^2 + \sum_{i=1}^{N_k} \log P(w_i \mid \hat{\theta}_k; R, b) \right] + \sum_{k=1}^{|D|} \frac{1}{|S_k|} \sum_{i=1}^{N_k} \log P(s_k \mid w_i; R, \psi, b_c)$$

The first part learns semantic similarity. The second part injects sentiment information. The regularization terms prevent the vectors from growing too large.

$|S_k|$ denotes the number of documents in the dataset with the same rounded value of $s_k$. The weighting $\frac{1}{|S_k|}$ is introduced to combat the well-known imbalance in ratings present in review collections.

Classification and results

Once the word representation matrix R has been learned, we can use it to build document-level features.

The objective is now to classify each movie review as positive or negative.

To do this, the authors train a linear SVM on the 25,000 labeled training reviews and evaluate it on the 25,000 labeled test reviews.

The important question is not only whether the word vectors are meaningful, but whether they help improve sentiment classification.

To answer it, we evaluate several document representations and compare them with the results reported in Table 2 of the paper.

The only thing that changes from one configuration to another is the way each review is represented before being passed to the classifier.

1. Bag of Words baseline

The first representation is a standard Bag of Words. In the paper, this baseline is reported as Bag of Words (bnc). The notation means:

  • b = binary weighting
  • n = no IDF weighting
  • c = cosine normalization

A review, or document, is represented by a vector v of size 5,000, because the vocabulary contains 5,000 words.

For each word j in the vocabulary:

$$v_j = \begin{cases} 1 & \text{if word } j \text{ appears in the review} \\ 0 & \text{otherwise} \end{cases}$$

So this representation only records whether a word appears at least once. It does not count how many times it appears.

Then the vector is normalized by its Euclidean norm:

$$v_{bnc} = \frac{v}{\lVert v \rVert_2}$$

This gives the Bag of Words baseline used to train the SVM.
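A sketch of this baseline, where vocab is the word-to-index mapping built earlier:

import numpy as np

def bow_binary(tokens: list[str], vocab: dict[str, int]) -> np.ndarray:
    """Binary Bag of Words vector over the fixed vocabulary."""
    v = np.zeros(len(vocab))
    for token in tokens:
        idx = vocab.get(token)
        if idx is not None:
            v[idx] = 1.0
    return v

def bow_bnc(tokens: list[str], vocab: dict[str, int]) -> np.ndarray:
    """bnc weighting: binary, no IDF, cosine (L2) normalization."""
    v = bow_binary(tokens, vocab)
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v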

This baseline is strong because sentiment classification often relies on direct lexical clues. Words such as excellent, boring, awful, or great already carry useful sentiment information.

2. Semantic-only word vector representation

The second representation uses the word vectors learned by the semantic-only model.

The authors first represent a document as a Bag of Words vector v. Then they compute a dense document representation by multiplying this vector by the learned matrix:

$$z_{\text{semantic}} = R_{\text{semantic}} \, v$$

where $R_{\text{semantic}} \in \mathbb{R}^{50 \times 5000}$ and $v \in \mathbb{R}^{5000}$, so $z_{\text{semantic}} \in \mathbb{R}^{50}$.

This vector can be interpreted as a weighted combination of the word vectors that appear in the review.

In the paper, when producing document features via the product Rv, the authors use bnn weighting for v. This means:

  • b = binary weighting
  • n = no IDF weighting
  • n = no cosine normalization before projection

Then, after computing Rv, they apply cosine normalization to the final dense vector.

So the final representation is:

$$\bar{z}_{\text{semantic}} = \frac{R_{\text{semantic}} \, v}{\lVert R_{\text{semantic}} \, v \rVert_2}$$

This representation uses semantic information learned from the training reviews, including both labeled and unlabeled documents.
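A sketch of this projection, reusing bow_binary from the baseline above (here R is the matrix learned by the semantic-only model):

import numpy as np

def document_features(tokens: list[str], R: np.ndarray, vocab: dict[str, int]) -> np.ndarray:
    """Project a binary (bnn) Bag of Words vector through R,
    then apply cosine normalization to the dense result."""
    v = bow_binary(tokens, vocab)  # bnn: binary, no IDF, no normalization
    z = R @ v                      # (50,) dense document vector
    norm = np.linalg.norm(z)
    return z / norm if norm > 0 else z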

3. Full semantic + sentiment representation

The third representation follows the same construction, but uses the full matrix $R_{\text{full}}$.

This matrix is learned with both components of the model:

  • the semantic objective, which learns contextual similarity between words;
  • the sentiment objective, which injects polarity information from the star ratings.

For each document, we compute:

$$z_{\text{full}} = R_{\text{full}} \, v$$

Then we normalize:

$$\bar{z}_{\text{full}} = \frac{R_{\text{full}} \, v}{\lVert R_{\text{full}} \, v \rVert_2}$$

The intuition is that $R_{\text{full}}$ should produce document features that capture both what the review is about and whether its language is positive or negative.

This is the main contribution of the paper: learning word vectors that combine semantic similarity and sentiment orientation.

4. Full representation + Bag of Words

The final configuration combines the learned dense representation with the original Bag of Words representation.

We concatenate the two representations to obtain:

$$x = \left[ \bar{z}_{\text{full}} \;\middle|\; v_{bnc} \right]$$

This gives the classifier two complementary sources of information:

  • a dense 50-dimensional representation learned by the model;
  • a sparse lexical representation that preserves exact word-presence information.

This combination is useful because word vectors can generalize across similar words, while Bag of Words features keep precise lexical evidence.

For example, the dense representation may learn that wonderful and amazing are close, while the Bag of Words representation still preserves the exact presence of each word.
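In code, this is a plain concatenation of the two feature vectors defined above:

import numpy as np

def combined_features(tokens: list[str], R_full: np.ndarray, vocab: dict[str, int]) -> np.ndarray:
    """Concatenate the dense projection (50 dims) with the sparse bnc vector (5000 dims)."""
    return np.concatenate([
        document_features(tokens, R_full, vocab),  # dense, learned
        bow_bnc(tokens, vocab),                    # sparse, lexical
    ])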

We then train a linear SVM on the labeled training set and evaluate it on the test set.

This allows us to answer two questions.

First, do the learned word vectors improve sentiment classification?

Second, does adding sentiment information to the word vectors help beyond semantic information alone?

Implementation in Python

We implement the model in five steps:

  1. Load and clean the IMDb dataset
  2. Build the vocabulary
  3. Train the semantic component
  4. Train the full semantic + sentiment model
  5. Evaluate the learned representations with a linear SVM (sketched below)
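For step 5, here is a minimal evaluation sketch with scikit-learn, where featurize stands for any of the document representations described above; the regularization constant C is scikit-learn's default, not a value from the paper:

import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.svm import LinearSVC

def evaluate(train_reviews: list, test_reviews: list, featurize) -> float:
    """Train a linear SVM on labeled training reviews and return test accuracy.

    featurize maps a Review to a feature vector, e.g.
    lambda r: bow_bnc(tokenize(r.text), vocab)
    """
    X_train = np.array([featurize(r) for r in train_reviews])
    y_train = np.array([1 if r.label == "pos" else 0 for r in train_reviews])
    X_test = np.array([featurize(r) for r in test_reviews])
    y_test = np.array([1 if r.label == "pos" else 0 for r in test_reviews])

    clf = LinearSVC(C=1.0)  # linear SVM, as in the paper
    clf.fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))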

The table below shows the nearest neighbors of selected target words in the learned vector space.

For each target word, we report the 5 most similar words according to cosine similarity. The full model, which combines the semantic and sentiment objectives, tends to retrieve words that are close both in meaning and in sentiment orientation. The semantic-only model captures contextual and lexical similarity, but it does not explicitly use sentiment labels during training.
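These neighbors can be computed directly from the columns of R (a sketch; the helper name is ours):

import numpy as np

def nearest_neighbors(word: str, R: np.ndarray, vocab: dict[str, int], k: int = 5) -> list[str]:
    """Return the k words whose vectors have the highest cosine similarity with `word`."""
    words = list(vocab)                                  # index -> word, in vocab order
    cols = R / np.linalg.norm(R, axis=0, keepdims=True)  # L2-normalize each column
    sims = cols.T @ cols[:, vocab[word]]                 # cosine similarity with the target
    best = np.argsort(-sims)                             # most similar first
    return [words[i] for i in best if words[i] != word][:k]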

The table below compares our results with those reported in the paper. For each representation, we train a linear SVM on the labeled training reviews and report the classification accuracy on the test set. This allows us to evaluate how well each document representation performs on the IMDb sentiment classification task.

Our results vs. the results reported in the paper.

The full model comes very close to the result reported in the paper. This suggests that the sentiment objective is implemented correctly.

The largest gap appears in the semantic-only model. This may come from optimization details, preprocessing, or the way document-level features are built for classification.

Conclusion

In this article, we reproduced the main components of the model proposed by Maas et al. (2011).

We implemented the semantic objective, added the sentiment objective, and evaluated the learned word vectors on IMDb sentiment classification.

The model shows how unlabeled data can help learn semantic structure, while labeled data can inject sentiment information into the same vector space.

This is a simple but powerful idea: word vectors should not only capture what words mean, but also how they feel.

While this post does not cover every detail of the paper, we highly recommend reading the authors’ original work. Our goal was to share the ideas that inspired us and the joy we found both in studying the paper and in writing this post.

We hope you enjoy it as much as we did.

Image Credits

All images and visualizations in this article were created by the author using Python (pandas, matplotlib, seaborn, and plotly) and Excel, unless otherwise stated.

References

[1] Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning Word Vectors for Sentiment Analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.

Dataset: IMDb Large Movie Review Dataset (CC BY 4.0).
