Starter code for solving real-world text data problems. Includes: Gensim Word2Vec, phrase embeddings, text classification with logistic regression, word counts with PySpark, simple text preprocessing, pre-trained embeddings, and more.

Learn how to use pre-trained GloVe/Word2Vec embeddings with Gensim. It's super simple.

Running the Pre-trained Embeddings Notebook

  1. From the command line, clone this repo:
    git clone <this repo url>
  2. Switch to the pre-trained-embeddings directory of the repo:
    cd nlp-in-practice/pre-trained-embeddings
  3. Start Jupyter Notebook:
    jupyter notebook
  4. Select Pre-trained embeddings.ipynb and run the cells.