
Going Beyond T-SNE: Exposing whatlies in Text Embeddings

by Vincent D. Warmerdam et al.

We introduce whatlies, an open source toolkit for visually inspecting word and sentence embeddings. The project offers a unified and extensible API with current support for a range of popular embedding backends including spaCy, tfhub, huggingface transformers, gensim, fastText and BytePair embeddings. The package combines a domain specific language for vector arithmetic with visualisation tools that make exploring word embeddings more intuitive and concise. It offers support for many popular dimensionality reduction techniques as well as many interactive visualisations that can either be statically exported or shared via Jupyter notebooks. The project documentation is available from
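To make the idea of a "domain specific language for vector arithmetic" concrete, here is a minimal, pure-Python sketch of the operator-overloading pattern such a DSL rests on. The class and method names below are illustrative only, not the actual whatlies API; real embeddings would come from a backend such as spaCy or gensim rather than the toy 2-d vectors used here.

```python
# Illustrative sketch of a vector-arithmetic DSL for embeddings.
# Names (Embedding, project_onto) are hypothetical, not the whatlies API.

class Embedding:
    def __init__(self, name, vector):
        self.name = name
        self.vector = list(vector)

    def __add__(self, other):
        # Element-wise sum; the name records the expression for plot labels.
        return Embedding(f"({self.name} + {other.name})",
                         [a + b for a, b in zip(self.vector, other.vector)])

    def __sub__(self, other):
        # Element-wise difference, e.g. for analogy arithmetic.
        return Embedding(f"({self.name} - {other.name})",
                         [a - b for a, b in zip(self.vector, other.vector)])

    def project_onto(self, other):
        # Scalar projection of self onto other: (self . other) / |other|.
        dot = sum(a * b for a, b in zip(self.vector, other.vector))
        norm = sum(b * b for b in other.vector) ** 0.5
        return dot / norm

# Toy 2-d vectors standing in for real word embeddings.
king  = Embedding("king",  [0.9, 0.8])
man   = Embedding("man",   [0.9, 0.1])
woman = Embedding("woman", [0.1, 0.1])

analogy = king - man + woman       # the classic king - man + woman analogy
print(analogy.name)                # ((king - man) + woman)
print(analogy.vector)              # ~[0.1, 0.8], up to float rounding
```

Because each operation returns a new named `Embedding`, expressions like `king - man + woman` compose naturally and carry a human-readable label into downstream visualisations, which is the convenience the abstract describes.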




Related Research

- Fusing Vector Space Models for Domain-Specific Applications: "We address the problem of tuning word embeddings for specific use cases ..."
- Simple and Effective Dimensionality Reduction for Word Embeddings: "Word embeddings have become the basic building blocks for several natura..."
- ETNLP: A Toolkit for Extraction, Evaluation and Visualization of Pre-trained Word Embeddings: "In this paper, we introduce a comprehensive toolkit, ETNLP, which can ev..."
- Interactive Visualization of Spatial Omics Neighborhoods: "Dimensionality reduction of spatial omic data can reveal shared, spatial..."
- VERB: Visualizing and Interpreting Bias Mitigation Techniques for Word Representations: "Word vector embeddings have been shown to contain and amplify biases in ..."
- Enabling Open-World Specification Mining via Unsupervised Learning: "Many programming tasks require using both domain-specific code and well-..."
- ActUp: Analyzing and Consolidating tSNE and UMAP: "tSNE and UMAP are popular dimensionality reduction algorithms due to the..."

Code Repositories


toolkit to help visualise - what lies in word embeddings

view repo