Vector Similarity Search: From Basics to Production

# Semantic Similarity
# Vector Embeddings
# Vector Similarity

August 11, 2022
by Samuel Partee

This post was written in collaboration with our sponsors from Redis.

Introduction

Search capability is ingrained in our daily lives. Arguments are commonly ended with the conclusion, “just google it.” Users have come to expect that nearly every application and website provides some type of search functionality. With effective search becoming ever more relevant (pun intended), finding new methods and architectures to improve search results is critical for architects and developers. Starting from the basics, this blog post will describe AI-powered search capabilities within Redis that utilize vector embeddings created by deep learning models.

Vector Embeddings

What is a vector embedding? Simply put, vector embeddings are lists of numbers that can represent many types of data.

Vector embeddings are quite flexible. Audio, video, text, and images can all be represented as vector embeddings. This quality makes vector embeddings the Swiss Army knife of the data scientist’s toolkit.

To explain why embeddings provide such utility, let’s look at prior methods for handling categorical values in tabular data. Data scientists sometimes utilize methods like one-hot encoding to transform categorical features into numerical values. These encodings create a column for every category. A value of 1 means the item belongs to the category specified by that column; inversely, a value of 0 signifies that it does not.

For example, consider book genres: “Fiction,” “Nonfiction,” and “Biography.” Each of these genres can be encoded into one-hot vectors; however, such vectors are very sparse, as books usually belong to only a couple of genres. In such an encoding, zeros quickly outnumber ones, and for a category like book genres this sparsity only gets worse as more genres are added to the dataset.
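To make this concrete, here is a small illustrative sketch with pandas; the book titles and single-genre assignments are made up for the example:

```python
import pandas as pd

# Three hypothetical books, each tagged with a single genre
books = pd.DataFrame({
    "title": ["Book A", "Book B", "Book C"],
    "genre": ["Fiction", "Nonfiction", "Biography"],
})

# One column per genre; 1 marks membership, 0 marks absence
one_hot = pd.get_dummies(books["genre"], dtype=int)
print(one_hot)
#    Biography  Fiction  Nonfiction
# 0          0        1           0
# 1          0        0           1
# 2          1        0           0
```

Even in this tiny example there are twice as many zeros as ones, and every new genre adds another mostly-zero column.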

Sparsity can present challenges for ML models. With each new genre, the size of the encoded representation grows, and hence the dataset becomes more computationally expensive to utilize.

For book genres, or any categorical data with a relatively small number of categories, we may be able to get away with simple one-hot encoding. However, what about the entire English language? Such encoding methods become impractical with a vocabulary of that size.

Enter vector embeddings.

Vector embeddings present a fixed-size representation that does not grow with the number of categories in the data. The vector created by a model, usually something like 384 floating-point values, is a significantly denser representation than encoding methods like one-hot encoding. This means more information is packed into fewer bytes, which in turn is computationally cheaper to work with. As you’ll read later, these dense representations can be used for a multitude of purposes, such as reverse image search, chatbots, Q&A, and recommendation systems.

Creating Vector Embeddings

In order to understand how vector embeddings are created, a brief introduction to modern Deep Learning models is helpful.

Machine Learning models cannot consume unstructured data directly. In order for a model to understand text or images, we must transform them into numerical representations. Prior to Machine Learning, such representations were often created “by hand” through feature engineering.

With the advent of Deep Learning, non-linear feature interactions in complex data are learned by the model instead of being engineered manually. When an input traverses through a Deep Learning model, new representations of that input data are created in different shapes and sizes. Each layer often focuses on a different aspect of the input. This aspect of Deep Learning, “automatically” generating feature representations from inputs, forms the foundation of how vector embeddings are created.

For example, consider the famous ResNet model trained on the ImageNet dataset. ResNet is a Convolutional Neural Network (CNN) commonly used for image-related tasks. In this case, ResNet is trained to predict which of 1000 classes the object in an image belongs to.

During training, ResNet captures feature information present in the image by passing it through a number of convolutional, pooling, and fully connected layers. The layers capture features like edges, lines, and corners and group them into “buckets” that are passed to the following layer. Because of the space-invariant quality of CNNs, it doesn’t matter where an edge or line appears in the image; these features will always be mapped to the same bucket. These representations become successively smaller through the layers of the model until a fully connected layer of 1000 floating-point values is produced as the output. Each value represents one of the 1000 classes, and the higher the value, the greater the probability that the object in the image belongs to that class.

ResNet, and other image classification models like it, answers the question: “What type of object is in this image?” However, these classifications are less useful for answering prompts such as: “What images are similar to this image?” For this question, we need to compare images to one another. Despite not being trained specifically for this task, ResNet is still useful because it can capture a dense representation of an image.

Simply put, CNNs, and other models like them, learn useful representations of data in order to perform tasks like image classification. These representations can be extracted while an input passes through the layers of a model. The extracted layer, also referred to as a latent space, is usually a layer close to the output of the model. In the figure above, this could be the layer with 768 or 500 hidden units. The extracted layer, or latent space, provides a dense representation packed with feature information that is computationally feasible to use for tasks like visual similarity search.

This is our vector embedding.
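As an illustrative sketch, one way to extract such an embedding from a pre-trained ResNet-50 with PyTorch and torchvision (assuming torchvision ≥ 0.13 for the weights API; the image path is a placeholder) is to drop the final classification layer and keep the pooled features:

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Load a pre-trained ResNet-50 and drop its final classification layer,
# leaving the 2048-dimensional pooled features as the output
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
embedder = torch.nn.Sequential(*list(resnet.children())[:-1])
embedder.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg").convert("RGB")  # placeholder image path
with torch.no_grad():
    embedding = embedder(preprocess(image).unsqueeze(0)).flatten()

print(embedding.shape)  # torch.Size([2048])
```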

A wealth of pre-trained models exist that can easily be used for creating vector embeddings. The Hugging Face Model Hub contains many models that can create embeddings for different types of data. For example, the all-MiniLM-L6-v2 model is hosted and runnable online; no expertise or installation is required.

Packages like sentence_transformers, which builds on Hugging Face Transformers, provide easy-to-use models for tasks like semantic similarity search, visual search, and many others. To create embeddings with these models, only a few lines of Python are needed:
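For example, a minimal sketch that embeds the three sentences used in the example that follows, using the all-MiniLM-L6-v2 model mentioned above:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "That is a happy dog",
    "That is a very happy person",
    "Today is a sunny day",
]

# encode() returns one 384-dimensional vector per sentence
embeddings = model.encode(sentences)
print(embeddings.shape)  # (3, 384)
```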

Vector Embeddings for Semantic Similarity Search

Semantic Similarity Search is the process by which pieces of text are compared in order to find which contain the most similar meaning. While this might seem easy for an average human being, languages are quite complex. Distilling unstructured text data down into a format that a Machine Learning model can understand has been the subject of study for many Natural Language Processing researchers.

Vector embeddings provide a method for anyone, not just NLP researchers or data scientists, to perform semantic similarity search. They provide a meaningful, computationally efficient, numerical representation that can be created by pre-trained models “out of the box.” Below is an example of semantic similarity search that uses vector embeddings created with the sentence_transformers library shown above.

Let’s take the following sentences:

– “That is a happy dog”

– “That is a very happy person”

– “Today is a sunny day”

Each of these sentences can be transformed into a vector embedding. Below, a simplified representation highlights the position of these example sentences in 2-dimensional vector space relative to one another. This is useful for visually gauging how effectively our embeddings represent the semantic meaning of the text. More on that below.

Assume we want to compare these sentences to “That is a happy person”. First, we create the vector embedding for the query sentence.
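Continuing the sketch above, this is just another call to encode():

```python
# Embed the query sentence with the same model used for the dataset
query_embedding = model.encode("That is a happy person")
```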

Next, we need to compare the distance between our query vector embedding and the vector embeddings in our dataset.

There are many ways to calculate the distance between vectors, each with its own benefits and drawbacks for semantic search, but we will save that discussion for a separate post. Some of the most common distance metrics are shown below.
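For reference, the Euclidean distance, inner product, and cosine similarity between two n-dimensional vectors u and v can be written as:

```latex
d(u, v) = \sqrt{\sum_{i=1}^{n} (u_i - v_i)^2}
\qquad
\mathrm{IP}(u, v) = \sum_{i=1}^{n} u_i v_i
\qquad
\cos(u, v) = \frac{u \cdot v}{\lVert u \rVert \, \lVert v \rVert}
```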

For this example, we will use cosine similarity, which measures the cosine of the angle between two vectors in their inner product space.

In Python, this looks like
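A minimal sketch with NumPy (the cosine_similarity helper name is just illustrative):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```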

Running this calculation between our query vector and the other three vectors in the plot above, we can determine how similar the sentences are to one another.

As you might have assumed, “That is a very happy person” is the most similar sentence to “That is a happy person”. This example captures only one of many possible use cases for vector embeddings: Semantic Similarity Search.

The full Python code to run this entire example is listed below.
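A self-contained sketch, assuming NumPy and sentence_transformers are installed (the model and sentences are the ones used above):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "That is a happy dog",
    "That is a very happy person",
    "Today is a sunny day",
]
query = "That is a happy person"

# Embed the dataset and the query sentence
embeddings = model.encode(sentences)
query_embedding = model.encode(query)

# Score each sentence against the query
for sentence, embedding in zip(sentences, embeddings):
    score = cosine_similarity(query_embedding, embedding)
    print(f"Query: {query}")
    print(f"Sentence: {sentence}")
    print(f"Similarity: {score:.4f}\n")
```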

After installing NumPy and sentence_transformers, running this script prints the similarity between the query sentence and each of the three example sentences.

The results should line up with what you see from the Hugging Face Inference API for the chosen model.

Vector Embeddings for Search in Production

Now, I’d love to say “that’s it” and tell you to go build this capability into your platform. However, as many engineers realize every day, development and production are two different beasts. After learning a bit more, you may start asking questions like:

  1. Where do I store these vectors?
  2. What should the API look like?
  3. How do I combine this with my traditional search capability like filtering?

Luckily, the good folks at Redis decided to figure out these questions for you and built Vector Similarity Search (VSS) functionality into the existing RediSearch module. This essentially turns Redis into a low-latency vector database.

The VSS capability is built as a new feature of the RediSearch module. It allows developers to store a vector just as easily as any other field in a Redis hash. It provides advanced indexing and search capabilities required to perform low-latency search in large vector spaces, typically ranging from tens of thousands to hundreds of millions of vectors distributed across a number of machines.

Redis now supports two types of vector indexing:

1. Flat

2. Hierarchical Navigable Small World (HNSW)

as well as three distance metrics:

1. L2 – Euclidean distance

2. IP – Inner product

3. COSINE – Cosine similarity (like the example shown above)

Below is an example of creating an index with redis-py after the vectors have been loaded into Redis.
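A minimal sketch with redis-py 4.x (assuming the RediSearch module is loaded; the index name, key prefix, and field name here are placeholders):

```python
import redis
from redis.commands.search.field import VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType

def create_flat_index(client: redis.Redis, vector_dim: int, num_vectors: int):
    """Create a FLAT vector index over hashes stored under the 'product:' prefix."""
    vector_field = VectorField(
        "embedding",              # hash field holding the raw vector bytes
        "FLAT",                   # brute-force index; "HNSW" is the other option
        {
            "TYPE": "FLOAT32",
            "DIM": vector_dim,    # e.g. 384 for all-MiniLM-L6-v2 embeddings
            "DISTANCE_METRIC": "COSINE",
            "INITIAL_CAP": num_vectors,
        },
    )
    client.ft("product_index").create_index(
        fields=[vector_field],
        definition=IndexDefinition(prefix=["product:"], index_type=IndexType.HASH),
    )

client = redis.Redis(host="localhost", port=6379)
create_flat_index(client, vector_dim=384, num_vectors=10_000)
```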

Indexes only need to be created once and will automatically re-index as new hashes are stored in Redis. After vectors are loaded into Redis and the index has been created, queries can be formed and executed for all kinds of similarity-based search tasks.

Better still, all of the existing RediSearch features, such as text, tag, and geography-based search, can work together with the VSS capability. These are called hybrid queries. With hybrid queries, traditional search functionality can be used as a pre-filter for vector search, which helps bound the search space.


The above index creation function (create_flat_index) can easily be adapted to support hybrid queries by adding new fields, such as the TagField or TextField from redis-py, and referencing them in the query, as sketched below.
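For example, assuming the index has been extended with a TagField named gender, a hybrid query that pre-filters on that tag before running a KNN vector search could look like the following sketch (field and index names are placeholders, and a recent redis-py is assumed for dialect support):

```python
import numpy as np
from redis.commands.search.query import Query

def hybrid_query(client, query_vector: np.ndarray, gender: str, k: int = 5):
    # Apply the tag filter first, then run KNN vector search over the filtered set
    q = (
        Query(f"(@gender:{{{gender}}})=>[KNN {k} @embedding $vec AS score]")
        .sort_by("score")
        .return_fields("score", "item_name")  # "item_name" is a placeholder field
        .dialect(2)                           # DIALECT 2 is required for vector queries
    )
    params = {"vec": query_vector.astype(np.float32).tobytes()}
    return client.ft("product_index").search(q, query_params=params)
```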

Try it out! -> Redis VSS Demo

Recently, I built a web application to explore these capabilities. The Fashion Product Finder utilizes the new VSS capability in Redis along with my other favorite pieces of the Redis ecosystem like redis-om-python. You can access the application here.

Once you’ve signed up to use the application, you will be greeted with a page that looks something like the following.

To query similar products by their textual representation, find a product that you like and click the By Text button. Likewise, for querying by visual vector search, click the By Image button on a product.

The hybrid search attributes can be set for both gender and category of product such that when a vector search is performed, the returned items are filtered by those tags. Below is an example of the visual vector search when the black watch in the bottom right corner is selected.

This demo is a fun way to explore the capabilities of Redis VSS; however, VSS is not the only component of the Redis ecosystem used in the application. In fact, Redis is the only database used by this application, storing both product metadata with RedisJSON and vector data with RediSearch.

You can check out the entire codebase here. Please star and share if you find it useful!

For more information on VSS within Redis and the RediSearch module you can check out the following resources:

Documentation

VSS Documentation

Redis Stack Documentation

Demos

Visual and Semantic Search with the Amazon Product Dataset

Vector-based Sentiment Analysis on Financial news

Articles

Building Intelligent Applications with Redis VSS

Rediscover Redis for VSS
