Weaviate 1.31
implements the MUVERA encoding algorithm for multi-vector embeddings. In this blog, we dive into the algorithm in detail, covering what MUVERA is, how it works, and whether it might make sense for you.
Let's start by reviewing what multi-vector models are, and the challenges that MUVERA looks to solve.
- MUVERA converts multi-vector embeddings (like ColBERT/ColPali) into single fixed-size vectors, dramatically reducing memory and computational costs
- In our tests (LoTTE dataset):
- Memory footprint reduced by ~70%
- Import times improved from 20+ minutes to 3-6 minutes
- Key tradeoffs:
- Some loss in recall quality (can be mitigated by increasing HNSW ef values)
- Higher ef values reduce query throughput
- Best suited for:
- Large-scale deployments where memory costs are significant
- Use cases that can tolerate slight recall degradation
- Applications requiring faster indexing speeds
- Available in Weaviate 1.31+ with simple configuration options
Challenges with multi-vector embeddings
State-of-the-art multi-vector models can dramatically improve retrieval performance by capturing more semantic information than single-vector models. ColBERT models preserve token-level meanings in text, while ColPali/ColQwen models identify and preserve information from different parts of an image, like figures in PDFs as well as textual information.

These advantages make multi-vector models a great fit for many use cases. However, multi-vector embeddings carry two potential disadvantages over their single-vector cousins, owing to their size and relative complexity.
Challenge 1: Memory footprint
Multi-vector embeddings comprise multiple vectors, each one representing a part of the object, such as a token (text) or a patch (image). Although each vector in a multi-vector embedding is smaller, the whole embedding tends to be larger than a typical single-vector embedding.

This can lead to a higher memory footprint in use, as many vector search systems use in-memory indexes such as HNSW.
How much larger? Well, as you can see from the image above, the total size of a multi-vector index will be greater than that of a single-vector index by a factor of roughly average_vectors_per_embedding / ratio_of_vector_length. As multi-vector embeddings can comprise hundreds to thousands of vectors per document, this factor can be very large.
If we embed a million documents of ~100 tokens per document, a single-vector embedding model (producing 768-dimensional vectors of single-precision, 32-bit floating point numbers) may require 768 * 1M * 4 bytes = ~3.1 GB of memory. On the other hand, a multi-vector embedding model (of 96 dimensions per token vector) may require a whopping 96 * 100 * 1M * 4 bytes = ~40 GB!
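For the curious, here is the back-of-the-envelope arithmetic behind those figures as a runnable snippet (the document counts and dimensionalities are just the illustrative values from above):

```python
# Back-of-the-envelope memory estimate for the example above.
docs = 1_000_000          # number of documents
tokens_per_doc = 100      # average tokens per document (multi-vector case)
bytes_per_float = 4       # single-precision (32-bit) floats

single_vector_dim = 768   # one 768-dimensional vector per document
multi_vector_dim = 96     # one 96-dimensional vector per token

single_vector_bytes = docs * single_vector_dim * bytes_per_float
multi_vector_bytes = docs * tokens_per_doc * multi_vector_dim * bytes_per_float

print(f"Single-vector: {single_vector_bytes / 1e9:.1f} GB")  # ~3.1 GB
print(f"Multi-vector:  {multi_vector_bytes / 1e9:.1f} GB")   # ~38.4 GB
```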

And of course, higher memory use also means higher costs, whether it is your own hardware or via cloud infrastructure such as Weaviate Cloud.
Challenge 2: Speed
A system using multi-vector embeddings can also suffer from slower import and search speeds due to the increased size and complexity.
Vector search involves finding the most relevant embedding(s) from a sea of embeddings. HNSW speeds this up by building a multi-layered graph of vectors, which makes it fast to traverse to the right region of the vector space and retrieve the right vectors.
Multi-vector embeddings introduce additional complexities to this. Multiple vectors must be indexed into the graph per object at ingestion time, and at query time more comparisons are required to retrieve vectors and to calculate their overall similarity.
Given a document embedding $D = \{d_1, \dots, d_m\}$ and a query embedding $Q = \{q_1, \dots, q_n\}$, a maxSim operator is typically used to compute the similarity:

$$\text{MaxSim}(Q, D) = \sum_{i=1}^{n} \max_{j=1}^{m} \langle q_i, d_j \rangle$$

This is a non-linear operation which loops over each query token, computes the similarity between that query token and all the document tokens, and keeps the most similar one. In other words, MaxSim searches for the "best match" for each query term among all the document terms.
While an elegant method, this calculation is another overhead that is unique to multi-vector embeddings.
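To make MaxSim concrete, here is a minimal NumPy sketch. The embeddings are random stand-ins for real ColBERT-style outputs; only the shapes matter:

```python
import numpy as np

def max_sim(query_embedding: np.ndarray, doc_embedding: np.ndarray) -> float:
    """MaxSim: for each query token, find its best-matching document token,
    then sum those maxima. Shapes: (n_query_tokens, dim) and (n_doc_tokens, dim)."""
    # Pairwise dot products between every query token and every document token.
    similarities = query_embedding @ doc_embedding.T  # (n_query_tokens, n_doc_tokens)
    # Best-matching document token per query token, summed over all query tokens.
    return float(similarities.max(axis=1).sum())

rng = np.random.default_rng(42)
query = rng.normal(size=(32, 128))  # e.g. 32 query tokens, 128 dimensions each
doc = rng.normal(size=(130, 128))   # e.g. 130 document tokens, 128 dimensions each
print(max_sim(query, doc))
```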
To improve the performance of multi-vector embeddings, Google's research group proposed MUVERA (Multi-Vector Retrieval via Fixed Dimensional Encodings), which aims to reduce the memory required to store multi-vector embeddings by using fixed dimensional encodings.
MUVERA to the rescue
MUVERA encodes multi-vector embeddings into single-vector embeddings by building fixed dimensional encodings (FDEs) whose length is independent of the number of vectors in the original multi-vector embedding. This reduces the size of the embedding and improves the efficiency of similarity calculations.
But how does MUVERA do this while minimizing losses in recall?
The key idea
MUVERA takes as input a multi-vector embedding $D = \{d_1, \dots, d_m\} \subset \mathbb{R}^d$ and transforms it into a single-vector embedding $\mathbf{F}(D) \in \mathbb{R}^{d_{FDE}}$, the fixed dimensional encoding.
In order to say whether this function is "good enough", we want two key measures to agree as closely as possible:
- the similarity of the encoded embeddings (i.e. the single-vector embeddings), and
- the similarity of the original multi-vector embeddings
The more closely these two agree, the better $\langle \mathbf{F}_{query}(Q), \mathbf{F}_{doc}(D) \rangle$ is as an approximation of $\text{MaxSim}(Q, D)$. Ideally, we want:

$$\langle \mathbf{F}_{query}(Q), \mathbf{F}_{doc}(D) \rangle \approx \text{MaxSim}(Q, D)$$
This transform simplifies the approximate nearest neighbor search problem for multi-vector embeddings to that involving only single vector embeddings.

Recall the above scenario with a million documents and 100 vectors per document. Without the MUVERA encoding, we would have 100M vectors to index. Using the FDE vectors instead, we would be working with only 1M vectors, leading to an HNSW graph of only 1% of the size!
So how does MUVERA achieve this goal?
How MUVERA works
Recall that the main goal here is to build an FDE that approximates the semantics of the original multi-vector embedding.
MUVERA does this in four steps:
- Space partitioning
- Dimensionality reduction
- Multiple repetitions
- Final projection
Space partitioning
The first step consists of partitioning the vector space into "buckets". This can be achieved using k-means clustering or using locality-sensitive hashing (LSH) functions.
Ideally, we want a partitioning function $\varphi$ that, given an input vector $x \in \mathbb{R}^d$, returns a bucket id $\varphi(x) \in \{1, \dots, B\}$, where $B$ is the number of buckets. Here is an illustration, assuming a set of 8 clusters:


As shown in the figure, each vector is assigned to one of the clusters. Two further calculations are done - one to derive a representative (e.g. average or centroid) sub-vector per cluster, and another to fill empty clusters.
As we said, we need a function that maps each input vector in $\mathbb{R}^d$ to one of the $B$ buckets.
The first approach could be to use a clustering algorithm like K-Means that clusters data by assigning points to the nearest centroid. This algorithm is widely adopted, but it has two drawbacks:
- It requires some data to be trained on
- It's data-dependent: its quality depends on the data distribution, which may not be known a priori and which may drift over time (e.g. over the life of the downstream application)
To make the partition function data-oblivious, we may prefer the family of Locality Sensitive Hashing (LSH) functions; in particular, SimHash.
The SimHash algorithm relies on a few simple steps:
- Sample $k_{sim}$ Gaussian vectors $g_1, \dots, g_{k_{sim}} \in \mathbb{R}^d$ ($k_{sim}$ is a parameter chosen by the user).
- To compute the bucket for a vector $x$, compute the dot product between $x$ and each of the $k_{sim}$ Gaussian vectors. Set the $i$-th bit to 1 if the corresponding dot product is greater than 0, and to 0 otherwise.
Mathematically speaking, we have:

$$\varphi(x) = \left(\mathbf{1}[\langle g_1, x \rangle > 0], \dots, \mathbf{1}[\langle g_{k_{sim}}, x \rangle > 0]\right)$$

The value $\varphi(x)$ is a number composed of $k_{sim}$ bits, so the total number of buckets we can have is $B = 2^{k_{sim}}$.
From this point on, we will use SimHash as the partitioning function in the description of the MUVERA algorithm.
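Here is a minimal NumPy sketch of the SimHash bucketing step (illustrative only, not Weaviate's internal implementation): $k_{sim}$ Gaussian vectors turn each token embedding into a $k_{sim}$-bit bucket id.

```python
import numpy as np

def simhash_partition(vectors: np.ndarray, gaussians: np.ndarray) -> np.ndarray:
    """Assign each vector to one of 2**k_sim buckets via SimHash.
    vectors: (n, dim), gaussians: (k_sim, dim). Returns (n,) bucket ids."""
    # Dot product of each vector with each Gaussian direction: shape (n, k_sim).
    projections = vectors @ gaussians.T
    # One bit per Gaussian direction: 1 if the projection is positive, else 0.
    bits = (projections > 0).astype(int)
    # Interpret each bit pattern as an integer bucket id in [0, 2**k_sim).
    return bits @ (2 ** np.arange(gaussians.shape[0]))

rng = np.random.default_rng(0)
k_sim, dim = 3, 128                        # 2**3 = 8 buckets
gaussians = rng.normal(size=(k_sim, dim))  # sampled once, reused for every vector
tokens = rng.normal(size=(130, dim))       # one multi-vector embedding
print(simhash_partition(tokens, gaussians))  # e.g. [5 0 7 2 ...]
```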
Now that the partitioning function is ready, we can create a sub-vector for each bucket as follows:

$$\mathbf{F}_{doc}(D)_k = \frac{1}{|D \cap \varphi^{-1}(k)|} \sum_{d \in D,\ \varphi(d) = k} d$$

The vector $\mathbf{F}_{doc}(D)_k$ contains the sum of all the token embeddings that belong to cluster $k$. In addition, we have a normalization factor $|D \cap \varphi^{-1}(k)|$, which counts how many token embeddings from $D$ have been mapped to cluster $k$.
The space partitioning step is the same for the query encoding, except that the normalization term is not applied.
We create one sub-vector for each bucket, so at the end we end up with $B = 2^{k_{sim}}$ sub-vectors. To get one single vector, we concatenate all sub-vectors:

$$\mathbf{F}_{doc}(D) = \left(\mathbf{F}_{doc}(D)_1, \dots, \mathbf{F}_{doc}(D)_B\right)$$

Let's now consider the dimensionality of the outcome. Assuming the dimensionality of the original token embeddings is $d$, the length of $\mathbf{F}_{doc}(D)$ will be $B \cdot d = 2^{k_{sim}} \cdot d$.
Where no vectors end up in a particular cluster, the nearest vector is assigned to it when calculating the sub-vector, so that no cluster ends up empty.
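Putting the partitioning and aggregation together, here is a rough sketch of how the per-bucket document sub-vectors could be built, including the empty-bucket fill. Again, this is an illustrative pseudo-implementation rather than Weaviate's actual code:

```python
import numpy as np

def document_sub_vectors(tokens: np.ndarray, gaussians: np.ndarray) -> np.ndarray:
    """Build one sub-vector per bucket: the normalized sum of the token
    embeddings that land in that bucket. Empty buckets borrow the token
    whose SimHash code is closest in Hamming distance."""
    k_sim, dim = gaussians.shape
    n_buckets = 2 ** k_sim

    # SimHash bucket assignment (same as the earlier sketch).
    bits = (tokens @ gaussians.T > 0).astype(int)
    bucket_ids = bits @ (2 ** np.arange(k_sim))

    sub_vectors = np.zeros((n_buckets, dim))
    for b in range(n_buckets):
        members = tokens[bucket_ids == b]
        if len(members) > 0:
            # Sum of the member embeddings, divided by how many landed here.
            sub_vectors[b] = members.sum(axis=0) / len(members)
        else:
            # No tokens in this bucket: use the token whose bucket id differs
            # from b in the fewest bits, so no bucket is left empty.
            hamming = [bin(int(i) ^ b).count("1") for i in bucket_ids]
            sub_vectors[b] = tokens[int(np.argmin(hamming))]
    return sub_vectors.reshape(-1)  # concatenated length: 2**k_sim * dim

rng = np.random.default_rng(0)
gaussians = rng.normal(size=(3, 128))
tokens = rng.normal(size=(130, 128))
print(document_sub_vectors(tokens, gaussians).shape)  # (1024,) = 8 * 128
```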
Dimensionality reduction
To reduce the dependency on the dimensionality of the token embeddings, the next step is dimensionality reduction. This helps to manage the length of the final encoded vector by applying a random linear projection that makes the resulting sub-vectors smaller.
Given the parameter $d_{proj}$, we create a random matrix $S \in \mathbb{R}^{d_{proj} \times d}$ (with $d_{proj} < d$), which is used to reduce the dimensionality of each sub-vector.
For each sub-vector $\mathbf{F}_{doc}(D)_k$ we compute a matrix-vector multiplication:

$$\psi(\mathbf{F}_{doc}(D)_k) = S \, \mathbf{F}_{doc}(D)_k$$

The resulting sub-vector will have length $d_{proj}$.
Now the FDE vector becomes:

$$\mathbf{F}_{doc}(D) = \left(\psi(\mathbf{F}_{doc}(D)_1), \dots, \psi(\mathbf{F}_{doc}(D)_B)\right)$$

Notice the dimensionality is no longer $B \cdot d$, but $B \cdot d_{proj}$, where $d_{proj} < d$.

Applying a random projection may seem... random! However, this is part of the secret sauce of making MUVERA work. The original authors chose the particular distribution of the random matrices following the Johnson-Lindenstrauss lemma, which preserves the dot products between vectors that we care about.
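In code, the reduction is just a matrix multiplication with a randomly generated matrix, applied to every sub-vector. In this sketch the matrix entries are random ±1 values, one standard choice for Johnson-Lindenstrauss-style projections:

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_proj, n_buckets = 128, 16, 8

# Random projection matrix, generated once and reused for every sub-vector.
S = rng.choice([-1.0, 1.0], size=(d_proj, d))

sub_vectors = rng.normal(size=(n_buckets, d))  # 2**k_sim sub-vectors of length d
reduced = sub_vectors @ S.T                    # each sub-vector is now d_proj long
fde = reduced.reshape(-1)                      # concatenated FDE: 8 * 16 = 128 dims
print(fde.shape)
```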
Multiple Repetitions
To increase the accuracy of the approximation steps, we can repeat the processes above (space partitioning + dimensionality reduction) multiple times, concatenating all resulting vectors together.
Suppose we repeat the partitioning and dimensionality reduction steps $R_{reps}$ times, obtaining $R_{reps}$ different single vectors. We can then concatenate them, obtaining just one single vector.
The resulting vector will have dimensionality $R_{reps} \cdot 2^{k_{sim}} \cdot d_{proj}$.
Final Projection
The vector obtained by concatenating all the single vectors can be quite long. In order to reduce its dimensionality, we can apply a final random projection, of the same kind described in step 2, to the overall vector, using a random matrix $S' \in \mathbb{R}^{d_{final} \times (R_{reps} \cdot 2^{k_{sim}} \cdot d_{proj})}$, where $d_{final}$ is a parameter.
This will reduce the final FDE dimensionality yet again.
MUVERA parameters
During the description of the algorithm we have seen four parameters:
- $k_{sim}$: the number of Gaussian vectors sampled; the number of buckets will be $B = 2^{k_{sim}}$
- $d_{proj}$: the dimensionality of the sub-vectors representing each bucket
- $R_{reps}$: the number of times the partitioning and dimensionality reduction steps are executed
- $d_{final}$: the final dimensionality of the FDE vector after the final projection
At the end of the day, these are the key tunable parameters from the user perspective. In terms of the Weaviate implementation - we've chosen what we see as sensible defaults for the majority of use cases.
But as always, you can tune these to suit your particular needs. Before you do that, though, here are some real-life results from our internal testing.
In the Weaviate implementation, we do not implement the final projection, as we found it to produce limited benefits relative to the added step. So the user-selectable parameters are k_sim, d_proj and r_reps.
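To tie the steps together, here is an end-to-end sketch of building a document FDE from a multi-vector embedding using those three parameters. This is an illustrative NumPy pseudo-implementation, not the code that ships in Weaviate:

```python
import numpy as np

def encode_document_fde(tokens: np.ndarray, k_sim: int = 4, d_proj: int = 16,
                        r_reps: int = 10, seed: int = 0) -> np.ndarray:
    """Encode a (n_tokens, dim) multi-vector embedding into a single FDE
    of length r_reps * 2**k_sim * d_proj. Illustrative only."""
    rng = np.random.default_rng(seed)
    n_buckets = 2 ** k_sim
    dim = tokens.shape[1]
    repetitions = []

    for _ in range(r_reps):
        # 1. Space partitioning: SimHash with k_sim Gaussian directions.
        gaussians = rng.normal(size=(k_sim, dim))
        bits = (tokens @ gaussians.T > 0).astype(int)
        bucket_ids = bits @ (2 ** np.arange(k_sim))

        # 2. Dimensionality reduction matrix for this repetition.
        S = rng.choice([-1.0, 1.0], size=(d_proj, dim))

        sub_vectors = np.zeros((n_buckets, d_proj))
        for b in range(n_buckets):
            members = tokens[bucket_ids == b]
            if len(members) == 0:
                # Fill empty buckets with the nearest token (by Hamming distance).
                hamming = [bin(int(i) ^ b).count("1") for i in bucket_ids]
                members = tokens[[int(np.argmin(hamming))]]
            # Normalized sum of members, projected down to d_proj dimensions.
            sub_vectors[b] = S @ (members.sum(axis=0) / len(members))
        repetitions.append(sub_vectors.reshape(-1))

    # 3. Multiple repetitions: concatenate them all into the final FDE.
    return np.concatenate(repetitions)

tokens = np.random.default_rng(1).normal(size=(130, 128))
print(encode_document_fde(tokens).shape)  # (2560,) = 10 * 2**4 * 16
```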
Impact of MUVERA
So what are the effects of MUVERA? As in - what's in it for you?
To evaluate this, we used the LoTTE benchmark, in particular lotte-lifestyle, which is made up of around 119k documents. Each document was encoded using colbertv2.0, which produced an average of 130 vectors per document.
This produces roughly 15M vectors of dimensionality 128, meaning around 1.9B floating point values stored, or about 8 GB of memory. That might not seem extreme, until you realize that this is a small-ish dataset - and that's even before we add the HNSW graph itself!
On the other hand, if we enable MUVERA with parameters k_sim = 4, d_proj = 16 and r_reps = 10, we would encode each document using one vector of dimensionality 2^4 * 16 * 10 = 2560, ending up with around 304M floating point values stored. That's an instant saving of almost 80% in memory footprint.
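For reference, here is the arithmetic behind those numbers, using the parameter values above:

```python
k_sim, d_proj, r_reps = 4, 16, 10
fde_dim = 2**k_sim * d_proj * r_reps  # 2560 dimensions per document FDE
docs = 119_000
floats_stored = docs * fde_dim        # ~304.6M floats, vs ~1.9B without MUVERA
print(fde_dim, floats_stored)
```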
In addition to this, with MUVERA enabled, the HNSW (Hierarchical Navigable Small World) graph would have 119k nodes, whereas without it the graph would have 15M nodes, each node having many (e.g. 32-128) edges to other nodes. At a maxConnections value of 128, for example, this could save tens of gigabytes of memory.
Take a look at the example outputs below from our experimentation. We compare four scenarios:
- Raw multi-vector embeddings
- Raw multi-vector embeddings + scalar quantization (SQ)
- MUVERA
- MUVERA + SQ
The experiments have been run on the following machine:
- CPU/GPU: AMD Ryzen 7 PRO 8700GE w/ Radeon 780M Graphics
- RAM: 64GB
Clear benefits: Memory & ingestion speed
As expected, there is a significant reduction in memory footprint, going from a baseline of around 12 GB to under 1GB for MUVERA.
What does that mean in terms of dollar amounts? Scale that difference up to production-sized datasets, and the gap in compute costs at a hyperscaler can add up to many tens of thousands of dollars per year, if not hundreds of thousands.
If your dataset size ranges in the tens of millions of objects or more, as many of our users' datasets do, this alone may already be a strong motivator for MUVERA.
Not only that, but the time taken to import the data is hugely different here. By that, we mean the time taken for the object data to be added to Weaviate and the vectors to be added to the HNSW index. Since each multi-vector embedding requires tens or hundreds of vectors to be added to the index, this adds significant overhead.
With MUVERA, this is reduced significantly once again. Adding the ~110k objects in the LoTTE dataset required over 20 minutes (or, only around 100 objects/s) in the baseline case, but it was down to around 3-6 minutes in the various MUVERA scenarios.
Again, when considering working at scale: at around 100 objects/s, importing a million objects becomes a ~3 hour job. That may or may not be viable for you. If not, it may be another strong incentive to consider MUVERA.
It should be noted that at the same ef value, enabling MUVERA increases query throughput as well, owing to only needing to deal with one vector per object rather than many. But that's not quite an apples-to-apples comparison, and here's why.
Costs: Recall & query throughput
As we can see in the charts, the main drawback of MUVERA is a loss on the recall side. This downside may seem particularly challenging, since multi-vector models achieve such high recall values in the first place.
However, some of this can be mitigated through the HNSW search settings. As the graphs show, setting a higher ef value in the query settings (e.g. >512) can increase recall to 80%+, and over 90% at 2048.
Since ef increases the size of the retrieved candidate set, it does have a knock-on effect of decreasing the query throughput (measured in queries per second, or QPS).
In other words, the main tradeoffs of enabling MUVERA are the reduction in recall, and the associated reduction in throughput from needing to use higher ef values.
Nothing's ever so simple, right? 😅 What's clear from these charts is that there is a definite case to be made for using MUVERA. The specifics, however, would very much depend on your priorities.
Wrap-up

In summary, MUVERA offers a compelling path forward for working with multi-vector models at scale.
By transforming multi-vector representations into fixed dimensional encodings, it delivers significant memory savings and improved query speeds while maintaining relatively strong retrieval quality.
Weaviate's implementation of MUVERA also lets you further compress these encodings with quantization for production deployments at scale, dramatically reducing the cost and overhead of multi-vector embeddings.
As always, using the Weaviate implementation (available from Weaviate 1.31 onwards) may be the easiest part of all this. You may be surprised to hear that just a couple of lines of code is all you need to enable MUVERA in a Weaviate collection.
If your use case could benefit from multi-vector embeddings and involves a non-trivial dataset size, MUVERA could be part of the solution set for you. We encourage you to check it out.
Ready to start building?
Check out the Quickstart tutorial, or build amazing apps with a free trial of Weaviate Cloud (WCD).