ANN Benchmark
This vector database benchmark is designed to measure and illustrate Weaviate's Approximate Nearest Neighbor (ANN) performance for a range of real-life use cases.
This is not a comparative benchmark that runs Weaviate against competing vector database solutions.
To discuss trade-offs with other solutions, contact sales.
To make the most of this vector database benchmark, you can look at it from different perspectives:
- The overall performance – Review the benchmark results to draw conclusions about what to expect from Weaviate in a production setting.
- Expectation for your use case – Find the dataset closest to your production use case, and estimate Weaviate's expected performance for your use case.
- Fine Tuning – If you don't get the results you expect, find the optimal combination of the configuration parameters (`efConstruction`, `maxConnections`, and `ef`) to achieve the best results for your production configuration. (See HNSW Configuration Tips.)
Measured Metrics
For each benchmark test, we set these HNSW parameters:
- `efConstruction` – Controls the search quality at build time.
- `maxConnections` – The number of outgoing edges a node can have in the HNSW graph.
- `ef` – Controls the search quality at query time.
For good starting point values and performance tuning advice, see HNSW Configuration Tips.
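To illustrate where these parameters are set, here is a minimal sketch using the Weaviate Python client (v3-style syntax); the class name `Benchmark` and the values shown are placeholders for illustration, not the exact benchmark scripts:

```python
import weaviate

# Connect to a locally running Weaviate instance (adjust the URL as needed).
client = weaviate.Client("http://localhost:8080")

# Hypothetical class used only to show where the HNSW parameters live.
benchmark_class = {
    "class": "Benchmark",
    "vectorIndexType": "hnsw",
    "vectorIndexConfig": {
        "efConstruction": 128,   # search quality at build time
        "maxConnections": 32,    # outgoing edges per node in the HNSW graph
        "ef": 64,                # search quality at query time
    },
}

client.schema.create_class(benchmark_class)
```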
For each set of parameters, we've run 10,000 requests, and we measured the following metrics:
- The Recall@1, Recall@10, Recall@100 - by comparing Weaviate's results to the ground truths specified in each dataset.
- Multi-threaded Queries per Second (QPS) - The overall throughput you can achieve with each configuration.
- Individual Request Latency (mean) - The mean latency over all 10,000 requests.
- P99 Latency - 99% of all requests (9,900 out of 10,000) have a latency that is lower than or equal to this number – this gives a better picture of tail latencies than the mean alone.
- Import time - Since varying build parameters has an effect on import time, the import time is also included.
By request, we mean an unfiltered vector search across the entire dataset for the given test. All latency and throughput results represent the end-to-end time that your users would also experience. In particular, this means:
- Each request time includes the network overhead for sending the results over the wire. In the test setup, the client and server machines were located in the same VPC.
- Each request includes retrieving all the matched objects from disk. This is a significant difference from `ann-benchmarks`, where the embedded libraries only return the matched IDs.
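For reference, such an unfiltered vector search request could look like the following sketch with the Python client (v3-style syntax); the class name and query vector are placeholders, and the benchmark itself uses Go for querying (see the setup section below):

```python
import weaviate

client = weaviate.Client("http://localhost:8080")

query_vector = [0.1, 0.2, 0.3]  # placeholder; the benchmark uses the dataset's test vectors

result = (
    client.query
    .get("Benchmark")                        # hypothetical class name
    .with_additional(["id", "distance"])     # full objects are retrieved from disk per request
    .with_near_vector({"vector": query_vector})
    .with_limit(10)                          # corresponds to the "Limit" tabs in the result tables
    .do()
)
print(result)
```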
This benchmark is open source, so you can reproduce the results yourself.
Benchmark Results
This section contains datasets modeled after the ANN Benchmarks. Pick a dataset that is closest to your production workload:
Dataset | Number of Objects | Vector Dimensions | Distance metric | Use case |
---|---|---|---|---|
SIFT1M | 1 M | 128 | Euclidean | This dataset reflects a common use case with a small number of objects. |
Glove-25 | 1.28 M | 25 | Cosine | Because of the smaller vectors, Weaviate can achieve the highest throughput on this dataset. |
Deep Image 96 | 10 M | 96 | Cosine | This dataset gives a good indication of expected speed and throughput when datasets grow. It is about 10 times larger than SIFT1M, but the throughput is only slightly lower. |
GIST 960 | 1 M | 960 | Euclidean | This dataset highlights the cost of high-dimensional vector comparisons. It has the lowest throughput of the sample datasets. Use this one if you run high-dimensional loads. |
Benchmark Datasets
These are the results for each dataset:
- SIFT1M
- Glove-25
- Deep Image 96
- GIST 960
QPS vs Recall for SIFT1M
- Limit 1
- Limit 10
- Limit 100
efConstruction | maxConnections | ef | Recall | QPS | QPS/vCore | Mean Latency | p99 Latency | Import time |
---|---|---|---|---|---|---|---|---|
64 | 8 | 64 | 90.91% | 11445 | 381 | 2.59ms | 3.44ms | 186s |
512 | 8 | 64 | 95.74% | 11391 | 380 | 2.6ms | 3.4ms | 286s |
128 | 16 | 64 | 98.52% | 10443 | 348 | 2.83ms | 3.77ms | 204s |
512 | 16 | 64 | 98.69% | 10287 | 343 | 2.87ms | 3.94ms | 314s |
128 | 32 | 64 | 98.92% | 9760 | 325 | 3.03ms | 4.15ms | 203s |
256 | 32 | 64 | 99.0% | 9462 | 315 | 3.13ms | 4.36ms | 243s |
512 | 32 | 64 | 99.22% | 9249 | 308 | 3.2ms | 4.68ms | 351s |
512 | 32 | 128 | 99.29% | 7155 | 238 | 4.14ms | 5.84ms | 351s |
128 | 32 | 256 | 99.34% | 5694 | 190 | 5.21ms | 6.94ms | 203s |
256 | 32 | 512 | 99.37% | 3578 | 119 | 8.27ms | 11.2ms | 243s |
efConstruction | maxConnections | ef | Recall | QPS | QPS/vCore | Mean Latency | p99 Latency | Import time |
---|---|---|---|---|---|---|---|---|
128 | 8 | 64 | 91.33% | 16576 | 553 | 1.75ms | 2.82ms | 178s |
256 | 8 | 64 | 91.98% | 16474 | 549 | 1.76ms | 2.87ms | 205s |
512 | 8 | 64 | 92.13% | 16368 | 546 | 1.77ms | 2.85ms | 272s |
64 | 16 | 64 | 96.56% | 15003 | 500 | 1.93ms | 2.94ms | 160s |
512 | 16 | 64 | 97.95% | 14996 | 500 | 1.92ms | 2.78ms | 308s |
64 | 64 | 64 | 98.04% | 14197 | 473 | 2.05ms | 3.14ms | 167s |
128 | 32 | 64 | 99.06% | 13482 | 449 | 2.17ms | 3.07ms | 184s |
256 | 32 | 64 | 99.44% | 13237 | 441 | 2.2ms | 3.22ms | 261s |
512 | 32 | 64 | 99.56% | 12661 | 422 | 2.31ms | 3.32ms | 354s |
256 | 64 | 64 | 99.63% | 12014 | 400 | 2.43ms | 3.37ms | 276s |
512 | 64 | 64 | 99.76% | 11300 | 377 | 2.58ms | 3.56ms | 388s |
512 | 32 | 128 | 99.76% | 9365 | 312 | 3.14ms | 4.73ms | 354s |
256 | 64 | 128 | 99.79% | 8669 | 289 | 3.34ms | 4.67ms | 276s |
512 | 64 | 128 | 99.89% | 7990 | 266 | 3.65ms | 5.09ms | 388s |
256 | 32 | 256 | 99.95% | 6771 | 226 | 4.32ms | 5.84ms | 261s |
512 | 32 | 256 | 99.97% | 6286 | 210 | 4.66ms | 6.33ms | 354s |
512 | 64 | 256 | 99.99% | 5225 | 174 | 5.55ms | 8.11ms | 388s |
256 | 32 | 512 | 100.0% | 4281 | 143 | 6.84ms | 9.55ms | 261s |
512 | 32 | 512 | 100.0% | 3917 | 131 | 7.47ms | 10.33ms | 354s |
256 | 64 | 512 | 100.0% | 3611 | 120 | 8.03ms | 12.03ms | 276s |
efConstruction | maxConnections | ef | Recall | QPS | QPS/vCore | Mean Latency | p99 Latency | Import time |
---|---|---|---|---|---|---|---|---|
64 | 8 | 64 | 78.16% | 6202 | 207 | 4.55ms | 6.21ms | 152s |
256 | 8 | 64 | 80.07% | 6044 | 201 | 4.59ms | 8.59ms | 205s |
64 | 8 | 128 | 81.93% | 5968 | 199 | 4.73ms | 6.98ms | 152s |
512 | 16 | 64 | 91.28% | 5930 | 198 | 4.75ms | 6.86ms | 308s |
64 | 64 | 64 | 92.52% | 5768 | 192 | 4.91ms | 6.38ms | 167s |
128 | 16 | 128 | 93.17% | 5650 | 188 | 5.02ms | 6.47ms | 185s |
128 | 32 | 64 | 94.91% | 5543 | 185 | 5.13ms | 6.81ms | 184s |
256 | 32 | 64 | 96.07% | 5524 | 184 | 5.12ms | 6.71ms | 261s |
512 | 32 | 64 | 96.45% | 5321 | 177 | 5.32ms | 7.51ms | 354s |
128 | 32 | 128 | 96.54% | 5254 | 175 | 5.42ms | 7.01ms | 184s |
256 | 32 | 128 | 97.48% | 5235 | 175 | 5.43ms | 7.34ms | 261s |
512 | 32 | 128 | 97.79% | 5045 | 168 | 5.65ms | 7.15ms | 354s |
256 | 64 | 128 | 98.21% | 4889 | 163 | 5.86ms | 7.75ms | 276s |
512 | 64 | 128 | 98.75% | 4667 | 156 | 6.13ms | 7.85ms | 388s |
128 | 32 | 256 | 99.01% | 4298 | 143 | 6.71ms | 8.76ms | 184s |
256 | 32 | 256 | 99.43% | 4242 | 141 | 6.77ms | 8.74ms | 261s |
512 | 32 | 256 | 99.57% | 4069 | 136 | 7.1ms | 9.01ms | 354s |
256 | 64 | 256 | 99.61% | 3854 | 128 | 7.47ms | 10.13ms | 276s |
512 | 64 | 256 | 99.79% | 3634 | 121 | 7.92ms | 10.88ms | 388s |
256 | 32 | 512 | 99.92% | 3158 | 105 | 9.18ms | 12.12ms | 261s |
512 | 32 | 512 | 99.95% | 2956 | 99 | 9.8ms | 12.86ms | 354s |
512 | 64 | 512 | 99.98% | 2581 | 86 | 11.21ms | 15.68ms | 388s |
How to read the results table
- Choose the desired limit using the tab selector above the table. The limit describes how many objects are returned for a query. Different use cases require different levels of QPS and returned objects per query. For example, at 100 QPS and `limit 100` (100 objects per query), 10,000 objects are returned in total. At 1,000 QPS and `limit 10` (10 objects per query), you also receive 10,000 objects in total: each request contains fewer objects, but you can send more requests in the same timespan. Pick the value that most closely matches your desired limit in production.
- Pick the desired configuration. The first three columns represent the different input parameters to configure the HNSW index. These inputs lead to the results shown in columns four through six.
- Recall/Throughput Trade-Off at a glance. The highlighted columns (Recall, QPS) reflect the Recall/QPS trade-off. Generally, as the recall improves, the throughput drops. Pick the row that represents a combination that satisfies your requirements. Since the benchmark is multi-threaded and runs on a 30-core machine, the QPS/vCore column shows the throughput per single CPU core. You can use this column to extrapolate the throughput on a machine of a different size. See also the section below outlining what changes to expect when running on different hardware.
- Latencies. Besides the overall throughput, columns seven and eight show the latencies for individual requests. The Mean Latency column shows the mean over all 10,000 test queries. The p99 Latency shows the maximum latency for the 99th percentile of requests; in other words, 9,900 out of 10,000 queries have a latency equal to or lower than the specified number. The difference between mean and p99 helps you get an impression of how stable the request times are in a highly concurrent setup.
- Import times. Changing the configuration parameters also affects the time it takes to import the dataset. This is shown in the last column.
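As a quick illustration of the QPS/vCore extrapolation described above, the following back-of-the-envelope sketch scales a per-core value from the Limit 10 tab to a hypothetical machine size; real scaling is rarely perfectly linear, so treat the result as an upper bound:

```python
# Rough extrapolation from the QPS/vCore column (example values).
qps_per_vcore = 449   # e.g. efConstruction=128, maxConnections=32, ef=64 in the Limit 10 tab
target_vcores = 16    # hypothetical production machine

estimated_qps = qps_per_vcore * target_vcores
print(f"Estimated throughput: ~{estimated_qps} QPS")  # ~7184 QPS; an optimistic ceiling
```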
Recommended configuration for SIFT1M
This is the recommended configuration for this dataset. It balances recall, latency, and throughput to give you a good overview of Weaviate's performance.
efConstruction | maxConnections | ef | Recall@10 | QPS (Limit 10) | Mean Latency (Limit 10) | p99 Latency (Limit 10) |
---|---|---|---|---|---|---|
128 | 32 | 64 | 98.83% | 8905 | 3.31ms | 4.49ms |
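Because `ef` only affects query time, it can be adjusted after the index has been built. As a sketch (assuming a v3-style Python client and an existing class named `Benchmark`), applying the recommended query-time value might look like this:

```python
import weaviate

client = weaviate.Client("http://localhost:8080")

# Apply the recommended query-time setting to an existing class.
# efConstruction and maxConnections are build-time parameters and
# are therefore set when the class is created.
client.schema.update_config("Benchmark", {
    "vectorIndexConfig": {
        "ef": 64,
    },
})
```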
QPS vs Recall for Glove-25
- Limit 1
- Limit 10
- Limit 100
efConstruction | maxConnections | ef | Recall | QPS | QPS/vCore | Mean Latency | p99 Latency | Import time |
---|---|---|---|---|---|---|---|---|
128 | 8 | 64 | 94.47% | 19752 | 658 | 1.48ms | 2.49ms | 178s |
512 | 8 | 64 | 95.3% | 19704 | 657 | 1.48ms | 2.61ms | 272s |
512 | 16 | 64 | 99.26% | 17583 | 586 | 1.65ms | 2.82ms | 308s |
256 | 16 | 64 | 99.27% | 17177 | 573 | 1.7ms | 2.67ms | 232s |
128 | 32 | 64 | 99.72% | 15443 | 515 | 1.9ms | 2.83ms | 184s |
256 | 32 | 64 | 99.84% | 15187 | 506 | 1.93ms | 2.82ms | 261s |
512 | 32 | 64 | 99.89% | 14401 | 480 | 2.04ms | 2.93ms | 354s |
256 | 64 | 64 | 99.9% | 13490 | 450 | 2.17ms | 3.17ms | 276s |
512 | 64 | 64 | 99.96% | 12626 | 421 | 2.32ms | 3.37ms | 388s |
512 | 64 | 128 | 99.98% | 8665 | 289 | 3.38ms | 4.82ms | 388s |
256 | 32 | 256 | 99.98% | 7191 | 240 | 4.07ms | 5.77ms | 261s |
128 | 64 | 256 | 99.99% | 6958 | 232 | 4.18ms | 6.17ms | 195s |
512 | 32 | 256 | 100.0% | 6694 | 223 | 4.39ms | 5.99ms | 354s |
128 | 32 | 512 | 100.0% | 4568 | 152 | 6.4ms | 9.27ms | 184s |
efConstruction | maxConnections | ef | Recall | QPS | QPS/vCore | Mean Latency | p99 Latency | Import time |
---|---|---|---|---|---|---|---|---|
128 | 8 | 64 | 91.33% | 16576 | 553 | 1.75ms | 2.82ms | 178s |
256 | 8 | 64 | 91.98% | 16474 | 549 | 1.76ms | 2.87ms | 205s |
512 | 8 | 64 | 92.13% | 16368 | 546 | 1.77ms | 2.85ms | 272s |
64 | 16 | 64 | 96.56% | 15003 | 500 | 1.93ms | 2.94ms | 160s |
512 | 16 | 64 | 97.95% | 14996 | 500 | 1.92ms | 2.78ms | 308s |
64 | 64 | 64 | 98.04% | 14197 | 473 | 2.05ms | 3.14ms | 167s |
128 | 32 | 64 | 99.06% | 13482 | 449 | 2.17ms | 3.07ms | 184s |
256 | 32 | 64 | 99.44% | 13237 | 441 | 2.2ms | 3.22ms | 261s |
512 | 32 | 64 | 99.56% | 12661 | 422 | 2.31ms | 3.32ms | 354s |
256 | 64 | 64 | 99.63% | 12014 | 400 | 2.43ms | 3.37ms | 276s |
512 | 64 | 64 | 99.76% | 11300 | 377 | 2.58ms | 3.56ms | 388s |
512 | 32 | 128 | 99.76% | 9365 | 312 | 3.14ms | 4.73ms | 354s |
256 | 64 | 128 | 99.79% | 8669 | 289 | 3.34ms | 4.67ms | 276s |
512 | 64 | 128 | 99.89% | 7990 | 266 | 3.65ms | 5.09ms | 388s |
256 | 32 | 256 | 99.95% | 6771 | 226 | 4.32ms | 5.84ms | 261s |
512 | 32 | 256 | 99.97% | 6286 | 210 | 4.66ms | 6.33ms | 354s |
512 | 64 | 256 | 99.99% | 5225 | 174 | 5.55ms | 8.11ms | 388s |
256 | 32 | 512 | 100.0% | 4281 | 143 | 6.84ms | 9.55ms | 261s |
512 | 32 | 512 | 100.0% | 3917 | 131 | 7.47ms | 10.33ms | 354s |
256 | 64 | 512 | 100.0% | 3611 | 120 | 8.03ms | 12.03ms | 276s |
efConstruction | maxConnections | ef | Recall | QPS | QPS/vCore | Mean Latency | p99 Latency | Import time |
---|---|---|---|---|---|---|---|---|
64 | 8 | 64 | 78.16% | 6202 | 207 | 4.55ms | 6.21ms | 152s |
256 | 8 | 64 | 80.07% | 6044 | 201 | 4.59ms | 8.59ms | 205s |
64 | 8 | 128 | 81.93% | 5968 | 199 | 4.73ms | 6.98ms | 152s |
512 | 16 | 64 | 91.28% | 5930 | 198 | 4.75ms | 6.86ms | 308s |
64 | 64 | 64 | 92.52% | 5768 | 192 | 4.91ms | 6.38ms | 167s |
128 | 16 | 128 | 93.17% | 5650 | 188 | 5.02ms | 6.47ms | 185s |
128 | 32 | 64 | 94.91% | 5543 | 185 | 5.13ms | 6.81ms | 184s |
256 | 32 | 64 | 96.07% | 5524 | 184 | 5.12ms | 6.71ms | 261s |
512 | 32 | 64 | 96.45% | 5321 | 177 | 5.32ms | 7.51ms | 354s |
128 | 32 | 128 | 96.54% | 5254 | 175 | 5.42ms | 7.01ms | 184s |
256 | 32 | 128 | 97.48% | 5235 | 175 | 5.43ms | 7.34ms | 261s |
512 | 32 | 128 | 97.79% | 5045 | 168 | 5.65ms | 7.15ms | 354s |
256 | 64 | 128 | 98.21% | 4889 | 163 | 5.86ms | 7.75ms | 276s |
512 | 64 | 128 | 98.75% | 4667 | 156 | 6.13ms | 7.85ms | 388s |
128 | 32 | 256 | 99.01% | 4298 | 143 | 6.71ms | 8.76ms | 184s |
256 | 32 | 256 | 99.43% | 4242 | 141 | 6.77ms | 8.74ms | 261s |
512 | 32 | 256 | 99.57% | 4069 | 136 | 7.1ms | 9.01ms | 354s |
256 | 64 | 256 | 99.61% | 3854 | 128 | 7.47ms | 10.13ms | 276s |
512 | 64 | 256 | 99.79% | 3634 | 121 | 7.92ms | 10.88ms | 388s |
256 | 32 | 512 | 99.92% | 3158 | 105 | 9.18ms | 12.12ms | 261s |
512 | 32 | 512 | 99.95% | 2956 | 99 | 9.8ms | 12.86ms | 354s |
512 | 64 | 512 | 99.98% | 2581 | 86 | 11.21ms | 15.68ms | 388s |
How to read the results table
- Choose the desired limit using the tab selector above the table. The limit describes how many objects are returned for a query. Different use cases require different levels of QPS and returned objects per query. For example, at 100 QPS and `limit 100` (100 objects per query), 10,000 objects are returned in total. At 1,000 QPS and `limit 10` (10 objects per query), you also receive 10,000 objects in total: each request contains fewer objects, but you can send more requests in the same timespan. Pick the value that most closely matches your desired limit in production.
- Pick the desired configuration. The first three columns represent the different input parameters to configure the HNSW index. These inputs lead to the results shown in columns four through six.
- Recall/Throughput Trade-Off at a glance. The highlighted columns (Recall, QPS) reflect the Recall/QPS trade-off. Generally, as the recall improves, the throughput drops. Pick the row that represents a combination that satisfies your requirements. Since the benchmark is multi-threaded and runs on a 30-core machine, the QPS/vCore column shows the throughput per single CPU core. You can use this column to extrapolate the throughput on a machine of a different size. See also the section below outlining what changes to expect when running on different hardware.
- Latencies. Besides the overall throughput, columns seven and eight show the latencies for individual requests. The Mean Latency column shows the mean over all 10,000 test queries. The p99 Latency shows the maximum latency for the 99th percentile of requests; in other words, 9,900 out of 10,000 queries have a latency equal to or lower than the specified number. The difference between mean and p99 helps you get an impression of how stable the request times are in a highly concurrent setup.
- Import times. Changing the configuration parameters also affects the time it takes to import the dataset. This is shown in the last column.
Recommended configuration for Glove-25
This is the recommended configuration for this dataset. It balances recall, latency, and throughput to give you a good overview of Weaviate's performance.
efConstruction | maxConnections | ef | Recall@10 | QPS (Limit 10) | Mean Latency (Limit 10) | p99 Latency (Limit 10) |
---|---|---|---|---|---|---|
64 | 16 | 64 | 95.56% | 15003 | 1.93ms | 2.94ms |
QPS vs Recall for Deep Image 96
- Limit 1
- Limit 10
- Limit 100
efConstruction | maxConnections | ef | Recall | QPS | QPS/vCore | Mean Latency | p99 Latency | Import time |
---|---|---|---|---|---|---|---|---|
64 | 16 | 64 | 94.44% | 9301 | 310 | 3.14ms | 7.21ms | 3305s |
128 | 16 | 64 | 96.06% | 8957 | 299 | 3.28ms | 7.24ms | 3804s |
64 | 64 | 64 | 96.84% | 8760 | 292 | 3.36ms | 6.97ms | 3253s |
128 | 32 | 64 | 97.88% | 8473 | 282 | 3.48ms | 7.4ms | 3533s |
128 | 64 | 64 | 98.27% | 7984 | 266 | 3.66ms | 7.52ms | 3631s |
256 | 32 | 64 | 98.78% | 7916 | 264 | 3.71ms | 7.83ms | 4295s |
512 | 32 | 64 | 98.95% | 7876 | 263 | 3.73ms | 7.47ms | 5477s |
256 | 64 | 64 | 99.06% | 7839 | 261 | 3.75ms | 7.21ms | 4392s |
512 | 64 | 64 | 99.32% | 7238 | 241 | 4.05ms | 7.67ms | 6039s |
256 | 64 | 128 | 99.42% | 5767 | 192 | 5.1ms | 8.39ms | 4392s |
512 | 64 | 128 | 99.52% | 5509 | 184 | 5.34ms | 8.7ms | 6039s |
256 | 32 | 256 | 99.66% | 4672 | 156 | 6.32ms | 10.11ms | 4295s |
512 | 32 | 256 | 99.82% | 4467 | 149 | 6.62ms | 10.29ms | 5477s |
512 | 64 | 256 | 99.9% | 3683 | 123 | 7.97ms | 12.72ms | 6039s |
512 | 32 | 512 | 99.94% | 2842 | 95 | 10.37ms | 15.25ms | 5477s |
512 | 64 | 512 | 99.95% | 2288 | 76 | 12.84ms | 20.72ms | 6039s |
efConstruction | maxConnections | ef | Recall | QPS | QPS/vCore | Mean Latency | p99 Latency | Import time |
---|---|---|---|---|---|---|---|---|
64 | 16 | 64 | 91.58% | 8679 | 289 | 3.35ms | 7.3ms | 3305s |
128 | 16 | 64 | 93.68% | 8402 | 280 | 3.47ms | 6.9ms | 3804s |
64 | 32 | 64 | 94.11% | 8255 | 275 | 3.55ms | 7.61ms | 3275s |
64 | 64 | 64 | 94.67% | 8184 | 273 | 3.58ms | 7.19ms | 3253s |
128 | 64 | 64 | 96.95% | 7575 | 253 | 3.88ms | 7.79ms | 3631s |
256 | 32 | 64 | 97.53% | 7539 | 251 | 3.87ms | 7.81ms | 4295s |
512 | 32 | 64 | 97.92% | 7399 | 247 | 3.96ms | 8.04ms | 5477s |
256 | 64 | 64 | 98.15% | 7287 | 243 | 4.02ms | 7.3ms | 4392s |
512 | 64 | 64 | 98.76% | 6838 | 228 | 4.27ms | 7.96ms | 6039s |
256 | 64 | 128 | 98.77% | 5658 | 189 | 5.2ms | 8.7ms | 4392s |
512 | 64 | 128 | 99.23% | 5233 | 174 | 5.62ms | 9.25ms | 6039s |
256 | 32 | 256 | 99.44% | 4454 | 148 | 6.58ms | 10.11ms | 4295s |
512 | 32 | 256 | 99.61% | 4270 | 142 | 6.89ms | 10.77ms | 5477s |
512 | 64 | 256 | 99.78% | 3534 | 118 | 8.26ms | 12.97ms | 6039s |
256 | 32 | 512 | 99.8% | 2932 | 98 | 10.04ms | 14.79ms | 4295s |
512 | 32 | 512 | 99.88% | 2767 | 92 | 10.64ms | 15.67ms | 5477s |
512 | 64 | 512 | 99.93% | 2233 | 74 | 13.12ms | 21.24ms | 6039s |
efConstruction | maxConnections | ef | Recall | QPS | QPS/vCore | Mean Latency | p99 Latency | Import time |
---|---|---|---|---|---|---|---|---|
512 | 8 | 64 | 72.88% | 4734 | 158 | 6.06ms | 9.94ms | 4327s |
64 | 16 | 64 | 82.08% | 4645 | 155 | 6.25ms | 9.31ms | 3305s |
512 | 16 | 64 | 85.81% | 4556 | 152 | 6.33ms | 9.56ms | 4922s |
64 | 32 | 64 | 86.23% | 4492 | 150 | 6.43ms | 9.82ms | 3275s |
64 | 64 | 64 | 87.25% | 4488 | 150 | 6.45ms | 9.36ms | 3253s |
64 | 32 | 128 | 89.05% | 4347 | 145 | 6.67ms | 10.05ms | 3275s |
512 | 16 | 128 | 89.08% | 4347 | 145 | 6.65ms | 10.31ms | 4922s |
64 | 64 | 128 | 89.88% | 4284 | 143 | 6.78ms | 9.86ms | 3253s |
256 | 32 | 64 | 91.99% | 4146 | 138 | 7.01ms | 10.36ms | 4295s |
512 | 32 | 64 | 92.7% | 4092 | 136 | 7.08ms | 10.33ms | 5477s |
256 | 64 | 64 | 93.85% | 3917 | 131 | 7.39ms | 10.68ms | 4392s |
256 | 32 | 128 | 94.22% | 3913 | 130 | 7.43ms | 10.74ms | 4295s |
512 | 32 | 128 | 94.83% | 3856 | 129 | 7.54ms | 11.08ms | 5477s |
512 | 64 | 64 | 95.14% | 3816 | 127 | 7.6ms | 11.23ms | 6039s |
256 | 64 | 128 | 95.65% | 3688 | 123 | 7.9ms | 11.12ms | 4392s |
128 | 32 | 256 | 96.9% | 3317 | 111 | 8.78ms | 12.5ms | 3533s |
256 | 32 | 256 | 97.91% | 3182 | 106 | 9.19ms | 12.91ms | 4295s |
512 | 32 | 256 | 98.29% | 3090 | 103 | 9.48ms | 13.16ms | 5477s |
256 | 64 | 256 | 98.48% | 2896 | 97 | 10.1ms | 14.27ms | 4392s |
512 | 64 | 256 | 99.02% | 2707 | 90 | 10.78ms | 15.47ms | 6039s |
256 | 32 | 512 | 99.34% | 2310 | 77 | 12.65ms | 17.56ms | 4295s |
512 | 32 | 512 | 99.52% | 2200 | 73 | 13.27ms | 18.76ms | 5477s |
256 | 64 | 512 | 99.53% | 2032 | 68 | 14.3ms | 21.44ms | 4392s |
512 | 64 | 512 | 99.75% | 1879 | 63 | 15.56ms | 23.65ms | 6039s |
How to read the results table
- Choose the desired limit using the tab selector above the table. The limit describes how many objects are returned for a query. Different use cases require different levels of QPS and returned objects per query. For example, at 100 QPS and `limit 100` (100 objects per query), 10,000 objects are returned in total. At 1,000 QPS and `limit 10` (10 objects per query), you also receive 10,000 objects in total: each request contains fewer objects, but you can send more requests in the same timespan. Pick the value that most closely matches your desired limit in production.
- Pick the desired configuration. The first three columns represent the different input parameters to configure the HNSW index. These inputs lead to the results shown in columns four through six.
- Recall/Throughput Trade-Off at a glance. The highlighted columns (Recall, QPS) reflect the Recall/QPS trade-off. Generally, as the recall improves, the throughput drops. Pick the row that represents a combination that satisfies your requirements. Since the benchmark is multi-threaded and runs on a 30-core machine, the QPS/vCore column shows the throughput per single CPU core. You can use this column to extrapolate the throughput on a machine of a different size. See also the section below outlining what changes to expect when running on different hardware.
- Latencies. Besides the overall throughput, columns seven and eight show the latencies for individual requests. The Mean Latency column shows the mean over all 10,000 test queries. The p99 Latency shows the maximum latency for the 99th percentile of requests; in other words, 9,900 out of 10,000 queries have a latency equal to or lower than the specified number. The difference between mean and p99 helps you get an impression of how stable the request times are in a highly concurrent setup.
- Import times. Changing the configuration parameters also affects the time it takes to import the dataset. This is shown in the last column.
Recommended configuration for Deep Image 96
This is the recommended configuration for this dataset. It balances recall, latency, and throughput to give you a good overview of Weaviate's performance.
efConstruction | maxConnections | ef | Recall@10 | QPS (Limit 10) | Mean Latency (Limit 10) | p99 Latency (Limit 10) |
---|---|---|---|---|---|---|
128 | 32 | 64 | 96.43% | 6112 | 4.7ms | 15.87ms |
QPS vs Recall for GIST 960
- Limit 1
- Limit 10
- Limit 100
efConstruction | maxConnections | ef | Recall | QPS | QPS/vCore | Mean Latency | p99 Latency | Import time |
---|---|---|---|---|---|---|---|---|
64 | 8 | 64 | 66.6% | 2759 | 92 | 10.59ms | 13.77ms | 1832s |
128 | 8 | 64 | 70.7% | 2734 | 91 | 10.7ms | 13.97ms | 1861s |
512 | 8 | 64 | 75.0% | 2724 | 91 | 10.78ms | 14.87ms | 2065s |
64 | 16 | 64 | 79.8% | 2618 | 87 | 11.04ms | 14.69ms | 1838s |
128 | 16 | 64 | 83.9% | 2577 | 86 | 11.21ms | 15.55ms | 1904s |
256 | 16 | 64 | 87.1% | 2518 | 84 | 11.54ms | 14.49ms | 2016s |
128 | 32 | 64 | 89.6% | 2425 | 81 | 11.85ms | 15.37ms | 1931s |
256 | 32 | 64 | 92.6% | 2388 | 80 | 12.09ms | 15.99ms | 2074s |
256 | 64 | 64 | 94.1% | 2207 | 74 | 13.08ms | 18.56ms | 2130s |
512 | 32 | 64 | 94.6% | 2073 | 69 | 14.11ms | 17.37ms | 2361s |
512 | 32 | 128 | 96.2% | 1985 | 66 | 14.67ms | 19.32ms | 2361s |
512 | 64 | 64 | 96.2% | 1951 | 65 | 14.7ms | 19.61ms | 2457s |
512 | 16 | 256 | 96.2% | 1839 | 61 | 15.9ms | 19.84ms | 2217s |
512 | 64 | 128 | 96.7% | 1603 | 53 | 18.06ms | 24.44ms | 2457s |
512 | 32 | 256 | 98.7% | 1514 | 50 | 19.16ms | 24.43ms | 2361s |
512 | 32 | 512 | 99.1% | 999 | 33 | 29.12ms | 38.89ms | 2361s |
efConstruction | maxConnections | ef | Recall | QPS | QPS/vCore | Mean Latency | p99 Latency | Import time |
---|---|---|---|---|---|---|---|---|
128 | 8 | 64 | 65.88% | 2649 | 88 | 11.02ms | 14.89ms | 1861s |
512 | 8 | 64 | 69.68% | 2625 | 88 | 11.03ms | 15.08ms | 2065s |
64 | 16 | 64 | 74.17% | 2557 | 85 | 11.29ms | 15.54ms | 1838s |
128 | 16 | 64 | 80.23% | 2518 | 84 | 11.5ms | 15.54ms | 1904s |
64 | 64 | 64 | 81.98% | 2387 | 80 | 12.11ms | 16.31ms | 1860s |
256 | 16 | 64 | 82.87% | 2355 | 79 | 12.37ms | 17.67ms | 2016s |
128 | 64 | 64 | 88.11% | 2312 | 77 | 12.56ms | 16.57ms | 1952s |
256 | 32 | 64 | 89.83% | 2297 | 77 | 12.55ms | 18.98ms | 2074s |
512 | 32 | 64 | 91.85% | 2002 | 67 | 14.47ms | 20.03ms | 2361s |
256 | 64 | 64 | 92.04% | 1937 | 65 | 14.93ms | 21.48ms | 2130s |
512 | 32 | 128 | 94.14% | 1935 | 65 | 15.05ms | 19.86ms | 2361s |
512 | 64 | 64 | 94.72% | 1860 | 62 | 15.56ms | 21.91ms | 2457s |
512 | 64 | 128 | 95.99% | 1569 | 52 | 18.38ms | 24.81ms | 2457s |
256 | 32 | 256 | 96.48% | 1556 | 52 | 18.6ms | 24.68ms | 2074s |
512 | 32 | 256 | 97.76% | 1483 | 49 | 19.53ms | 25.24ms | 2361s |
512 | 64 | 256 | 98.62% | 1286 | 43 | 22.3ms | 30.06ms | 2457s |
512 | 32 | 512 | 99.16% | 981 | 33 | 29.53ms | 37.97ms | 2361s |
512 | 64 | 512 | 99.47% | 880 | 29 | 32.45ms | 44.66ms | 2457s |
efConstruction | maxConnections | ef | Recall | QPS | QPS/vCore | Mean Latency | p99 Latency | Import time |
---|---|---|---|---|---|---|---|---|
512 | 8 | 64 | 56.05% | 1997 | 67 | 14.5ms | 20.28ms | 2065s |
256 | 8 | 128 | 60.26% | 1945 | 65 | 14.66ms | 18.39ms | 1938s |
64 | 16 | 64 | 61.8% | 1862 | 62 | 15.42ms | 20.05ms | 1838s |
128 | 16 | 64 | 68.05% | 1832 | 61 | 15.61ms | 20.05ms | 1904s |
512 | 16 | 128 | 77.53% | 1802 | 60 | 16.1ms | 19.07ms | 2217s |
128 | 64 | 64 | 78.26% | 1744 | 58 | 16.59ms | 21.48ms | 1952s |
128 | 32 | 128 | 79.71% | 1713 | 57 | 16.68ms | 21.37ms | 1931s |
256 | 32 | 64 | 80.3% | 1652 | 55 | 17.49ms | 23.8ms | 2074s |
512 | 32 | 128 | 86.91% | 1624 | 54 | 17.83ms | 23.28ms | 2361s |
512 | 16 | 256 | 88.31% | 1515 | 51 | 19.08ms | 24.64ms | 2217s |
128 | 32 | 256 | 89.11% | 1477 | 49 | 19.72ms | 25.54ms | 1931s |
256 | 32 | 256 | 92.63% | 1361 | 45 | 21.34ms | 28.19ms | 2074s |
512 | 32 | 256 | 94.49% | 1308 | 44 | 22.17ms | 29.1ms | 2361s |
512 | 64 | 256 | 96.44% | 1152 | 38 | 24.88ms | 33.15ms | 2457s |
256 | 32 | 512 | 96.94% | 1001 | 33 | 28.71ms | 36.11ms | 2074s |
256 | 64 | 512 | 97.87% | 893 | 30 | 31.91ms | 42.6ms | 2130s |
512 | 32 | 512 | 98.04% | 870 | 29 | 32.84ms | 42.31ms | 2361s |
512 | 64 | 512 | 98.8% | 812 | 27 | 34.96ms | 47.45ms | 2457s |
How to read the results table
- Choose the desired limit using the tab selector above the table. The limit describes how many objects are returned for a query. Different use cases require different levels of QPS and returned objects per query. For example, at 100 QPS and `limit 100` (100 objects per query), 10,000 objects are returned in total. At 1,000 QPS and `limit 10` (10 objects per query), you also receive 10,000 objects in total: each request contains fewer objects, but you can send more requests in the same timespan. Pick the value that most closely matches your desired limit in production.
- Pick the desired configuration. The first three columns represent the different input parameters to configure the HNSW index. These inputs lead to the results shown in columns four through six.
- Recall/Throughput Trade-Off at a glance. The highlighted columns (Recall, QPS) reflect the Recall/QPS trade-off. Generally, as the recall improves, the throughput drops. Pick the row that represents a combination that satisfies your requirements. Since the benchmark is multi-threaded and runs on a 30-core machine, the QPS/vCore column shows the throughput per single CPU core. You can use this column to extrapolate the throughput on a machine of a different size. See also the section below outlining what changes to expect when running on different hardware.
- Latencies. Besides the overall throughput, columns seven and eight show the latencies for individual requests. The Mean Latency column shows the mean over all 10,000 test queries. The p99 Latency shows the maximum latency for the 99th percentile of requests; in other words, 9,900 out of 10,000 queries have a latency equal to or lower than the specified number. The difference between mean and p99 helps you get an impression of how stable the request times are in a highly concurrent setup.
- Import times. Changing the configuration parameters also affects the time it takes to import the dataset. This is shown in the last column.
Recommended configuration for GIST 960
This is the recommended configuration for this dataset. It balances recall, latency, and throughput to give you a good overview of Weaviate's performance.
efConstruction | maxConnections | ef | Recall@10 | QPS (Limit 10) | Mean Latency (Limit 10) | p99 Latency (Limit 10) |
---|---|---|---|---|---|---|
512 | 32 | 128 | 94.14% | 1935 | 15.05ms | 19.86ms |
Benchmark Setup
Scripts
This benchmark is open source, so you can reproduce the results yourself.
Hardware
This benchmark test uses two GCP instances within the same VPC:
- Benchmark – a `c2-standard-30` instance with 30 vCPU cores and 120 GB memory – to host Weaviate.
- Script – a smaller instance with 8 vCPUs – to run the benchmarking scripts.
The `c2-standard-30` instance was chosen because:
- It is large enough to show that Weaviate is a highly concurrent vector search engine.
- It scales well while running thousands of searches across multiple threads.
- It is small enough to represent a typical production case without inducing high costs.
Based on your throughput requirements, it is very likely that you will run Weaviate on a considerably smaller or larger machine in production.
We have outlined in the Benchmark FAQs what you should expect when altering the configuration or setup parameters.
Experiment Setup
We modeled our dataset selection after ann-benchmarks. The same test queries are used to test speed, throughput, and recall. The provided ground truths are used to calculate the recall.
We use Weaviate's Python client to import data. We use Go to measure the concurrent (multi-threaded) queries. Each language has its own performance characteristics. You may get different results if you use a different language to send your queries.
For maximum throughput, we recommend using the Go or Java client libraries.
The complete import and test scripts are available here.
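Purely as an illustration of the import path (the authoritative versions are in the linked scripts), a minimal batch import with the Python client (v3-style syntax) might look like the following sketch; the class name, payload, and vectors are placeholders:

```python
import weaviate

client = weaviate.Client("http://localhost:8080")

# Placeholder vectors; the benchmark imports the dataset's precomputed embeddings.
vectors = [[0.1] * 128, [0.2] * 128]

client.batch.configure(batch_size=100)  # import in batches for throughput
with client.batch as batch:
    for i, vec in enumerate(vectors):
        batch.add_data_object(
            data_object={"counter": i},   # minimal payload; properties are not needed for ANN search
            class_name="Benchmark",       # hypothetical class name
            vector=vec,
        )
```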
Benchmark FAQ
How can I get the most performance for my use case?
If your use case is similar to one of the benchmark tests, use the recommended HNSW parameter configurations to start tuning.
For more instructions on how to tune your configuration for best performance, see HNSW Configuration Tips.
What is the difference between latency and throughput?
The latency refers to the time it takes to complete a single request. This is typically measured by taking a mean or percentile distribution of all requests. For example, a mean latency of 5ms means that a single request takes, on average, 5ms to complete. This does not say anything about how many queries can be answered in a given timeframe.
If Weaviate were single-threaded, the throughput (QPS) would roughly equal 1s divided by the mean latency. For example, with a mean latency of 5ms, this would mean that 200 requests can be answered in a second.
However, in reality, you often don't have a single user sending one query after another. Instead, you have multiple users sending queries. This makes the querying side concurrent. Similarly, Weaviate can handle concurrent incoming requests. We can identify how many concurrent requests can be served by measuring the throughput.
We can take our single-thread calculation from before and multiply it by the number of server CPU cores. This gives a rough estimate of what the server can handle concurrently. However, never trust this calculation alone; always measure the actual throughput, because such scaling is not always linear. For example, synchronization mechanisms such as locks are often used to make concurrent access safe. Not only do these mechanisms have a cost themselves, but if implemented incorrectly, they can also lead to congestion, which further decreases the concurrent throughput. As a result, you cannot perform a single-threaded benchmark and extrapolate what the numbers would be like in a multi-threaded setting.
All throughput numbers ("QPS") outlined in this benchmark are actual multi-threaded measurements on a 30-core machine, not estimations.
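To make the single-threaded reasoning concrete, here is the calculation from above as a tiny sketch; the 5ms mean latency and 30 cores are the example numbers from this section, and the result is a theoretical ceiling rather than a measurement:

```python
mean_latency_s = 0.005   # 5 ms mean latency (example from above)
cpu_cores = 30           # benchmark machine size

single_thread_qps = 1 / mean_latency_s                  # ~200 requests per second on one thread
naive_concurrent_qps = single_thread_qps * cpu_cores    # ~6000 requests per second, ignoring contention

print(single_thread_qps, naive_concurrent_qps)
# Real measured QPS will be lower, because locking, disk, and memory
# bottlenecks prevent perfectly linear scaling.
```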
What is a p99 latency?
The mean latency gives you an average value of all requests measured. This is a good indication of how long a user will have to wait on average for their request to be completed. Based on this mean value, you cannot make any promises to your users about wait times. 90 out of 100 users might see a considerably better time, but the remaining 10 might see a significantly worse time.
Percentile-based latencies are used to give a more precise indication. A 99th-percentile latency - or "p99 latency" for short - is the latency at or below which 99% of all requests complete. In other words, 99% of your users will experience a time equal to or better than the stated value. This is a much better guarantee than a mean value.
In production settings, requirements - as stated in SLAs - are often a combination of throughput and a percentile latency. For example, the statement "3000 QPS at p95 latency of 20ms" conveys the following meaning.
- 3000 requests need to be successfully completed per second
- 95% of users must see a latency of 20ms or lower.
- There is no assumption about the remaining 5% of users, implicitly tolerating that they will experience higher latencies than 20ms.
The higher the percentile (e.g., p99 over p95), the "safer" the quoted latency becomes. We have thus decided to use p99 latencies instead of p95 latencies in our measurements.
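If you want to derive a p99 latency from your own measurements, a simple nearest-rank sketch looks like this; the sample list is a placeholder for your own per-request timings:

```python
import math

def p99(latencies_ms):
    """Return the 99th-percentile latency (nearest-rank method)."""
    ordered = sorted(latencies_ms)
    rank = math.ceil(0.99 * len(ordered))  # e.g. 9,900 for 10,000 samples
    return ordered[rank - 1]

# Placeholder measurements; in the benchmark these are 10,000 per-request latencies.
samples = [2.4, 2.6, 2.5, 3.1, 2.8, 9.7, 2.9, 3.0]
print(f"p99 latency: {p99(samples)}ms")
```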
What happens if I run with fewer or more CPU cores than on the example test machine?
The benchmark outlines a QPS per core measurement. This can help you make a rough estimation of how the throughput would vary on smaller or larger machines. If you do not need the stated throughput, you can run with fewer CPU cores. If you need more throughput, you can run with more CPU cores.
Adding more CPUs reaches a point of diminishing returns because of synchronization mechanisms, disk, and memory bottlenecks. Beyond that point, you should scale horizontally instead of vertically. Horizontal scaling with replication will be available in Weaviate soon.
What are `ef`, `efConstruction`, and `maxConnections`?
These parameters refer to the HNSW build and query parameters. They represent a trade-off between recall, latency & throughput, index size, and memory consumption. This trade-off is highlighted in the benchmark results.
I can't match the same latencies/throughput in my own setup. How can I debug this?
If you are seeing different numbers on your own dataset, here are a few things to check:
- What CPU architecture are you using? The benchmarks above were run on a GCP `c2` CPU type, which is based on the `amd64` architecture. Weaviate also supports the `arm64` architecture, but not all optimizations are present. If your machine shows maximum CPU usage but you cannot achieve the same throughput, consider switching to the CPU type used in this benchmark.
- Are you using an actual dataset or random vectors? HNSW is known to perform considerably worse with random vectors than with real-world datasets. This is due to the distribution of points in real-world datasets compared to randomly generated vectors. If you cannot achieve the performance (or recall) outlined above with random vectors, switch to an actual dataset.
- Are your disks fast enough? While the ANN search itself is CPU-bound, the objects must be read from disk after the search has completed. Weaviate uses memory-mapped files to speed this process up. However, if not enough memory is present or the operating system has allocated the cached pages elsewhere, a physical disk read needs to occur. If your disks are slow, they may be the bottleneck in your benchmark.
- Are you using more than 2 million vectors? If so, make sure to set the vector cache large enough for maximum performance.
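As a sketch of what "setting the vector cache large enough" can look like, the `vectorCacheMaxObjects` setting in the vector index config controls how many vectors are kept in memory; the snippet below assumes a v3-style Python client and an existing class named `Benchmark`, with an example value of 10 million:

```python
import weaviate

client = weaviate.Client("http://localhost:8080")

# Keep all vectors cached in memory for a dataset of up to ~10M objects.
# Set vectorCacheMaxObjects higher than the number of objects you expect to import.
client.schema.update_config("Benchmark", {
    "vectorIndexConfig": {
        "vectorCacheMaxObjects": 10_000_000,
    },
})
```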
Where can I find the scripts to run this benchmark myself?
The repository is located here.
Questions and feedback
If you have any questions or feedback, let us know in the user forum.