FAQ

Aha, you have a question! We hope that you can find the answer here, but if you don't, you can reach us via Stackoverflow (make sure to tag your question with weaviate), Github, or Slack. If your question serves a general purpose, we will add it to this page.


Q: Why would I use Weaviate as my vector search engine?

A: Our goal is threefold. Firstly, we want to make it as easy as possible for others to create their own semantic systems or vector search engines (hence, we are API-based). Secondly, we have a strong focus on the semantic element (the “knowledge” in “vector search engine,” if you will). Our ultimate goal is to have Weaviate help you manage, index, and “understand” your data so that you can build newer, better, and faster applications. And thirdly, we want you to be able to run it everywhere. This is the reason why Weaviate comes containerized.

Q: Do you offer Weaviate as a managed service?

A: Yes, it is called the Weaviate Console.

Q: Can I train my own text2vec-contextionary vectorizer module?

A: Not yet (but soon). You can currently use the available contextionaries in a variety of languages and use the transfer learning feature to add custom concepts if needed. Sign up for our newsletter or Slack channel to stay updated about the release of custom contextionary training.

Q: Why does Weaviate have a schema and not an ontology?

A: We use a schema because it focuses on the representation of your data (in our case in the GraphQL API), but you can use a Weaviate schema to express an ontology. One of Weaviate’s core features is that it semantically interprets your schema (and with that your ontology) so that you can search for concepts rather than formally defined entities.
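
For illustration, here is a minimal sketch of what creating such a class definition could look like with the Python client (the instance URL, class name, and property names are assumptions for this example):

```python
import weaviate

client = weaviate.Client("http://localhost:8080")  # assumes a local Weaviate instance

# A hypothetical "Publication" class: the schema describes how your data is
# represented, and Weaviate also interprets the class and property names semantically.
publication_class = {
    "class": "Publication",
    "description": "A publisher of articles",
    "properties": [
        {"name": "name", "dataType": ["string"]},
        {"name": "headquartersGeoLocation", "dataType": ["geoCoordinates"]},
    ],
}

client.schema.create_class(publication_class)
```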

Q: What is the difference between a Weaviate data schema, ontologies and taxonomies?

A: Read about how taxonomies, ontologies and schemas are related to Weaviate in this blog post.

Q: Can I use Weaviate to create a traditional knowledge graph?

A: Yes, you can! Weaviate supports ontology- and RDF-like definitions in its schema, and it runs out of the box. It is scalable, and the GraphQL API lets you query your knowledge graph easily. But now that you are here, we suggest you also try its semantic features. After all, you are creating a knowledge graph 😉.

Q: Why isn’t there a text2vec-contextionary in my language?

A: Because you are probably one of the first who needs one! Ping us here on Github, and we will make sure it becomes available in the next iteration (unless you want it in Silbo Gomero or another whistled language).

Q: How do I deal with custom terminology?

A: Sometimes, users work with custom terminology, which often comes in the form of abbreviations or jargon. We are currently working on an additional API endpoint, which allows you to add custom synonyms. You can find the state of the implementation here. You can also sign up for our newsletter to receive an update when it is ready.

Q: How can you index data near-realtime without losing semantic meaning?

A: Every data object gets its vector representation based on its semantic meaning. In a nutshell, we calculate the vector position of the data object based on the words and concepts used in the data object. The existing model in the contextionary already gives enough context. If you want to get into the nitty-gritty, you can browse the code here, but you can also ask a specific question on Stackoverflow and tag it with Weaviate.
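
As a rough illustration of the idea (not the actual contextionary implementation, which uses weighting, stop-word handling, and more), the object's vector can be thought of as a combination of the vectors of the words it contains:

```python
import numpy as np

# Toy word vectors; real contextionary vectors have several hundred dimensions.
word_vectors = {
    "apple":   np.array([0.12, 0.80, 0.05]),
    "company": np.array([0.40, 0.10, 0.77]),
}

def object_vector(words):
    """Naive sketch: position the object at the centroid of its known words."""
    known = [word_vectors[w] for w in words if w in word_vectors]
    return np.mean(known, axis=0)

print(object_vector(["apple", "company"]))
```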

Q: How do you deal with words that have multiple meanings?

A: How can Weaviate interpret that you mean a company as in a business, and not a company as in a division of the army? We do this based on the structure of the schema and the data you add. A schema in Weaviate might contain a Company class with the property name and the value Apple. This simple representation (company, name, apple) is already enough to gravitate the vector position of the data object towards businesses or the iPhone. You can read here how we do this, or you can ask a specific question on Stackoverflow and tag it with Weaviate.
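
For example, a minimal sketch with the Python client of the (company, name, Apple) representation mentioned above (the instance URL and class/property names are assumptions, and an enabled text2vec module is assumed):

```python
import weaviate

client = weaviate.Client("http://localhost:8080")  # assumes a local Weaviate instance

# Hypothetical "Company" class with a single "name" property.
client.schema.create_class({
    "class": "Company",
    "properties": [{"name": "name", "dataType": ["string"]}],
})

# The triple (Company, name, Apple) gives the vectorizer enough context to
# place the object closer to "business" than to an army division.
client.data_object.create({"name": "Apple"}, "Company")
```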

Q: Can I connect my own module?

A: Yes!

Q: What is the difference between Weaviate and for example Elasticsearch?

A: Other database systems like Elasticsearch rely on inverted indices, which makes search super fast. Weaviate also uses inverted indices to store data and values. But additionally, Weaviate is a vector-native search database, which means that data is also stored as vectors, which enables semantic search. This combination of data storage is unique and enables fast, filtered, and semantic search from end to end.
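
As a sketch of what this combination looks like in practice, the query below mixes an inverted-index filter (where) with a vector search (nearText) via the Python client; the "Article" class, the "wordCount" property, and an enabled text2vec module are assumptions:

```python
import weaviate

client = weaviate.Client("http://localhost:8080")  # assumes a local Weaviate instance

result = (
    client.query.get("Article", ["title"])
    # Structured pre-filter served by the inverted index.
    .with_where({"path": ["wordCount"], "operator": "GreaterThan", "valueInt": 1000})
    # Semantic part served by the vector index.
    .with_near_text({"concepts": ["money laundering"]})
    .with_limit(5)
    .do()
)
print(result)
```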

Q: How can slow queries be optimized?

A: Queries containing deeply nested references that need to be filtered or resolved can take some time. Read about optimization strategies here.

Q: Data import takes a long time / is slow (slower than before v1.0.0). What is causing this and what can I do?

A: The first supported vector index type, HNSW, is super fast at query time but slower at indexing. This means that adding and updating data objects costs relatively more time. When other vector index types become available, you can try another vector index type.
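
While imports stay CPU-bound, batching at least amortizes the per-request overhead. A rough sketch with the Python client (the "Article" class and the batch size are assumptions):

```python
import weaviate

client = weaviate.Client("http://localhost:8080")  # assumes a local Weaviate instance

client.batch.configure(batch_size=100)
with client.batch as batch:
    for i in range(1000):
        # Index building for each object is still CPU-bound, but one HTTP
        # request now carries many objects.
        batch.add_data_object({"title": f"Article {i}"}, "Article")
```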

Q: Why did you use GraphQL instead of SPARQL?

A: Two words: user experience. We want to make it as simple as possible to integrate Weaviate into your stack, and we believe that GraphQL is the answer to this. The community and client libraries around GraphQL are enormous, and you can use almost all of them with Weaviate.
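
Because the API is plain GraphQL, any generic GraphQL tooling can talk to Weaviate's /v1/graphql endpoint; for example (the "Article" class and an enabled text2vec module are assumptions):

```python
import requests

query = """
{
  Get {
    Article(nearText: {concepts: ["travel"]}, limit: 3) {
      title
    }
  }
}
"""

# Any GraphQL client works; here we simply POST the query with requests.
response = requests.post("http://localhost:8080/v1/graphql", json={"query": query})
print(response.json())
```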

Q: Do I need to know about Docker (Compose) to use Weaviate?

A: Weaviate uses Docker images as a means to distribute releases and uses Docker Compose to tie a module-rich runtime together. If you are new to those technologies, we recommend reading the Docker Introduction for Weaviate Users.

Q: Can I request a feature in Weaviate?

A: Sure (also, feel free to issue a pull request 😉). You can add those requests here. The only thing you need is a Github account, and while you’re there, make sure to give us a star 😇.

Q: Does Weaviate require NFS volumes on Kubernetes?

A: By default, no NFS volumes are active. In a production setting, we recommend turning etcd disaster recovery on, which requires an NFS volume. The Helm docs contain instructions on how to deploy an nfs-provisioner. For more details, see also this Stack Overflow answer.

Q: Are the filtered vector searches in Weaviate pre-filter or post-filter?

A: The mixed structured/vector searches in Weaviate are pre-filter. An inverted index is queried first to form an allow-list; in the HNSW search, that allow-list is then used to treat non-allowed doc ids only as nodes whose connections can be followed, but not as candidates for the result set.
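
A toy sketch of the pre-filter idea (not Weaviate's actual implementation): the allow-list is built first, and during the vector search only allowed ids are admitted to the result set:

```python
# Doc ids that matched the structured filter via the inverted index.
allow_list = {3, 8, 21, 42}

def filter_candidates(candidates, allow_list, k):
    """Candidates come from the ANN traversal as (doc_id, distance) pairs.
    Non-allowed ids are still traversed, but never added to the results."""
    results = [(doc_id, dist) for doc_id, dist in candidates if doc_id in allow_list]
    return sorted(results, key=lambda pair: pair[1])[:k]

print(filter_candidates([(3, 0.10), (5, 0.20), (21, 0.30)], allow_list, k=2))
```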

Q: What would you say is more important for query speed in Weaviate: More CPU power, or more RAM?

More concretely: If you had to pick between a machine that has 16 GB of RAM and 2 CPUs, or a machine that has 8 GB of RAM and 4 CPUs, which would you pick?

A: This is very difficult to answer 100% correctly, because there are several factors at play:

  • The vector search itself. This part is CPU-bound, but only with regard to throughput: a single search is single-threaded, while multiple parallel searches can use multiple threads. So if you measure the time of a single request (on an otherwise idle machine), it will be the same whether the machine has 1 core or 100. However, if your QPS approaches the throughput of a CPU, you’ll see massive benefits from adding more cores.
  • The retrieval of the objects. Once the vector search part is done, we are essentially left with a list of n IDs which need to be resolved to actual objects. This is IO-bound in general. However, all disk files are memory-mapped, so generally more memory will allow you to hold more of the disk state in memory. In real life, however, it’s not that simple. Searches are rarely evenly distributed. So let’s pretend that 90% of searches return just 10% of the objects (because these are the more popular search results). Then, if those 10% of the disk objects are already cached in memory, there’s no benefit in adding more memory.

With the above in mind, we can carefully say: if throughput is the problem, increase CPU; if response time is the problem, increase memory. Note, however, that the latter only adds value if there are more things that can be cached. If you have enough memory to cache your entire disk state (or at least the parts that are relevant for most queries), additional memory won’t add any benefit. Imports, on the other hand, are almost always CPU-bound because of the cost of building the HNSW index. So, if you can resize between importing and querying, the rough recommendation would be: prefer CPUs while importing and then gradually trade CPU for memory at query time, until you see no more benefits. (This assumes a separation between importing and querying, which might not always be the case in real life.)

Q: With relation to “filtered vector search”: Since this is a two-phase pipeline, how big can that list of IDs get? Do you know how that size might affect query performance?

A: Essentially, the ID list uses Weaviate’s internal doc id, which is a uint64, i.e. 8 bytes per ID. The list can grow as long as you have memory available. So, for example, with 2GB of free memory it could hold 250M ids, with 20GB it could hold 2.5B ids, etc.
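
A quick back-of-the-envelope check of those numbers:

```python
bytes_per_id = 8                          # each doc id is a uint64
ids_in_2gb = int(2e9) // bytes_per_id     # 250,000,000 ids
ids_in_20gb = int(20e9) // bytes_per_id   # 2,500,000,000 ids
print(ids_in_2gb, ids_in_20gb)
```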

Performance-wise, there are two things to consider:

  1. Building the lookup list
  2. Filtering the results when vector searching

Building the list is a typical inverted-index lookup. Depending on the operator, this is just a single read for == (or a set of range reads, e.g. for >7 we’d read the value rows from 7 to infinity). This process is pretty efficient, similar to how the same lookup would happen in a traditional search engine, such as Elasticsearch.

Performing the filtering during the vector search depends on whether the filter is very restrictive or very loose. In the case you mentioned, where a lot of IDs are included, it will be very efficient, because the equivalent of an unfiltered search would be one where your ID list contains all possible IDs. So the HNSW index would behave normally. There is, however, a small penalty whenever a list is present: we need to check whether the current ID is contained in the allow-list. This is essentially a hashmap lookup, so it should be O(1) per object. Nevertheless, there is a slight performance penalty.

Now the other extreme, a very restrictive list, i.e. few IDs on the list, actually takes considerably more time. The HNSW index will find neighboring IDs, but since they are not contained in the list they cannot be added as result candidates, meaning that all we can do with them is evaluate their connections, but not the points themselves. In the extreme case of a very, very restrictive list, say just 10 objects out of 1B, the search can in the worst case become exhaustive if the filtered ids are very far from the query. In this extreme case, it would actually be much more efficient to skip the index and do a brute-force, index-less vector search on just those 10 ids. So there is a cut-off where a brute-force search becomes more efficient than a heavily restricted vector search with HNSW. We do not yet have any optimization to discover such a cut-off point and skip the index, but this should be fairly simple to implement if it ever becomes an actual problem.
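
A toy sketch of that cut-off idea (purely illustrative: the threshold, helper names, and brute-force scoring below are made up, and Weaviate does not implement this yet):

```python
import numpy as np

BRUTE_FORCE_THRESHOLD = 100  # hypothetical cut-off

def brute_force(query_vec, allow_list, vectors, k):
    """Exhaustively score only the allowed ids."""
    scored = [(i, float(np.linalg.norm(vectors[i] - query_vec))) for i in allow_list]
    return sorted(scored, key=lambda pair: pair[1])[:k]

def filtered_search(query_vec, allow_list, vectors, k=10):
    if len(allow_list) <= BRUTE_FORCE_THRESHOLD:
        # Tiny allow-list: skipping the ANN index is cheaper.
        return brute_force(query_vec, allow_list, vectors, k)
    # Larger allow-list: a real implementation would traverse the HNSW graph
    # and admit only allowed ids; we reuse brute force here to stay self-contained.
    return brute_force(query_vec, allow_list, vectors, k)

vectors = {i: np.random.rand(4) for i in range(1_000)}
print(filtered_search(np.random.rand(4), allow_list={1, 2, 3}, vectors=vectors))
```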

Q: Are there any ‘best practices’ or guidelines to consider when designing a schema? E.g., if I were looking to perform a semantic search over the content of a book, should Chapter and Paragraph be represented in the schema, and would this be preferred over including the entire content of the novel in a single property?

A: As a rule of thumb, the smaller the units, the more accurate the search will be. Two objects of, e.g., one sentence each would most likely contain more information in their individual vector embeddings than a single combined vector (which is essentially just the mean of the sentences). At the same time, more objects lead to a longer import time and (since each vector also takes up some data) more space. E.g., when using transformers, a single vector is 768 x float32 = 3KB; this can easily make a difference if you have millions of vectors. As a rule of thumb, the more vectors you have, the more memory you’re going to need.

So, basically, it’s a set of trade-offs. Personally, we’ve had great success using paragraphs as individual units: there’s little benefit in going even more granular, and they are still much more precise than whole chapters, etc.

You can use cross-references to link, e.g., chapters to paragraphs. Note that resolving a cross-reference incurs a slight performance penalty. Essentially, resolving A1->B1 costs the same as looking up both A1 and B1 individually. This cost, however, will probably only matter at really large scale.
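
A minimal sketch of such a book schema with the Python client (the instance URL and class/property names are assumptions): paragraphs as the searchable unit, with a cross-reference from chapters to their paragraphs.

```python
import weaviate

client = weaviate.Client("http://localhost:8080")  # assumes a local Weaviate instance

client.schema.create_class({
    "class": "Paragraph",
    "properties": [{"name": "content", "dataType": ["text"]}],
})

client.schema.create_class({
    "class": "Chapter",
    "properties": [
        {"name": "title", "dataType": ["string"]},
        # Cross-reference: a chapter links to its paragraphs.
        {"name": "hasParagraphs", "dataType": ["Paragraph"]},
    ],
})
```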

Q: If I run a cluster (multiple instances) of Weaviate, do all the instances have to share a filesystem (PERSISTENCE_DATA_PATH)?

A: Horizontal scalability is currently under development with an ETA for the end of Q3 2021. Once it’s in, it will not share filesystems; each node will be truly independent with its own filesystem. The partitioning and replication strategies are modelled after Cassandra and Elasticsearch (Cassandra with regard to the virtual nodes in a ring, Elasticsearch with regard to shards, because we need explicit shards for searching, whereas Cassandra only supports direct access by (primary and partitioning) key). Until then, Weaviate is confined to a single node. If you were to run two processes trying to access the same classes, it would most likely fail, as each process would try to obtain a lock on the same files.

We have recently implemented an LSM-tree-based approach to storage in Weaviate v1.5: object and index storage is no longer done using a B+ tree approach (bolt/bbolt), but uses a custom LSM-tree approach. This speeds up imports by over 100%, depending on the use case.

Q: With your aggregations, I could not see how to do time buckets; is this possible?

A: At the moment, we cannot aggregate over time series into time buckets, but architecturally there’s nothing in the way. If there is demand, this seems like a nice feature request; you can submit an issue here. (We’re a very small company though, and the priority is on horizontal scaling at the moment.)

Q: Can multiple versions of the query/document embedding models co-exist at a given time? (This helps with live experiments on new model versions.)

A: You can create multiple classes in the Weaviate schema, where one class acts like a namespace in Kubernetes or an index in Elasticsearch. The spaces will be completely independent; this allows space 1 to use completely different embeddings from space 2. The configured vectorizer is always scoped to a single class. You can also use Weaviate’s cross-reference features to make a graph-like connection between an object of class 1 and the corresponding object of class 2, to make it easy to see the equivalent in the other space.
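
A sketch of what two such spaces could look like with the Python client (the class names are assumptions, and both vectorizer modules are assumed to be enabled in your setup):

```python
import weaviate

client = weaviate.Client("http://localhost:8080")  # assumes a local Weaviate instance

# Space 1: vectorized with the contextionary module.
client.schema.create_class({
    "class": "ArticleOld",
    "vectorizer": "text2vec-contextionary",
    "properties": [{"name": "content", "dataType": ["text"]}],
})

# Space 2: vectorized with a transformers module, plus a cross-reference
# pointing at the equivalent object in space 1.
client.schema.create_class({
    "class": "ArticleNew",
    "vectorizer": "text2vec-transformers",
    "properties": [
        {"name": "content", "dataType": ["text"]},
        {"name": "sameAs", "dataType": ["ArticleOld"]},
    ],
})
```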

Still have a question?

Look at the:

  1. Knowledge base of old issues. Or,
  2. For questions: Stackoverflow. Or,
  3. For issues: Github. Or,
  4. Ask your question in the Slack channel: Slack.
