
Weaviate 1.2 release - transformer models

2 min read


In the v1.0 release of Weaviate (docs, GitHub) we introduced the concept of modules. Weaviate modules are used to extend the vector database with vectorizers or other functionality that can be used to query your dataset. With the release of Weaviate v1.2, we have introduced the use of transformers (DistilBERT, BERT, RoBERTa, Sentence-BERT, etc.) to vectorize and semantically search through your data.

Weaviate v1.2 introduction video

What are transformers?

A transformer (e.g., BERT) is a deep learning model used for natural language processing (NLP) tasks. Within Weaviate, the transformer module can be used to vectorize and query your data.
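
To make the idea of vectorizing concrete, here is a minimal sketch that turns a sentence into a vector with a Hugging Face transformer. The model name (distilbert-base-uncased) and the mean-pooling step are illustrative assumptions, not necessarily what Weaviate's module does internally.

```python
# Minimal sketch: turning text into a vector with a transformer.
# distilbert-base-uncased and mean pooling are illustrative choices,
# not necessarily what Weaviate's transformer module uses internally.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

text = "Weaviate is a vector database."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into a single sentence vector
mask = inputs["attention_mask"].unsqueeze(-1)
vector = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(vector.shape)  # torch.Size([1, 768])
```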

Getting started with out-of-the-box transformers in Weaviate

By selecting the text module in the Weaviate configuration tool, you can run Weaviate with transformers with a single command. You can learn more about the Weaviate transformer module here.

Weaviate configurator: selecting the Transformers module
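
Once Weaviate is running with the transformers module enabled, using it from the Python client could look roughly like the sketch below. The endpoint, the Article class, and its properties are made-up examples, and the calls assume the Python client API available around the v1.2 release.

```python
import weaviate

# Assumes a local Weaviate with the text2vec-transformers module enabled
client = weaviate.Client("http://localhost:8080")

# Hypothetical class whose objects are vectorized by the transformers module
article_class = {
    "class": "Article",
    "vectorizer": "text2vec-transformers",
    "properties": [
        {"name": "title", "dataType": ["text"]},
        {"name": "content", "dataType": ["text"]},
    ],
}
client.schema.create_class(article_class)

# Objects are vectorized automatically at import time
client.data_object.create(
    {"title": "Transformers in Weaviate", "content": "Semantic search with BERT-style models."},
    "Article",
)
```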

Custom transformer models

You can also use custom transformer models that are compatible with Hugging Face's AutoModel and AutoTokenizer. Learn more about using custom models in Weaviate here.
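
A quick way to check whether a candidate model is usable is to try loading it with AutoModel and AutoTokenizer yourself; distilroberta-base below is just an example model name.

```python
from transformers import AutoModel, AutoTokenizer

# Example model name; any model that loads with both classes should work
model_name = "distilroberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
print(type(tokenizer).__name__, type(model).__name__)
```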

Q&A style questions on your own dataset answered in milliseconds

Weaviate now allows you to get sub-50ms query results by using transformers on your own data. You can learn more about Weaviate's speed in combination with transformers in this article.
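
For illustration, a question-style semantic query against the hypothetical Article class from the earlier sketch could look like this, again assuming the Python client of that era:

```python
import weaviate

client = weaviate.Client("http://localhost:8080")

# The transformers module vectorizes the query text and Weaviate returns
# the semantically closest objects
result = (
    client.query
    .get("Article", ["title", "content"])
    .with_near_text({"concepts": ["How do transformers work?"]})
    .with_limit(3)
    .do()
)
print(result)
```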

Ready to start building?

Check out the Quickstart tutorial, or build amazing apps with a free trial of Weaviate Cloud (WCD).

Don't want to miss another blog post?

Sign up for our bi-weekly newsletter to stay updated!

