GPT4All Embeddings with Weaviate

Weaviate's integration with GPT4All allows you to access GPT4All models' capabilities directly from Weaviate.

Configure a Weaviate vector index to use a GPT4All embedding model, and Weaviate will generate embeddings for various operations using the specified model via the GPT4All inference container. This feature is called the vectorizer.

At import time, Weaviate generates text object embeddings and saves them into the index. For vector and hybrid search operations, Weaviate converts text queries into embeddings.

Embedding integration illustration

This module is optimized for CPU using the ggml library, allowing for fast inference even without a GPU.

Requirements

The GPT4All integration is currently only available for amd64/x86_64 architecture devices, as the gpt4all library does not support ARM devices such as Apple M-series.
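
If you are unsure of your host's architecture, a quick sanity check with Python's standard platform module (a convenience sketch, not part of the integration itself) looks like this:

import platform

# GPT4All inference requires an amd64/x86_64 host; ARM (e.g. arm64/aarch64) is unsupported
arch = platform.machine().lower()
print(f"Detected architecture: {arch}")
print("Compatible" if arch in ("x86_64", "amd64") else "Not compatible")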

Weaviate configuration

Your Weaviate instance must be configured with the GPT4All vectorizer integration (text2vec-gpt4all) module.

For Weaviate Cloud (WCD) users

This integration is not available for Weaviate Cloud (WCD) serverless instances, as it requires a locally running GPT4All instance.

For self-hosted users

Configure the integration

To use this integration, you must configure the container image of the GPT4All model and the inference endpoint of the containerized model.

The following example shows how to configure the GPT4All integration in Weaviate:

Docker Option 1: Use a pre-configured docker-compose.yml file

Follow the instructions on the Weaviate Docker installation configurator to download a pre-configured docker-compose.yml file with a selected model.


Docker Option 2: Add the configuration manually

Alternatively, add the configuration to the docker-compose.yml file manually as in the example below.

version: '3.4'
services:
  weaviate:
    # Other Weaviate configuration
    environment:
      GPT4ALL_INFERENCE_API: http://text2vec-gpt4all:8080 # Set the inference API endpoint
  text2vec-gpt4all: # Set the name of the inference container
    image: cr.weaviate.io/semitechnologies/gpt4all-inference:all-MiniLM-L6-v2
  • GPT4ALL_INFERENCE_API environment variable sets the inference API endpoint
  • text2vec-gpt4all is the name of the inference container; the service name must match the hostname used in GPT4ALL_INFERENCE_API
  • image is the container image

Credentials

As this integration connects to a local GPT4All container, no additional credentials (e.g. API key) are required. Connect to Weaviate as usual, such as in the examples below.

import weaviate

client = weaviate.connect_to_local()

# Work with Weaviate

client.close()
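
Before closing the connection, you can optionally confirm that the text2vec-gpt4all module is enabled by inspecting the instance metadata. A minimal check (the exact shape of the metadata response may vary by Weaviate version):

meta = client.get_meta()

# Enabled modules are listed under the "modules" key
print("text2vec-gpt4all" in meta.get("modules", {}))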

Configure the vectorizer

Configure a Weaviate index to use a GPT4All embedding model by setting the vectorizer as follows:

from weaviate.classes.config import Configure

client.collections.create(
    "DemoCollection",
    vectorizer_config=[
        Configure.NamedVectors.text2vec_gpt4all(
            name="title_vector",
            source_properties=["title"],
        )
    ],
    # Additional parameters not shown
)

Currently, the only available model is all-MiniLM-L6-v2.

Data import

After configuring the vectorizer, import data into Weaviate. Weaviate generates embeddings for text objects using the specified model.

collection = client.collections.get("DemoCollection")

with collection.batch.dynamic() as batch:
    for src_obj in source_objects:
        weaviate_obj = {
            "title": src_obj["title"],
            "description": src_obj["description"],
        }

        # The model provider integration will automatically vectorize the object
        batch.add_object(
            properties=weaviate_obj,
            # vector=vector  # Optionally provide a pre-obtained vector
        )

Re-use existing vectors

If you already have a compatible model vector available, you can provide it directly to Weaviate. This can be useful if you have already generated embeddings using the same model and want to use them in Weaviate, such as when migrating data from another system.
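
For example, with the named vector configured above, a pre-obtained vector can be passed to batch.add_object as a mapping from the vector name to the embedding. A minimal sketch, where precomputed_vector is a hypothetical variable holding an embedding produced by the same all-MiniLM-L6-v2 model:

# Inside the batch context shown above
batch.add_object(
    properties=weaviate_obj,
    # precomputed_vector: an embedding from the same model (hypothetical variable)
    vector={"title_vector": precomputed_vector},
)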

Searches

Once the vectorizer is configured, Weaviate will perform vector and hybrid search operations using the specified GPT4All model.

Embedding integration at search illustration

When you perform a vector search, Weaviate converts the text query into an embedding using the specified model and returns the most similar objects from the database.

The query below returns the n most similar objects from the database, set by limit.

collection = client.collections.get("DemoCollection")

response = collection.query.near_text(
    query="A holiday film",  # The model provider integration will automatically vectorize the query
    limit=2
)

for obj in response.objects:
    print(obj.properties["title"])
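
To see how close each result is to the query, you can also request distance metadata with the query. A short sketch using the v4 Python client's MetadataQuery helper:

from weaviate.classes.query import MetadataQuery

response = collection.query.near_text(
    query="A holiday film",
    limit=2,
    return_metadata=MetadataQuery(distance=True),  # Include the vector distance in the response
)

for obj in response.objects:
    print(obj.properties["title"], obj.metadata.distance)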

What is a hybrid search?

A hybrid search performs a vector search and a keyword (BM25) search, before combining the results to return the best matching objects from the database.

When you perform a hybrid search, Weaviate converts the text query into an embedding using the specified model and returns the best scoring objects from the database.

The query below returns the n best scoring objects from the database, set by limit.

collection = client.collections.get("DemoCollection")

response = collection.query.hybrid(
    query="A holiday film",  # The model provider integration will automatically vectorize the query
    limit=2
)

for obj in response.objects:
    print(obj.properties["title"])
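
The balance between the vector and keyword components can be tuned with the alpha parameter (1 is a pure vector search, 0 is a pure keyword search). A brief sketch:

response = collection.query.hybrid(
    query="A holiday film",
    alpha=0.75,  # Weight the vector component more heavily than the keyword component
    limit=2
)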

References

Available models

Currently, the only available model is all-MiniLM-L6-v2.

Further resources

Code examples

Once the integration is configured for a collection, data management and search operations in Weaviate work identically to any other collection. See the following model-agnostic examples:

  • The how-to: manage data guides show how to perform data operations (i.e. create, update, delete).
  • The how-to: search guides show how to perform search operations (i.e. vector, keyword, hybrid) as well as retrieval augmented generation.

If you have any questions or feedback, let us know in the user forum.