
text2vec-gpt4all

Overview

Availability
  • text2vec-gpt4all added in v1.21
  • Currently, text2vec-gpt4all is only available for amd64/x86_64 architecture devices.
    • This is because the gpt4all library does not currently support ARM devices, such as Apple M-series.

The text2vec-gpt4all module enables Weaviate to obtain vectors using the gpt4all library.

Key notes:

  • This module is not available on Weaviate Cloud Services (WCS).
  • This module is optimized for CPU using the ggml library, allowing for fast inference even without a GPU.
  • Enabling this module will enable the nearText search operator.
  • By default, input text longer than 256 tokens is processed with an overlapping context window, up to the number of tokens in your input, and the per-window results are mean-pooled (see the sketch after this list).
  • Currently, the only available model is all-MiniLM-L6-v2.
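
As a rough illustration of that pooling behavior, the sketch below embeds a long token sequence window by window and averages the results. The window size, stride, and embed_window function here are hypothetical stand-ins for illustration, not the module's actual internals:

from typing import Callable, List

import numpy as np

def embed_long_text(
    tokens: List[str],
    embed_window: Callable[[List[str]], np.ndarray],  # hypothetical per-window embedder
    window: int = 256,  # matches the module's 256-token limit
    stride: int = 128,  # overlap between consecutive windows (assumed)
) -> np.ndarray:
    # Short inputs fit in a single context window.
    if len(tokens) <= window:
        return embed_window(tokens)
    # Embed each overlapping window, then mean-pool the window vectors.
    vectors = [
        embed_window(tokens[start : start + window])
        for start in range(0, len(tokens) - window + stride, stride)
    ]
    return np.mean(vectors, axis=0)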

Weaviate instance configuration

Not applicable to WCS

This module is not available on Weaviate Cloud Services.

Docker Compose file

To use text2vec-gpt4all, you must enable it in your Docker Compose file (docker-compose.yml). You can do so manually, or create one using the Weaviate configuration tool.

Parameters

  • ENABLE_MODULES (Required): The modules to enable. Include text2vec-gpt4all to enable the module.
  • DEFAULT_VECTORIZER_MODULE (Optional): The default vectorizer module. You can set this to text2vec-gpt4all to make it the default for all classes.

Example

This configuration enables text2vec-gpt4all, sets it as the default vectorizer, and runs the gpt4all inference container alongside Weaviate.

---
version: '3.4'
services:
  weaviate:
    command:
    - --host
    - 0.0.0.0
    - --port
    - '8080'
    - --scheme
    - http
    image: cr.weaviate.io/semitechnologies/weaviate:1.24.10
    ports:
    - 8080:8080
    - 50051:50051
    restart: on-failure:0
    environment:
      QUERY_DEFAULTS_LIMIT: 25
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
      DEFAULT_VECTORIZER_MODULE: 'text2vec-gpt4all'
      ENABLE_MODULES: 'text2vec-gpt4all'
      GPT4ALL_INFERENCE_API: 'http://text2vec-gpt4all:8080'
      CLUSTER_HOSTNAME: 'node1'
  text2vec-gpt4all:
    image: cr.weaviate.io/semitechnologies/gpt4all-inference:all-MiniLM-L6-v2
...
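
Once the containers are running (for example with docker compose up -d), you can confirm that Weaviate has the module enabled through its /v1/meta endpoint. A minimal check, assuming the Python v4 client and the default local ports from the example above:

import weaviate

# Connects to localhost:8080 (REST) and localhost:50051 (gRPC) by default.
client = weaviate.connect_to_local()

try:
    meta = client.get_meta()
    # Enabled modules are listed under the "modules" key of /v1/meta.
    assert "text2vec-gpt4all" in meta["modules"], "text2vec-gpt4all is not enabled"
    print("text2vec-gpt4all is enabled")
finally:
    client.close()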

Class configuration

You can configure how the module will behave in each class through the Weaviate schema.

Example

The following example configures the Article class by setting the vectorizer to text2vec-gpt4all:

{
  "classes": [
    {
      "class": "Article",
      "description": "A class called article",
      "vectorizer": "text2vec-gpt4all"
    }
  ]
}
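
For comparison, the same class can be created with the Python v4 client. This is a sketch assuming a recent client version, in which the Configure.Vectorizer.text2vec_gpt4all() helper selects this module:

import weaviate
import weaviate.classes as wvc

client = weaviate.connect_to_local()

try:
    # Create the Article collection with text2vec-gpt4all as its vectorizer.
    client.collections.create(
        name="Article",
        description="A class called article",
        vectorizer_config=wvc.config.Configure.Vectorizer.text2vec_gpt4all(),
    )
finally:
    client.close()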

Vectorization settings

You can set vectorizer behavior using the moduleConfig section under each class and property:

Class-level

  • vectorizer – which module to use to vectorize the data.
  • vectorizeClassName – whether to vectorize the class name. Default: true.

Property-level

  • skip – whether to skip vectorizing the property altogether. Default: false.
  • vectorizePropertyName – whether to vectorize the property name. Default: false.

Example

{
  "classes": [
    {
      "class": "Article",
      "description": "A class called article",
      "vectorizer": "text2vec-gpt4all",
      "moduleConfig": {
        "text2vec-gpt4all": {
          "vectorizeClassName": false
        }
      },
      "properties": [
        {
          "name": "content",
          "dataType": ["text"],
          "description": "Content that will be vectorized",
          "moduleConfig": {
            "text2vec-gpt4all": {
              "skip": false,
              "vectorizePropertyName": false
            }
          }
        }
      ]
    }
  ]
}
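
With the Python v4 client, the same settings map onto constructor parameters. In the sketch below, vectorize_collection_name, skip_vectorization, and vectorize_property_name are the assumed client-side counterparts of the schema keys above:

import weaviate
import weaviate.classes as wvc

client = weaviate.connect_to_local()

try:
    client.collections.create(
        name="Article",
        description="A class called article",
        # Class-level: vectorizeClassName -> vectorize_collection_name
        vectorizer_config=wvc.config.Configure.Vectorizer.text2vec_gpt4all(
            vectorize_collection_name=False,
        ),
        properties=[
            wvc.config.Property(
                name="content",
                data_type=wvc.config.DataType.TEXT,
                description="Content that will be vectorized",
                # Property-level: skip -> skip_vectorization,
                # vectorizePropertyName -> vectorize_property_name
                skip_vectorization=False,
                vectorize_property_name=False,
            ),
        ],
    )
finally:
    client.close()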

Additional information

Available models

Currently, the only available model is all-MiniLM-L6-v2.

CPU optimized inference

The text2vec-gpt4all module is optimized for CPU inference and should be noticeably faster than text2vec-transformers in CPU-only (i.e. no CUDA acceleration) usage. You can read more about expected inference times here.

Usage advice - chunking text with gpt4all

text2vec-gpt4all will truncate input text longer than 256 tokens (word pieces).

Accordingly, this model is not suitable for use cases where larger chunks are required. In such cases, we recommend using models that support longer input lengths, such as those available through the text2vec-transformers module or text2vec-openai.
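
If your content is longer than that, one simple approach is to split it into overlapping chunks before import and store one object per chunk. The sketch below is a rough illustration that uses whitespace words as a stand-in for word pieces, so it deliberately stays well under the 256-token limit:

from typing import List

def chunk_words(text: str, max_words: int = 200, overlap: int = 20) -> List[str]:
    # Whitespace words are only an approximation of word pieces; 200 words
    # leaves headroom below the model's 256-token limit.
    words = text.split()
    step = max_words - overlap
    return [
        " ".join(words[i : i + max_words])
        for i in range(0, max(len(words) - overlap, 1), step)
    ]

Each chunk can then be imported as its own object, so nearText results point at the matching chunk rather than the whole document.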

Usage example

This is an example of a nearText query with text2vec-gpt4all.

import weaviate
import weaviate.classes as wvc
from weaviate.collections.classes.grpc import Move

client = weaviate.connect_to_local()

try:
    publications = client.collections.get("Publication")

    response = publications.query.near_text(
        query="fashion",
        distance=0.6,
        move_to=Move(force=0.85, concepts="haute couture"),
        move_away=Move(force=0.45, concepts="finance"),
        return_metadata=wvc.query.MetadataQuery(distance=True),
        limit=2,
    )

    for o in response.objects:
        print(o.properties)
        print(o.metadata)

finally:
    client.close()

Model license(s)

The text2vec-gpt4all module uses the gpt4all library, which in turn uses the all-MiniLM-L6-v2 model. Please refer to the respective documentation for more information on their licenses.

It is your responsibility to evaluate whether the terms of its license(s), if any, are appropriate for your intended use.

Questions and feedback

If you have any questions or feedback, let us know in our user forum.