
Locally Hosted ImageBind Embeddings + Weaviate

New Documentation

The model provider integration pages are new and still undergoing improvements. We appreciate any feedback on this forum thread.

Weaviate's integration with the Meta ImageBind library allows you to access its capabilities directly from Weaviate. The ImageBind model supports multiple modalities (text, image, audio, video, thermal, IMU and depth).

Configure a Weaviate vector index to use the ImageBind integration, and configure the Weaviate instance with the model's container image. Weaviate will then generate embeddings for various operations using the specified model running in the ImageBind inference container. This feature is called the vectorizer.

At import time, Weaviate generates multimodal object embeddings and saves them into the index. For vector and hybrid search operations, Weaviate converts queries of one or more modalities into embeddings.

Embedding integration illustration

Requirements

Weaviate configuration

Your Weaviate instance must be configured with the ImageBind vectorizer integration (multi2vec-bind) module.

For Weaviate Cloud (WCD) users

This integration is not available for Weaviate Cloud (WCD) serverless instances, as it requires spinning up a container with the ImageBind model.

Enable the integration module

Configure the integration

To use this integration, you must configure the container image of the ImageBind model, and the inference endpoint of the containerized model.

The following example shows how to configure the ImageBind integration in Weaviate:

Docker Option 1: Use a pre-configured docker-compose.yml file

Follow the instructions on the Weaviate Docker installation configurator to download a pre-configured docker-compose.yml file with a selected model.


Docker Option 2: Add the configuration manually

Alternatively, add the configuration to the docker-compose.yml file manually as in the example below.

version: '3.4'
services:
  weaviate:
    # Other Weaviate configuration
    environment:
      BIND_INFERENCE_API: http://multi2vec-bind:8080  # Set the inference API endpoint
  multi2vec-bind:  # Set the name of the inference container
    mem_limit: 12g
    image: cr.weaviate.io/semitechnologies/multi2vec-bind:imagebind
    environment:
      ENABLE_CUDA: 0  # Set to 1 to enable
  • BIND_INFERENCE_API environment variable sets the inference API endpoint
  • multi2vec-bind is the name of the inference container
  • image is the container image
  • ENABLE_CUDA environment variable enables GPU usage

Credentials

As this integration runs a local container with the ImageBind model, no additional credentials (e.g. API key) are required.
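You can therefore connect to the local instance without any authentication. The following is a minimal sketch using the Python client v4; it assumes Weaviate is running with the docker-compose setup shown above and optionally checks the server metadata to confirm that the multi2vec-bind module is enabled.

import weaviate

# Connect to the locally hosted Weaviate instance (no API key required)
client = weaviate.connect_to_local()

# Optional sanity check: the multi2vec-bind module should appear in the server metadata
meta = client.get_meta()
print("multi2vec-bind enabled:", "multi2vec-bind" in meta.get("modules", {}))

client.close()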

Configure the vectorizer

Configure a Weaviate index to use an ImageBind embedding model by setting the vectorizer as follows:

from weaviate.classes.config import Configure, Property, DataType, Multi2VecField

client.collections.create(
    "DemoCollection",
    properties=[
        Property(name="title", data_type=DataType.TEXT),
        Property(name="poster", data_type=DataType.BLOB),
    ],
    vectorizer_config=[
        Configure.NamedVectors.multi2vec_bind(
            name="title_vector",
            # Define the fields to be used for the vectorization - using image_fields, text_fields, video_fields
            image_fields=[
                Multi2VecField(name="poster", weight=0.9)
            ],
            text_fields=[
                Multi2VecField(name="title", weight=0.1)
            ]
        )
    ],
    # Additional parameters not shown
)

There is only one ImageBind model available.

Data import

After configuring the vectorizer, import data into Weaviate. Weaviate generates embeddings for the objects using the specified model.

collection = client.collections.get("DemoCollection")

with collection.batch.dynamic() as batch:
    for src_obj in source_objects:
        poster_b64 = url_to_base64(src_obj["poster_path"])
        weaviate_obj = {
            "title": src_obj["title"],
            "description": src_obj["description"],
            "poster": poster_b64  # Add the image in base64 encoding
        }

        # The model provider integration will automatically vectorize the object
        batch.add_object(
            properties=weaviate_obj,
            # vector=vector  # Optionally provide a pre-obtained vector
        )
Re-use existing vectors

If you already have a compatible model vector available, you can provide it directly to Weaviate. This can be useful if you have already generated embeddings using the same model and want to use them in Weaviate, such as when migrating data from another system.
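For instance, a minimal sketch of supplying such a vector at import time might look like the following. Here, weaviate_obj is an object as in the import example above, and existing_vector is a hypothetical placeholder for an embedding you generated yourself with the same ImageBind model, keyed by the named vector configured for the collection.

collection = client.collections.get("DemoCollection")

with collection.batch.dynamic() as batch:
    # existing_vector is assumed to be a pre-computed ImageBind embedding for this object
    batch.add_object(
        properties=weaviate_obj,
        vector={"title_vector": existing_vector}  # Keyed by the named vector defined above
    )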

Searches

Once the vectorizer is configured, Weaviate will perform vector and hybrid search operations using the specified ImageBind model.

Embedding integration at search illustration

When you perform a vector search, Weaviate converts the text query into an embedding using the specified model and returns the most similar objects from the database.

The query below returns the n most similar objects from the database, where n is set by limit.

collection = client.collections.get("DemoCollection")

response = collection.query.near_text(
    query="A holiday film",  # The model provider integration will automatically vectorize the query
    limit=2
)

for obj in response.objects:
    print(obj.properties["title"])
What is a hybrid search?

A hybrid search performs a vector search and a keyword (BM25) search, before combining the results to return the best matching objects from the database.

When you perform a hybrid search, Weaviate converts the text query into an embedding using the specified model and returns the best scoring objects from the database.

The query below returns the n best scoring objects from the database, where n is set by limit.

collection = client.collections.get("DemoCollection")

response = collection.query.hybrid(
    query="A holiday film",  # The model provider integration will automatically vectorize the query
    limit=2
)

for obj in response.objects:
    print(obj.properties["title"])

When you perform a media search such as a near image search, Weaviate converts the query into an embedding using the specified model and returns the most similar objects from the database.

To perform a near media search such as near image search, convert the media query into a base64 string and pass it to the search query.

The query below returns the n objects most similar to the input image, where n is set by limit.

def url_to_base64(url):
    import requests
    import base64

    image_response = requests.get(url)
    content = image_response.content
    return base64.b64encode(content).decode("utf-8")


collection = client.collections.get("DemoCollection")

query_b64 = url_to_base64(src_img_path)

response = collection.query.near_image(
    near_image=query_b64,
    limit=2,
    return_properties=["title", "release_date", "tmdb_id", "poster"]  # To include the poster property in the response (`blob` properties are not returned by default)
)

for obj in response.objects:
    print(obj.properties["title"])

You can perform similar searches for other media types, such as audio, video, thermal, IMU, and depth, by using the equivalent search query for the respective media type.
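As an illustration, a near audio search might look like the sketch below. It assumes query_audio_b64 holds a base64-encoded audio clip (prepared in the same way as the image query above) and uses the Python client's near_media query with NearMediaType.AUDIO.

from weaviate.classes.query import NearMediaType

collection = client.collections.get("DemoCollection")

# query_audio_b64 is assumed to be a base64-encoded audio clip
response = collection.query.near_media(
    media=query_audio_b64,
    media_type=NearMediaType.AUDIO,
    limit=2
)

for obj in response.objects:
    print(obj.properties["title"])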

References

Vectorizer parameters

The ImageBind vectorizer supports multiple modalities (text, image, audio, video, thermal, IMU and depth). One or more of these can be specified in the vectorizer configuration as shown.

from weaviate.classes.config import Configure, Property, DataType, Multi2VecField

client.collections.create(
    "DemoCollection",
    properties=[
        Property(name="title", data_type=DataType.TEXT),
        Property(name="poster", data_type=DataType.BLOB),
        Property(name="sound", data_type=DataType.BLOB),
        Property(name="video", data_type=DataType.BLOB),
    ],
    vectorizer_config=[
        Configure.NamedVectors.multi2vec_bind(
            name="title_vector",
            # Define the fields to be used for the vectorization
            image_fields=[
                Multi2VecField(name="poster", weight=0.7)
            ],
            text_fields=[
                Multi2VecField(name="title", weight=0.1)
            ],
            audio_fields=[
                Multi2VecField(name="sound", weight=0.1)
            ],
            video_fields=[
                Multi2VecField(name="video", weight=0.1)
            ],
            # depth, IMU and thermal fields are also available
        )
    ],
    # Additional parameters not shown
)

Available models

There is only one ImageBind model available.

Further resources

Code examples

Once the integration is configured for the collection, data management and search operations in Weaviate work identically to any other collection. See the following model-agnostic examples:

  • The how-to: manage data guides show how to perform data operations (i.e. create, update, delete).
  • The how-to: search guides show how to perform search operations (i.e. vector, keyword, hybrid) as well as retrieval augmented generation.

Model licenses

Review the license for the model on the ImageBind page.

It is your responsibility to evaluate whether the terms of its license(s), if any, are appropriate for your intended use.

External resources

If you have any questions or feedback, let us know in the user forum.