

Added in v1.24.3

The multi2vec-palm module uses a Google multimodal embedding model to create vectors from text or images.


  • This module enables the nearText and nearImage search operators.

  • multi2vec-palm uses an external API.

    • Check vendor pricing before you vectorize data.
    • Obtain an API key from the vendor.
  • This module is only compatible with Google Vertex AI. It is not compatible with Google AI Studio.

  • The module is not compatible with Auto-schema. Define your collections manually.

Weaviate instance configuration


If you use Weaviate Cloud Services (WCS), this module is already enabled and pre-configured. You cannot edit the configuration in WCS.

Docker Compose file

To use the multi2vec-palm module, enable it in your Docker Compose file. Edit your docker-compose.yml manually or use the Weaviate configuration tool to generate the file.


  • location (required; no default): Where the model runs (e.g. "us-central1").
  • projectId (required; e.g. "<Your GCP project>"): The name of your GCP project.
  • modelId (optional; default "multimodalembedding@001"): Currently the only available model.
  • dimensions (optional; default 1408): Must be one of: 128, 256, 512, 1408.
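As a sketch, these parameters appear in a collection definition under moduleConfig.multi2vec-palm. The values below are placeholders; substitute your own location and project:

```json
"moduleConfig": {
  "multi2vec-palm": {
    "location": "us-central1",
    "projectId": "<Your GCP project>",
    "modelId": "multimodalembedding@001",
    "dimensions": 512
  }
}
```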

Specify the API key as a request header or an environment variable.

  • Request header: X-Palm-Api-Key
  • Environment variable: PALM_APIKEY
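A minimal Python sketch of the two options (the token value is a placeholder):

```python
import os

# Option 1: set the PALM_APIKEY environment variable before starting Weaviate.
os.environ["PALM_APIKEY"] = "YOUR_VERTEX_AI_TOKEN"  # placeholder value

# Option 2: send the key with each request as the X-Palm-Api-Key header.
headers = {"X-Palm-Api-Key": os.environ["PALM_APIKEY"]}
```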

Configure multi2vec-palm for Vertex AI

This module is only supported in Google Vertex AI. It is not supported in Google AI Studio.

To enable the Vertex AI API on your Google Cloud project, follow Google's instructions.

Vertex AI API key

The Vertex AI API key is called an access token in Google Cloud.

To retrieve your token, install the Google Cloud CLI tool and run this command:

gcloud auth print-access-token

Token expiration


By default, Google Cloud's OAuth 2.0 access tokens have a lifetime of 1 hour. You can create tokens that last up to 12 hours. To create longer-lasting tokens, follow the instructions in the Google Cloud IAM Guide.

Since the OAuth token is only valid for a limited time, you must periodically replace the token with a new one. After you generate the new token, you have to re-instantiate your Weaviate client to use it.

You can update the OAuth token manually, but manual updates may not be appropriate for your use case.

You can also automate the OAuth token update. Weaviate does not control the update procedure, but here are some automation options:

With Google Cloud CLI

If you are using the Google Cloud CLI, write a script that periodically refreshes the token and extracts the result.

Python code to extract the token looks like this:

client = re_instantiate_weaviate()

This is the re_instantiate_weaviate function:

import subprocess
import weaviate

def refresh_token() -> str:
    # Ask the gcloud CLI for a fresh access token.
    result = subprocess.run(
        ["gcloud", "auth", "print-access-token"], capture_output=True, text=True
    )
    if result.returncode != 0:
        print(f"Error refreshing token: {result.stderr}")
        return None
    return result.stdout.strip()

def re_instantiate_weaviate() -> weaviate.Client:
    token = refresh_token()

    client = weaviate.Client(
        url="http://localhost:8080",  # Replace with your Weaviate instance URL
        additional_headers={
            "X-PaLM-Api-Key": token,
        },
    )
    return client

# Run this every ~60 minutes
client = re_instantiate_weaviate()

With google-auth

Another way is to use Google's own authentication library, google-auth.

See the google-auth documentation for the Python and Node.js libraries.

You can then periodically call the refresh function (see the Python docs) to obtain a renewed token and re-instantiate the Weaviate client.

For example, you could periodically run:

client = re_instantiate_weaviate()

Where re_instantiate_weaviate is something like:

from google.auth.transport.requests import Request
from google.oauth2.service_account import Credentials
import weaviate
import os

def get_credentials() -> Credentials:
    credentials = Credentials.from_service_account_file(
        "path/to/service-account.json",  # Replace with your service account key file
        scopes=["https://www.googleapis.com/auth/cloud-platform"],
    )
    request = Request()
    credentials.refresh(request)  # Fetch a fresh access token
    return credentials

def re_instantiate_weaviate() -> weaviate.WeaviateClient:
    credentials = get_credentials()
    token = credentials.token

    client = weaviate.connect_to_wcs(  # e.g. if you use the Weaviate Cloud Service
        cluster_url="https://WEAVIATE_INSTANCE_URL",  # Replace WEAVIATE_INSTANCE_URL with the URL
        auth_credentials=weaviate.auth.AuthApiKey(os.getenv("WCS_DEMO_RO_KEY")),  # Replace with your WCS key
        headers={
            "X-PaLM-Api-Key": token,
        },
    )
    return client

# Run this every ~60 minutes
client = re_instantiate_weaviate()

The service account key shown above can be generated by following this guide.


This configuration does the following:

  • enables multi2vec-palm
  • sets multi2vec-palm as the default vectorizer
  • uses an environment variable to set the PaLM API key
version: '3.4'
services:
  weaviate:
    image: cr.weaviate.io/semitechnologies/weaviate:1.24.3
    restart: on-failure:0
    ports:
      - 8080:8080
      - 50051:50051
    environment:
      DEFAULT_VECTORIZER_MODULE: multi2vec-palm
      ENABLE_MODULES: multi2vec-palm
      PALM_APIKEY: sk-replace-with-your-api-key # Or provide the key at query time.

Collection configuration

To specify module behavior in a collection, edit the Weaviate schema.

Vectorization settings

Set vectorizer behavior in the moduleConfig section for each collection and property.

Collection-level settings

  • vectorizer: The module to use to vectorize the data.
  • vectorizeClassName: When true, vectorize the collection name. Defaults to true.
  • <media>Fields: Map property names to modalities (under moduleConfig.multi2vec-palm). One of: textFields, imageFields.
  • weights: Change the contribution of the different modalities when calculating the vector.

Property-level settings

  • skip: When true, do not vectorize the property. Defaults to false.
  • vectorizePropertyName: When true, vectorize the property name. Defaults to true.
  • dataType: The property's data type. Use in <media>Fields. One of: text, blob.


This collection definition sets the following:

  • The multi2vec-palm module is the vectorizer for the collection MultimodalExample.
  • The name property is a text datatype and is a text field.
  • The image property is a blob datatype and is an image field.
"classes": [
"class": "MultimodalExample",
"description": "An example collection for multi2vec-palm",
"vectorizer": "multi2vec-palm",
"moduleConfig": {
"multi2vec-palm": {
"textFields": ["name"],
"imageFields": ["image"],
"properties": [
"dataType": ["text"],
"name": "name"
"dataType": ["blob"],
"name": "image"

Example with weights

The following example adds weights:

  • textFields is 0.7
  • imageFields is 0.3
"classes": [
"class": "MultimodalExample",
"moduleConfig": {
"multi2vec-palm": {
"weights": {
"textFields": [0.7],
"imageFields": [0.3],

blob data objects

Data that has the blob property type must be base64 encoded. To get the base64-encoded value of an image, use the helper methods in the Weaviate clients or run the following command:

cat my_image.png | base64
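Equivalently, a small Python helper can produce the encoded string; this is a sketch, and my_image.png is a placeholder path:

```python
import base64

def image_to_base64(path: str) -> str:
    """Read a file and return its base64-encoded string,
    as Weaviate expects for blob properties."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")
```

For example, image_to_base64("my_image.png") returns the same string as the shell command above, without line wrapping.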

Additional information

Available models

Currently, the only available model is multimodalembedding@001.

Additional search operators

The multi2vec-palm vectorizer module enables the nearText and nearImage search operators.

These operators can do cross-modal search and retrieval.

All objects are encoded into a single vector space. This means a query that uses one modality, such as text, returns results from all available modalities.

Usage example


# Example: nearText search (Python client v4)
import weaviate
import weaviate.classes as wvc
from weaviate.collections.classes.grpc import Move

client = weaviate.connect_to_local()

publications = client.collections.get("Publication")

response = publications.query.near_text(
    query="fashion",  # Placeholder query; replace with your own search text
    move_to=Move(force=0.85, concepts="haute couture"),
    move_away=Move(force=0.45, concepts="finance"),
    return_metadata=wvc.query.MetadataQuery(distance=True),
    limit=2,
)

for o in response.objects:
    print(o.properties)
    print(o.metadata.distance)

client.close()


# Example: nearImage search (Python client v3)
import weaviate

client = weaviate.Client("http://localhost:8080")

nearImage = {"image": "/9j/4AAQSkZJRgABAgE..."}

result = (
    client.query
    .get("FashionItem", "image")
    .with_near_image(nearImage, encode=False)  # The image is already base64-encoded
    .do()
)

print(result)

Questions and feedback

If you have any questions or feedback, please let us know on our forum.