Google AI + Weaviate

Google AI offers a wide range of models for natural language processing and generation. Weaviate seamlessly integrates with Google AI Studio and Google Vertex AI APIs, allowing users to leverage Google AI's models directly within the Weaviate database.

These integrations empower developers to build sophisticated AI-driven applications with ease.

Integrations with Google AI

Embedding integration illustration

Google AI's embedding models transform text data into high-dimensional vector representations, capturing semantic meaning and context.

Weaviate integrates with Google AI's embedding models to enable seamless vectorization of data. This integration allows users to perform semantic and hybrid search operations without the need for additional preprocessing or data transformation steps.
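As a sketch of what this enables, the snippet below runs a semantic search against a hypothetical `Article` collection that has already been configured with a Google embedding model. It assumes the v4 Python client and a running Weaviate instance; the API-key header name and collection name are assumptions, and exact names vary by client version.

```python
import weaviate

# Connect to a local Weaviate instance; pass the Google API key in a header
# so Weaviate can call the embedding model on your behalf.
# NOTE: the header name below is an assumption; check your client version's docs.
client = weaviate.connect_to_local(
    headers={"X-Google-Studio-Api-Key": "YOUR_GOOGLE_API_KEY"}
)

articles = client.collections.get("Article")  # hypothetical collection

# Semantic search: Weaviate vectorizes the query text with the configured
# Google embedding model and returns the nearest objects -- no manual
# preprocessing or vectorization step is needed.
response = articles.query.near_text(query="renewable energy policy", limit=3)

for obj in response.objects:
    print(obj.properties)

client.close()
```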

Google AI embedding integration page

Generative AI models for RAG

Single prompt RAG integration generates individual outputs per search result

Google AI's generative AI models can generate human-like text based on given prompts and contexts.

Weaviate's generative AI integration enables users to perform retrieval augmented generation (RAG) directly within the Weaviate database. This combines Weaviate's efficient storage and fast retrieval capabilities with Google AI's generative AI models to generate personalized and context-aware responses.
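A minimal sketch of a single-prompt RAG query follows, assuming the v4 Python client, a running Weaviate instance, and a hypothetical `Article` collection configured with a Google generative model. The header name and the `{title}` property are assumptions for illustration.

```python
import weaviate

# NOTE: the header name below is an assumption; check your client version's docs.
client = weaviate.connect_to_local(
    headers={"X-Google-Studio-Api-Key": "YOUR_GOOGLE_API_KEY"}
)

articles = client.collections.get("Article")  # hypothetical collection

# Single-prompt RAG: retrieve the closest objects, then ask the Google
# generative model to produce one output per retrieved object.
response = articles.generate.near_text(
    query="renewable energy policy",
    single_prompt="Summarize this article in one sentence: {title}",  # {title} is a hypothetical property
    limit=2,
)

for obj in response.objects:
    print(obj.generated)

client.close()
```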

Google AI generative AI integration page

Summary

These integrations enable developers to leverage Google AI's powerful models directly within Weaviate.

In turn, they simplify the process of building AI-driven applications to speed up your development process, so that you can focus on creating innovative solutions.

Credentials

You must provide valid Google AI API credentials to Weaviate for these integrations.

Vertex AI

Added in Weaviate versions 1.24.16, 1.25.3, and 1.26.

You can save your Google Vertex AI credentials and have Weaviate generate the necessary tokens for you. This enables private deployments to use IAM service accounts that hold Google credentials.

To do so:

  • Set the USE_GOOGLE_AUTH environment variable to true.
  • Make the credentials available in one of the locations listed below.

Once appropriate credentials are found, Weaviate uses them to generate an access token and authenticates itself against Vertex AI. Upon token expiry, Weaviate generates a replacement access token.

In a containerized environment, you can mount the credentials file to the container. For example, you can mount the credentials file to the /etc/weaviate/ directory and set the GOOGLE_APPLICATION_CREDENTIALS environment variable to /etc/weaviate/google_credentials.json.
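For example, a Docker Compose deployment could combine the environment variables and the volume mount like this (a minimal fragment; the image tag and host-side file path are illustrative):

```yaml
services:
  weaviate:
    image: semitechnologies/weaviate:1.26.1   # tag is illustrative
    environment:
      USE_GOOGLE_AUTH: "true"
      GOOGLE_APPLICATION_CREDENTIALS: /etc/weaviate/google_credentials.json
    volumes:
      # Mount the service-account key file from the host into the container,
      # read-only, at the path referenced above.
      - ./google_credentials.json:/etc/weaviate/google_credentials.json:ro
```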

Search locations for Google Vertex AI credentials

Once USE_GOOGLE_AUTH is set to true, Weaviate will look for credentials in the following places, preferring the first location found:

  1. A JSON file whose path is specified by the GOOGLE_APPLICATION_CREDENTIALS environment variable. For workload identity federation, refer to the Google Cloud documentation on generating the JSON configuration file for on-prem/non-Google cloud platforms.
  2. A JSON file in a location known to the gcloud command-line tool. On Windows, this is %APPDATA%/gcloud/application_default_credentials.json. On other systems, $HOME/.config/gcloud/application_default_credentials.json.
  3. On Google App Engine standard first generation runtimes (<= Go 1.9) it uses the appengine.AccessToken function.
  4. On Google Compute Engine, Google App Engine standard second generation runtimes (>= Go 1.11), and Google App Engine flexible environment, it fetches credentials from the metadata server.
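For local development, location 2 above is commonly populated with the gcloud CLI:

```shell
# Opens a browser-based login and writes application_default_credentials.json
# to the gcloud config directory, where Weaviate (like other Google client
# libraries) can find it.
gcloud auth application-default login
```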

Get started

Weaviate integrates with both Google AI Studio and Google Vertex AI.

Go to the relevant integration page to learn how to configure Weaviate with the Google models and start using them in your applications.

Other third-party integrations

Weaviate integrates with third-party systems that provide a wide range of tools and services. For information on particular systems, see the Integrations documentation.

Questions and feedback

If you have any questions or feedback, let us know in the user forum.