
Model provider integrations

New Documentation

The model provider integration pages are new and still undergoing improvements. We appreciate any feedback on this forum thread.

Weaviate integrates with a variety of self-hosted and API-based models from a range of providers.

This enables an enhanced developer experience, such as the ability to:

  • Import objects directly into Weaviate without having to manually specify embeddings, and
  • Build an integrated retrieval augmented generation (RAG) pipeline with generative AI models, as sketched in the example after this list.

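The following is a minimal sketch of both points using the v4 Python client with the OpenAI integrations as an example. The collection name, property names, and API key header are illustrative; substitute the vectorizer and generative configuration for whichever provider you use.

```python
import weaviate
from weaviate.classes.config import Configure

# Connect to a local Weaviate instance; the provider API key is passed as a header.
client = weaviate.connect_to_local(
    headers={"X-OpenAI-Api-Key": "<your-openai-api-key>"}
)

# Configure the collection to vectorize objects at import time and to use
# a generative model for RAG queries.
client.collections.create(
    "Article",
    vectorizer_config=Configure.Vectorizer.text2vec_openai(),
    generative_config=Configure.Generative.openai(),
)

articles = client.collections.get("Article")

# Import without specifying embeddings: Weaviate calls the model provider for you.
articles.data.insert({"title": "A sample article", "body": "Some text to be vectorized."})

# Integrated RAG: retrieve relevant objects, then generate from them in one call.
response = articles.generate.near_text(
    query="sample article",
    limit=2,
    grouped_task="Summarize these articles in one sentence.",
)
print(response.generated)

client.close()
```
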
Model provider integrations

API-based

| Model provider | Embeddings       | Generative AI | Others   |
|----------------|------------------|---------------|----------|
| Anthropic      | -                | Text          | -        |
| Anyscale       | -                | Text          | -        |
| AWS            | Text             | Text          | -        |
| Cohere         | Text             | Text          | Reranker |
| Google         | Text, Multimodal | Text          | -        |
| Hugging Face   | Text             | -             | -        |
| Jina AI        | Text             | -             | -        |
| Mistral        | -                | Text          | -        |
| OctoAI         | Text             | Text          | -        |
| OpenAI         | Text             | Text          | -        |
| Azure OpenAI   | Text             | Text          | -        |
| Voyage AI      | Text             | -             | Reranker |

Enable all API-based modules

Experimental feature

Available starting in v1.26.0. This is an experimental feature. Use with caution.

You can enable all API-based integrations at once by setting the ENABLE_API_BASED_MODULES environment variable to true.

This makes all API-based model integrations available for use, such as those for Anthropic, Cohere, OpenAI, and so on. These modules are lightweight, so enabling them all will not significantly increase resource usage.
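
As a sketch, with Embedded Weaviate the v4 Python client can pass the variable directly; in a Docker or Kubernetes deployment, set it in the container environment instead. The version string and API key header below are illustrative assumptions.

```python
import weaviate

# Assumption: Embedded Weaviate via the v4 Python client. In Docker/Kubernetes,
# set ENABLE_API_BASED_MODULES=true in the container environment instead.
client = weaviate.connect_to_embedded(
    version="1.26.1",  # illustrative; any release from v1.26.0 onward supports this variable
    environment_variables={"ENABLE_API_BASED_MODULES": "true"},
    headers={"X-OpenAI-Api-Key": "<your-openai-api-key>"},  # keys for the providers you use
)

print(client.is_ready())  # True once the embedded instance is up
client.close()
```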

Read more about enabling all API-based modules.

Locally hosted

| Model provider | Embeddings              | Generative AI | Others |
|----------------|-------------------------|---------------|--------|
| GPT4All        | Text                    | -             | -      |
| Hugging Face   | Text, Multimodal (CLIP) | -             | -      |
| Meta ImageBind | Multimodal              | -             | -      |
| Ollama         | Text                    | Text          | -      |