Model provider integrations
The model provider integration pages are new and still undergoing improvements. We appreciate any feedback on this forum thread.
Weaviate integrates with a variety of self-hosted and API-based models from a range of providers.
This enables an enhanced developer experience, such as the ability to:
- Import objects directly into Weaviate without having to manually specify embeddings, and
- Build an integrated retrieval augmented generation (RAG) pipeline with generative AI models.
Model provider integrations
API-based
| Model provider | Embeddings | Generative AI | Others |
|---|---|---|---|
| Anthropic | - | Text | - |
| Anyscale | - | Text | - |
| AWS | Text | Text | - |
| Cohere | Text | Text | Reranker |
| Google | Text, Multimodal | Text | - |
| Hugging Face | Text | - | - |
| Jina AI | Text | - | - |
| Mistral | - | Text | - |
| OctoAI | Text | Text | - |
| OpenAI | Text | Text | - |
| Azure OpenAI | Text | Text | - |
| Voyage AI | Text | - | Reranker |
Enable all API-based modules
Available starting in `v1.26.0`. This is an experimental feature. Use with caution.
You can enable all API-based integrations at once by setting the `ENABLE_API_BASED_MODULES` environment variable to `true`.
This makes all API-based model integrations available for use, such as those for Anthropic, Cohere, OpenAI, and so on. These modules are lightweight, so enabling them all will not significantly increase resource usage.
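For example, in a Docker Compose deployment the environment variable can be set on the Weaviate service. The sketch below is illustrative: the image tag and port mappings are assumptions and should be adjusted to your setup.

```yaml
# docker-compose.yml (minimal sketch; adapt image tag and ports to your deployment)
services:
  weaviate:
    image: cr.weaviate.io/semitechnologies/weaviate:1.26.1
    ports:
      - "8080:8080"    # REST API
      - "50051:50051"  # gRPC API
    environment:
      # Enables all API-based model integrations at once
      ENABLE_API_BASED_MODULES: "true"
```

With this set, the individual API-based modules do not need to be listed one by one in the module configuration.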
Read more about enabling all API-based modules.
Locally hosted
| Model provider | Embeddings | Generative AI | Others |
|---|---|---|---|
| GPT4All | Text | - | - |
| Hugging Face | Text, Multimodal (CLIP) | - | - |
| Meta ImageBind | Multimodal | - | - |
| Ollama | Text | Text | - |