Integration Ecosystem
Find new ways to extend your applications and infrastructure with our integration ecosystem.
Microsoft Azure
Deploy Weaviate on the Azure Marketplace and access OpenAI models through the Azure OpenAI API.
Google Cloud Platform
Deploy Weaviate on the GCP Marketplace and seamlessly access models on Google AI Studio and Vertex AI.
Amazon Web Services
Deploy Weaviate on the AWS Marketplace and use the SageMaker and Bedrock APIs to access a variety of models.
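Whichever marketplace you deploy through, the Weaviate Python client can connect to the resulting cluster. The sketch below is a minimal example assuming the v4 Python client; the hostnames, ports, and environment variables are placeholders for your own deployment and model-provider keys.

```python
# Minimal connection sketch with the Weaviate Python client (v4).
# Endpoint, ports, and keys are placeholders for a marketplace-deployed cluster.
import os

import weaviate
from weaviate.classes.init import Auth

client = weaviate.connect_to_custom(
    http_host="my-cluster.example.com",        # hypothetical REST endpoint
    http_port=443,
    http_secure=True,
    grpc_host="grpc.my-cluster.example.com",   # hypothetical gRPC endpoint
    grpc_port=443,
    grpc_secure=True,
    auth_credentials=Auth.api_key(os.environ["WEAVIATE_API_KEY"]),
    # Forward a model-provider key so Weaviate's model integrations can call
    # out to the provider on your behalf.
    headers={"X-OpenAI-Api-Key": os.environ["OPENAI_API_KEY"]},
)

print(client.is_ready())  # sanity-check the connection
client.close()
```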
Context Data
Use Context Data's VectorETL framework to ingest data into Weaviate as a target connection.
Confluent
Fully managed Apache Kafka service for real-time streaming.
Astronomer
Managed Apache Airflow for data ingestion into Weaviate.
Databricks
Use models hosted on the Databricks platform and ingest structured Spark data from Databricks into Weaviate.
Unstructured
Platform and tools to work with unstructured data for RAG applications.
Replicate
Run machine learning models through a cloud API.
Composio
Manage and integrate tools with language models and AI agents.
Semantic Kernel
Framework by Microsoft for building large language model (LLM) applications.
Deepset
Use deepset's Haystack framework for a simple way to build LLM applications.
LlamaIndex
Framework for building large language model (LLM) applications and ingesting data.
LangChain
Framework for building large language model (LLM) applications.
DSPy
Framework for programming language model applications.
Anyscale
Connect to models hosted on Anyscale directly from Weaviate.
Hugging Face
Weaviate integrates with Hugging Face’s API and Transformers library.
VoyageAI
Access Voyage AI’s embedding and reranker models directly from Weaviate.
Anthropic
Access a variety of generative models from Anthropic.
Cohere
Use a variety of embedding, reranker, and language models from Cohere.
OpenAI
Use a variety of embedding and language models from OpenAI.
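The model integrations above are enabled through collection configuration. The following is a minimal sketch assuming the v4 Python client and a local Weaviate instance; the collection name and environment variables are placeholders, and equivalent `Configure` helpers exist for the other providers listed.

```python
# Sketch: wiring model providers into a Weaviate collection (Python client v4).
# Collection name and API keys are placeholders; the same pattern applies to
# other providers such as Voyage AI, Hugging Face, and Anthropic.
import os

import weaviate
from weaviate.classes.config import Configure

client = weaviate.connect_to_local(
    headers={
        "X-OpenAI-Api-Key": os.environ["OPENAI_API_KEY"],
        "X-Cohere-Api-Key": os.environ["COHERE_API_KEY"],
    }
)

client.collections.create(
    "Article",  # hypothetical collection name
    vectorizer_config=Configure.Vectorizer.text2vec_openai(),  # OpenAI embeddings
    reranker_config=Configure.Reranker.cohere(),               # Cohere reranker
    generative_config=Configure.Generative.openai(),           # OpenAI generative model
)

client.close()
```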
Weights and Biases
Weave by W&B offers data analysis APIs for monitoring applications.
Ragas
Evaluate your retrieval augmented generation (RAG) applications.
Nomic
Run GPT4All models and use Atlas to visualize vector embeddings.
LangWatch
Connect LangWatch to your Weaviate instance to log operational traces.
Langtrace
View vector search queries and generative calls that are made to your Weaviate cluster.
Arize AI
Log search queries sent to Weaviate and requests sent to LLM providers in Phoenix.
Become a Weaviate partner
Interested in learning more about the Weaviate Partner Program?
Fill out the form below to connect with us.