Generative Search - OpenAI

In short

  • The Generative OpenAI (generative-openai) module generates responses based on the data stored in your Weaviate instance.
  • The module can generate a response for each returned object, or a single response for a group of objects.
  • The module adds a generate {} parameter to the GraphQL _additional {} property of the Get {} queries.
  • Added in Weaviate v1.17.3.
  • The default OpenAI model is gpt-3.5-turbo, but other models (e.g. gpt-4) are supported.
  • For Azure OpenAI, a model must be specified.

Azure OpenAI or OpenAI?

tip

This module is compatible with both OpenAI and Azure OpenAI.

The instructions vary slightly based on whether you are using OpenAI directly or Azure OpenAI. Please make sure that you are following the right instructions for your service provider.

The differences are in:

  • Parameter names used in the schema, and
  • The name of the API key (environment variable or request header) to be used.

Introduction

generative-openai generates responses based on the data stored in your Weaviate instance.

The module works in two steps:

  1. (Weaviate) Run a search query in Weaviate to find relevant objects.
  2. (OpenAI) Use an OpenAI model to generate a response based on the results (from the previous step) and the provided prompt or task.

note

You can use the Generative OpenAI module with non-OpenAI upstream modules. For example, you could use text2vec-cohere or text2vec-huggingface to vectorize and query your data, but then rely on the generative-openai module to generate a response.

The generative module can provide results for:

  • each returned object - singleResult{ prompt }
  • the group of all results together - groupedResult{ task }

You need to provide both a search query and either a prompt (for individual responses) or a task (for a grouped response).

Inference API key

generative-openai requires an API key from OpenAI or Azure OpenAI.

tip

You only need to provide one of the two keys, depending on which service (OpenAI or Azure OpenAI) you are using.

Providing the key to Weaviate

You can provide your API key in two ways:

  1. During the configuration of your Docker instance, by adding OPENAI_APIKEY or AZURE_APIKEY as appropriate under environment to your docker-compose file, like this:

    environment:
      OPENAI_APIKEY: 'your-key-goes-here' # For use with OpenAI. Setting this parameter is optional; you can also provide the key at runtime.
      AZURE_APIKEY: 'your-key-goes-here' # For use with Azure OpenAI. Setting this parameter is optional; you can also provide the key at runtime.
      ...
  2. At run-time (recommended), by providing "X-OpenAI-Api-Key" or "X-Azure-Api-Key" through the request header. You can provide it using the Weaviate client, like this:

import weaviate

client = weaviate.Client(
    url="https://some-endpoint.weaviate.network/",
    additional_headers={
        "X-OpenAI-Api-Key": "YOUR-OPENAI-API-KEY",  # Replace with your API key
        "X-Azure-Api-Key": "YOUR-AZURE-API-KEY",  # Replace with your API key
    }
)

Module configuration

Not applicable to WCS

This module is enabled and pre-configured on Weaviate Cloud Services.

Configuration file (Weaviate open source only)

You can enable the Generative OpenAI module in your configuration file (e.g. docker-compose.yaml). Add the generative-openai module (alongside any other module you may need) to the ENABLE_MODULES property, like this:

ENABLE_MODULES: 'text2vec-openai,generative-openai'

Here is a full example of a Docker configuration, which uses the generative-openai module in combination with text2vec-openai:

---
version: '3.4'
services:
  weaviate:
    command:
    - --host
    - 0.0.0.0
    - --port
    - '8080'
    - --scheme
    - http
    image: semitechnologies/weaviate:1.19.6
    ports:
    - 8080:8080
    restart: on-failure:0
    environment:
      QUERY_DEFAULTS_LIMIT: 25
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
      DEFAULT_VECTORIZER_MODULE: 'text2vec-openai'
      ENABLE_MODULES: 'text2vec-openai,generative-openai'
      OPENAI_APIKEY: sk-foobar # For use with OpenAI. Setting this parameter is optional; you can also provide the key at runtime.
      AZURE_APIKEY: sk-foobar # For use with Azure OpenAI. Setting this parameter is optional; you can also provide the key at runtime.
      CLUSTER_HOSTNAME: 'node1'
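
Once the instance is running, you may want to confirm that the module is active. One way to do this (not part of the configuration steps above, so treat it as a minimal sketch) is to inspect the /v1/meta endpoint, which lists the enabled modules; the Python client exposes it via get_meta():

import weaviate

# Connect to the local instance started by the docker-compose file above
client = weaviate.Client("http://localhost:8080")

# The "modules" section of the meta information should include "generative-openai"
meta = client.get_meta()
print(meta.get("modules", {}).keys())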

Schema configuration

You can define settings for this module in the schema.

OpenAI vs Azure OpenAI

  • OpenAI users can optionally set the model parameter.
  • Azure OpenAI users must set the parameters resourceName and deploymentId.

Model parameters

You can also configure additional parameters for the generative model through the xxxProperty parameters shown below.

Example schema

For example, the following schema configuration sets Weaviate to use the generative-openai module with the Document class.

{
  "classes": [
    {
      "class": "Document",
      "description": "A class called document",
      ...,
      "moduleConfig": {
        "generative-openai": {
          "model": "gpt-3.5-turbo",                        // Optional - Defaults to `gpt-3.5-turbo`
          "resourceName": "<YOUR-RESOURCE-NAME>",          // For Azure OpenAI - Required
          "deploymentId": "<YOUR-MODEL-NAME>",             // For Azure OpenAI - Required
          "temperatureProperty": <temperature>,            // Optional, applicable to both OpenAI and Azure OpenAI
          "maxTokensProperty": <max_tokens>,               // Optional, applicable to both OpenAI and Azure OpenAI
          "frequencyPenaltyProperty": <frequency_penalty>, // Optional, applicable to both OpenAI and Azure OpenAI
          "presencePenaltyProperty": <presence_penalty>,   // Optional, applicable to both OpenAI and Azure OpenAI
          "topPProperty": <top_p>                          // Optional, applicable to both OpenAI and Azure OpenAI
        }
      }
    }
  ]
}
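
If you prefer to define the schema programmatically, the same configuration can be applied with the Python client's schema.create_class() method. The sketch below is illustrative only; the vectorizer and the omitted properties are assumptions, not part of the example above:

import weaviate

client = weaviate.Client("http://localhost:8080")

document_class = {
    "class": "Document",
    "description": "A class called document",
    "vectorizer": "text2vec-openai",  # assumption: vectorize with text2vec-openai
    "moduleConfig": {
        "generative-openai": {
            "model": "gpt-3.5-turbo"  # optional; omit to use the default
        }
    },
}

client.schema.create_class(document_class)
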
New to Weaviate Schemas?

If you are new to Weaviate, check out the Weaviate schema tutorial.

How to use

This module extends the _additional {...} property with a generate operator.

generate takes the following arguments:

| Field | Data type | Required | Example | Description |
| --- | --- | --- | --- | --- |
| singleResult {prompt} | string | no | Summarize the following in a tweet: {summary} | Generates a response for each individual search result. You need to include at least one result field in the prompt, between braces. |
| groupedResult {task} | string | no | Explain why these results are similar to each other | Generates a single response for all search results. |

Example of properties in the prompt

When piping the results to the prompt, at least one field returned by the query must be added to the prompt. If you don't add any fields, Weaviate will throw an error.

For example, assume your schema looks like this:

{
  Article {
    title
    summary
  }
}

You can add both title and summary to the prompt by enclosing them in curly brackets:

{
  Get {
    Article {
      title
      summary
      _additional {
        generate(
          singleResult: {
            prompt: """
            Summarize the following in a tweet:

            {title} - {summary}
            """
          }
        ) {
          singleResult
          error
        }
      }
    }
  }
}

Example - single result

Here is an example of a query where:

  • we run a vector search (with nearText) to find articles about "Italian food"
  • then we ask the generator module to describe each result as a Facebook ad.
    • the query asks for the summary field, which it then includes in the prompt argument of the generate operator.

{
  Get {
    Article(
      nearText: {
        concepts: ["Italian food"]
      }
      limit: 1
    ) {
      title
      summary
      _additional {
        generate(
          singleResult: {
            prompt: """
            Describe the following as a Facebook Ad: {summary}
            """
          }
        ) {
          singleResult
          error
        }
      }
    }
  }
}
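
Recent versions of the Python (v3) client can express the same query through the with_generate() helper. Parameter names may differ between client versions, so treat this as a sketch rather than a definitive API reference:

import weaviate

client = weaviate.Client(
    url="https://some-endpoint.weaviate.network/",
    additional_headers={"X-OpenAI-Api-Key": "YOUR-OPENAI-API-KEY"},  # Replace with your API key
)

# Vector search for "Italian food", then generate one response per result
response = (
    client.query
    .get("Article", ["title", "summary"])
    .with_near_text({"concepts": ["Italian food"]})
    .with_limit(1)
    .with_generate(single_prompt="Describe the following as a Facebook Ad: {summary}")
    .do()
)
print(response)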

Example response - single result

{
  "data": {
    "Get": {
      "Article": [
        {
          "_additional": {
            "generate": {
              "error": null,
              "singleResult": "This Facebook Ad will explore the fascinating history of Italian food and how it has evolved over time. Learn from Dr Eva Del Soldato and Diego Zancani, two experts in Italian food history, about how even the emoji for pasta isn't just pasta -- it's a steaming plate of spaghetti heaped with tomato sauce on top. Discover how Italy's complex history has shaped the Italian food we know and love today."
            }
          },
          "summary": "Even the emoji for pasta isn't just pasta -- it's a steaming plate of spaghetti heaped with tomato sauce on top. But while today we think of tomatoes as inextricably linked to Italian food, that hasn't always been the case. \"People tend to think Italian food was always as it is now -- that Dante was eating pizza,\" says Dr Eva Del Soldato , associate professor of romance languages at the University of Pennsylvania, who leads courses on Italian food history. In fact, she says, Italy's complex history -- it wasn't unified until 1861 -- means that what we think of Italian food is, for the most part, a relatively modern concept. Diego Zancani, emeritus professor of medieval and modern languages at Oxford University and author of \"How We Fell in Love with Italian Food,\" agrees.",
          "title": "How this fruit became the star of Italian cooking"
        }
      ]
    }
  }
}

Example - grouped result

Here is an example of a query where:

  • we run a vector search (with nearText) to find publications about finance,
  • then we ask the generator module to explain why these articles are about finance.

{
  Get {
    Publication(
      nearText: {
        concepts: ["magazine or newspaper about finance"]
        certainty: 0.75
      }
    ) {
      name
      _additional {
        generate(
          groupedResult: {
            task: "Explain why these magazines or newspapers are about finance"
          }
        ) {
          groupedResult
          error
        }
      }
    }
  }
}
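
As with the single-result example, the grouped query can be built with the Python client. Again, this is a sketch that assumes a recent v3 client exposing the with_generate() helper:

import weaviate

client = weaviate.Client(
    url="https://some-endpoint.weaviate.network/",
    additional_headers={"X-OpenAI-Api-Key": "YOUR-OPENAI-API-KEY"},  # Replace with your API key
)

# Vector search for finance publications, then generate a single grouped response
response = (
    client.query
    .get("Publication", ["name"])
    .with_near_text({"concepts": ["magazine or newspaper about finance"], "certainty": 0.75})
    .with_generate(grouped_task="Explain why these magazines or newspapers are about finance")
    .do()
)
print(response)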

Example response - grouped result

{
  "data": {
    "Get": {
      "Publication": [
        {
          "_additional": {
            "generate": {
              "error": null,
              "groupedResult": "The Financial Times, Wall Street Journal, and The New York Times Company are all about finance because they provide news and analysis on the latest financial markets, economic trends, and business developments. They also provide advice and commentary on personal finance, investments, and other financial topics."
            }
          },
          "name": "Financial Times"
        },
        {
          "_additional": {
            "generate": null
          },
          "name": "Wall Street Journal"
        },
        {
          "_additional": {
            "generate": null
          },
          "name": "The New York Times Company"
        }
      ]
    }
  }
}

Additional information

Supported models (OpenAI)

You can use any of the supported OpenAI models. The default is gpt-3.5-turbo; other models, such as gpt-4, are also supported.

The module also supports several legacy models, although these are not recommended.

More resources

If you can't find the answer to your question here, please check:

  1. The Frequently Asked Questions
  2. The knowledge base of old issues
  3. Stackoverflow, for questions
  4. The Weaviate Community Forum, for more involved discussion
  5. Our Slack channel