Retrieval Augmented Generation (RAG)
Retrieval Augmented Generation (RAG) combines information retrieval with generative AI models.
In Weaviate, a RAG query consists of two parts: a search query, and a prompt for the model. Weaviate first performs the search, then passes both the search results and your prompt to a generative AI model before returning the generated response.
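Conceptually, the two parts can be sketched in plain Python. This is an illustration of the flow only, not the Weaviate API: the retrieval step is a toy keyword-overlap ranking, and the generation step just assembles the model input instead of calling a model.

```python
# Illustrative sketch of the RAG flow, not the Weaviate API.
# A RAG query = (1) a search query that retrieves objects,
# followed by (2) a prompt sent to a generative model with those objects.

def rag_query(objects, search_terms, prompt, limit=2):
    # 1. Retrieve: rank stored objects by a toy keyword-overlap score
    def score(obj):
        return sum(term in obj["text"].lower() for term in search_terms.lower().split())
    hits = sorted(objects, key=score, reverse=True)[:limit]

    # 2. Generate: a real system would send the prompt plus the retrieved
    # objects to a generative model; here we only assemble the model input
    context = "\n".join(o["text"] for o in hits)
    return f"{prompt}\n\nRetrieved context:\n{context}"

objects = [
    {"text": "A sweet German Riesling with apricot notes"},
    {"text": "A bold Argentinian Malbec"},
    {"text": "A dry German Gewurztraminer"},
]
model_input = rag_query(objects, "sweet German wine", "Summarize these reviews")
```

In Weaviate, both steps happen server-side in a single query, as shown in the examples below.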
Configure a generative model provider
v1.30
To use RAG with a generative model integration, either:
- set a default configuration for the collection, and/or
- provide the settings as part of the query:
- Python Client v4
- JS/TS Client v3
- Go
- Java
from weaviate.classes.generate import GenerativeConfig
reviews = client.collections.get("WineReviewNV")
response = reviews.generate.near_text(
query="a sweet German white wine",
limit=2,
target_vector="title_country",
single_prompt="Translate this into German: {review_body}",
grouped_task="Summarize these reviews",
generative_provider=GenerativeConfig.openai(
temperature=0.1,
),
)
for o in response.objects:
print(f"Properties: {o.properties}")
print(f"Single prompt result: {o.generative.text}")
print(f"Grouped task result: {response.generative.text}")
import { generativeParameters } from 'weaviate-client';
const reviews = client.collections.use("WineReviewNV")
const searchResponse = await reviews.generate.nearText("a sweet German white wine", {
singlePrompt: {
prompt: "Translate this into German: {review_body}"
},
groupedTask: {
prompt: "Summarize these reviews"
},
config: generativeParameters.openAI({
model: "gpt-3.5-turbo",
}),
},{
limit: 2,
targetVector: "title_country",
})
for (const result of searchResponse.objects) {
console.log("Properties:", result.properties)
console.log("Single prompt result:", result.generative?.text )
console.log("Grouped task result:", searchResponse.generative?.text)
}
// Go support coming soon
// Java support coming soon
Example response
Properties: {'country': 'Austria', 'title': 'Gebeshuber 2013 Frizzante Rosé Pinot Noir (Österreichischer Perlwein)', 'review_body': "With notions of cherry and cinnamon on the nose and just slight fizz, this is a refreshing, fruit-driven sparkling rosé that's full of strawberry and cherry notes—it might just be the very definition of easy summer wine. It ends dry, yet refreshing.", 'points': 85, 'price': 21.0}
Single prompt result: Mit Noten von Kirsche und Zimt in der Nase und nur leicht prickelnd, ist dies ein erfrischender, fruchtiger sprudelnder Rosé, der voller Erdbeer- und Kirschnoten steckt - es könnte genau die Definition von leichtem Sommerwein sein. Er endet trocken, aber erfrischend.
Properties: {'price': 27.0, 'points': 89, 'review_body': 'Beautifully perfumed, with acidity, white fruits and a mineral context. The wine is layered with citrus and lime, hints of fresh pineapple acidity. Screw cap.', 'title': 'Stadt Krems 2009 Steinterrassen Riesling (Kremstal)', 'country': 'Austria'}
Single prompt result: Wunderschön parfümiert, mit Säure, weißen Früchten und einem mineralischen Kontext. Der Wein ist mit Zitrus- und Limettennoten durchzogen, mit Anklängen von frischer Ananas-Säure. Schraubverschluss.
Grouped task result: The first review is for the Gebeshuber 2013 Frizzante Rosé Pinot Noir from Austria, describing it as a refreshing and fruit-driven sparkling rosé with cherry and cinnamon notes. It is said to be the perfect easy summer wine, ending dry yet refreshing.
The second review is for the Stadt Krems 2009 Steinterrassen Riesling from Austria, noting its beautiful perfume, acidity, white fruits, and mineral context. The wine is described as layered with citrus and lime flavors, with hints of fresh pineapple acidity. It is sealed with a screw cap.
For more information on the available models and their additional options, see the model providers section.
Named vectors
v1.24
Any vector-based search on collections with named vectors configured must include a target
vector name in the query. This allows Weaviate to find the correct vector to compare with the query vector.
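The idea can be pictured in plain Python. This is an illustration only, not Weaviate's internals: an object in a named-vector collection stores several vectors under different names, and the target vector name selects which one is compared against the query vector.

```python
import math

# Illustration only: each object stores one vector per named vector space.
object_vectors = {
    "title_country": [0.1, 0.9, 0.0],
    "review_body": [0.7, 0.2, 0.4],
}
query_vector = [0.0, 1.0, 0.0]

def cosine(a, b):
    # Cosine similarity between two vectors
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Without a target vector name the comparison would be ambiguous;
# naming one resolves which stored vector to compare:
similarity = cosine(object_vectors["title_country"], query_vector)
```

Note that different named vectors of the same object can yield very different similarity scores, which is why the query must be explicit about the target.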
- Python Client v4
- Python Client v3
- JS/TS Client v3
- JS/TS Client v2
- GraphQL
from weaviate.classes.query import MetadataQuery
reviews = client.collections.get("WineReviewNV")
response = reviews.generate.near_text(
query="a sweet German white wine",
limit=2,
target_vector="title_country", # Specify the target vector for named vector collections
single_prompt="Translate this into German: {review_body}",
grouped_task="Summarize these reviews",
return_metadata=MetadataQuery(distance=True),
)
for o in response.objects:
print(f"Properties: {o.properties}")
print(f"Single prompt result: {o.generative.text}")
print(f"Grouped task result: {response.generative.text}")
# Unfortunately, named vectors are not supported in the v3 API / Python client.
# Please upgrade to the v4 API / Python client to use named vectors.
const myNVCollection = client.collections.use('WineReviewNV');
const result = await myNVCollection.generate.nearText('a sweet German white wine', {
singlePrompt: 'Translate this into German: {review_body}',
groupedTask: 'Summarize these reviews',
}, {
limit: 2,
targetVector: 'title_country',
}
);
console.log(result.generative?.text); // print groupedTask result
for (let object of result.objects) {
console.log(JSON.stringify(object.properties, null, 2));
console.log(object.generative?.text); // print singlePrompt result
}
result = await client.graphql
.get()
.withClassName('WineReviewNV')
.withNearText({
concepts: ['a sweet German white wine'],
targetVectors: ['title_country'],
})
.withGenerate({
singlePrompt: 'Translate this into German: {review_body}',
groupedTask: 'Summarize these reviews',
})
.withLimit(2)
.withFields('title review_body country')
.do();
console.log(JSON.stringify(result, null, 2));
{
Get {
WineReviewNV(
limit: 2
nearText: {
concepts: ["a sweet German white wine"]
targetVectors: ["title_country"]
}
) {
title
review_body
country
_additional {
generate(
singleResult: {
prompt: """
Translate this into German: {review_body}
"""
}
groupedResult: {
task: """
Summarize these reviews
"""
}
) {
singleResult
groupedResult
error
}
}
}
}
}
Example response
Properties: {'country': 'Austria', 'title': 'Gebeshuber 2013 Frizzante Rosé Pinot Noir (Österreichischer Perlwein)', 'review_body': "With notions of cherry and cinnamon on the nose and just slight fizz, this is a refreshing, fruit-driven sparkling rosé that's full of strawberry and cherry notes—it might just be the very definition of easy summer wine. It ends dry, yet refreshing.", 'points': 85, 'price': 21.0}
Single prompt result: Mit Noten von Kirsche und Zimt in der Nase und nur leicht prickelnd, ist dies ein erfrischender, fruchtiger sprudelnder Rosé, der voller Erdbeer- und Kirschnoten steckt - es könnte genau die Definition von leichtem Sommerwein sein. Er endet trocken, aber erfrischend.
Properties: {'price': 27.0, 'points': 89, 'review_body': 'Beautifully perfumed, with acidity, white fruits and a mineral context. The wine is layered with citrus and lime, hints of fresh pineapple acidity. Screw cap.', 'title': 'Stadt Krems 2009 Steinterrassen Riesling (Kremstal)', 'country': 'Austria'}
Single prompt result: Wunderschön parfümiert, mit Säure, weißen Früchten und einem mineralischen Kontext. Der Wein ist mit Zitrus- und Limettennoten durchzogen, mit Anklängen von frischer Ananas-Säure. Schraubverschluss.
Grouped task result: The first review is for the Gebeshuber 2013 Frizzante Rosé Pinot Noir from Austria, describing it as a refreshing and fruit-driven sparkling rosé with cherry and cinnamon notes. It is said to be the perfect easy summer wine, ending dry yet refreshing.
The second review is for the Stadt Krems 2009 Steinterrassen Riesling from Austria, noting its beautiful perfume, acidity, white fruits, and mineral context. The wine is described as layered with citrus and lime flavors, with hints of fresh pineapple acidity. It is sealed with a screw cap.
Single prompt search
Single prompt search returns a generated response for each object in the query results.
Define object properties using {prop-name} syntax to interpolate retrieved content in the prompt.
The properties you use in the prompt do not have to be among the properties you retrieve in the query.
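The substitution itself can be pictured as filling each {prop} placeholder from the full stored object, independently of which properties the query returns. The snippet below is an illustration of the idea only, not Weaviate's internal code:

```python
import re

# Illustration only: placeholders are filled from the stored object's
# properties, not just from the properties the query returns.
stored_object = {
    "question": "From Menes to the Ptolemys, this country had the most kings",
    "answer": "Egypt",
    "points": 400,
}
return_properties = ["question"]  # what the query sends back to you

prompt = "Convert this quiz question: {question} and answer: {answer} into a trivia tweet."
filled = re.sub(r"\{(\w+)\}", lambda m: str(stored_object[m.group(1)]), prompt)
```

Here the prompt uses answer even though only question is returned to the client.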
- Python Client v4
- Python Client v3
- JS/TS Client v3
- JS/TS Client v2
- Go
- GraphQL
prompt = (
"Convert this quiz question: {question} and answer: {answer} into a trivia tweet."
)
jeopardy = client.collections.get("JeopardyQuestion")
response = jeopardy.generate.near_text(
query="World history", limit=2, single_prompt=prompt
)
# print source properties and generated responses
for o in response.objects:
print(f"Properties: {o.properties}")
print(f"Single prompt result: {o.generative.text}")
generate_prompt = "Convert this quiz question: {question} and answer: {answer} into a trivia tweet."
response = (
client.query
.get("JeopardyQuestion")
.with_generate(single_prompt=generate_prompt)
.with_near_text({
"concepts": ["World history"]
})
.with_limit(2)
).do()
print(json.dumps(response, indent=2))
let response;
const jeopardy = client.collections.use('JeopardyQuestion');
const prompt = `Convert this quiz question: {question} and answer: {answer} into a trivia tweet.`
response = await jeopardy.generate.nearText('World history', {
singlePrompt: prompt
},{
limit: 2
})
for (let object of response.objects) {
console.log(JSON.stringify(object.properties, null, 2));
console.log(object.generative?.text); // print singlePrompt result
}
generatePrompt = 'Convert this quiz question: {question} and answer: {answer} into a trivia tweet.';
result = await client.graphql
.get()
.withClassName('JeopardyQuestion')
.withGenerate({
singlePrompt: generatePrompt,
})
.withNearText({
concepts: ['World history'],
})
.withFields('round')
.withLimit(2)
.do();
console.log(JSON.stringify(result, null, 2));
generatePrompt := "Convert this quiz question: {question} and answer: {answer} into a trivia tweet."
gs := graphql.NewGenerativeSearch().SingleResult(generatePrompt)
response, err := client.GraphQL().Get().
WithClassName("JeopardyQuestion").
WithFields(
graphql.Field{Name: "question"},
graphql.Field{Name: "answer"},
).
WithGenerativeSearch(gs).
WithNearText((&graphql.NearTextArgumentBuilder{}).
WithConcepts([]string{"World history"})).
WithLimit(2).
Do(ctx)
{
Get {
JeopardyQuestion (
nearText: {
concepts: ["World history"]
}
limit: 2
) {
_additional {
generate(
singleResult: {
prompt: """
Convert this quiz question: {question} and answer: {answer} into a trivia tweet.
"""
}
) {
singleResult
error
}
}
}
}
}
Example response
Property 'question': Including, in 19th century, one quarter of world's land & people, the sun never set on it
Single prompt result: Did you know that in the 19th century, one quarter of the world's land and people were part of an empire where the sun never set? ☀️🌍 #historybuffs #funfact
Property 'question': From Menes to the Ptolemys, this country had more kings than any other in ancient history
Single prompt result: Which country in ancient history had more kings than any other, from Menes to the Ptolemys? 👑🏛️ #historybuffs #ancientkings
Additional parameters
v1.30
You can use generative parameters to specify additional options when performing a single prompt search:
- Python Client v4
- JS/TS Client v3
- Go
- Java
from weaviate.classes.generate import GenerativeConfig, GenerativeParameters
prompt = GenerativeParameters.single_prompt(
prompt="Convert this quiz question: {question} and answer: {answer} into a trivia tweet.",
metadata=True,
debug=True,
)
jeopardy = client.collections.get("JeopardyQuestion")
response = jeopardy.generate.near_text(
query="World history",
limit=2,
single_prompt=prompt,
generative_provider=GenerativeConfig.openai()
)
# print source properties and generated responses
for o in response.objects:
print(f"Properties: {o.properties}")
print(f"Single prompt result: {o.generative.text}")
print(f"Debug: {o.generative.debug}")
print(f"Metadata: {o.generative.metadata}")
import { generativeParameters } from 'weaviate-client';
let response;
const jeopardy = client.collections.use('JeopardyQuestion');
const singlePromptConfig = {
prompt: "Convert this quiz question: {question} and answer: {answer} into a trivia tweet.",
metadata: true,
debug: true,
}
response = await jeopardy.generate.nearText("World history", {
singlePrompt: singlePromptConfig,
config : generativeParameters.openAI()
}, {
limit: 2,
})
// print source properties and generated responses
for ( const object of response.objects) {
console.log("Properties:", object.properties)
console.log("Single prompt result:", object.generative?.text)
console.log("Debug:", object.generative?.debug)
console.log("Metadata:", object.generative?.metadata)
}
// Go support coming soon
// Java support coming soon
Example response
Properties: {'points': 400, 'answer': 'the British Empire', 'air_date': datetime.datetime(1984, 12, 10, 0, 0, tzinfo=datetime.timezone.utc), 'question': "Including, in 19th century, one quarter of world's land & people, the sun never set on it", 'round': 'Double Jeopardy!'}
Single prompt result: Did you know that in the 19th century, the sun never set on the British Empire, which included one quarter of the world's land and people? #triviatuesday #britishempire
Debug: full_prompt: "Convert this quiz question: Including, in 19th century, one quarter of world\'s land & people, the sun never set on it and answer: the British Empire into a trivia tweet."
Metadata: usage {
prompt_tokens: 46
completion_tokens: 43
total_tokens: 89
}
Properties: {'points': 400, 'answer': 'Egypt', 'air_date': datetime.datetime(1989, 9, 5, 0, 0, tzinfo=datetime.timezone.utc), 'question': 'From Menes to the Ptolemys, this country had more kings than any other in ancient history', 'round': 'Double Jeopardy!'}
Single prompt result: Did you know that Egypt had more kings than any other country in ancient history, from Menes to the Ptolemys? #triviathursday #ancienthistory
Debug: full_prompt: "Convert this quiz question: From Menes to the Ptolemys, this country had more kings than any other in ancient history and answer: Egypt into a trivia tweet."
Metadata: usage {
prompt_tokens: 42
completion_tokens: 36
total_tokens: 78
}
Grouped task search
Grouped task search returns one response that includes all of the query results. By default, grouped task search uses all object properties in the prompt.
- Python Client v4
- Python Client v3
- JS/TS Client v3
- JS/TS Client v2
- Go
- GraphQL
task = "What do these animals have in common, if anything?"
jeopardy = client.collections.get("JeopardyQuestion")
response = jeopardy.generate.near_text(
query="Cute animals",
limit=3,
grouped_task=task,
)
# print the generated response
print(f"Grouped task result: {response.generative.text}")
generate_prompt = "What do these animals have in common, if anything?"
response = (
client.query
.get("JeopardyQuestion", ["points"])
.with_generate(grouped_task=generate_prompt)
.with_near_text({
"concepts": ["Cute animals"]
})
.with_limit(3)
).do()
print(json.dumps(response, indent=2))
let response;
const jeopardy = client.collections.use('JeopardyQuestion');
const groupedTaskPrompt = `What do these animals have in common, if anything?`;
response = await jeopardy.generate.nearText('Cute animals',{
groupedTask: groupedTaskPrompt
},{
limit: 3 }
)
console.log(response.generative?.text);
generatePrompt = 'What do these animals have in common, if anything?';
result = await client.graphql
.get()
.withClassName('JeopardyQuestion')
.withGenerate({
groupedTask: generatePrompt,
})
.withNearText({
concepts: ['Cute animals'],
})
.withFields('points')
.withLimit(3)
.do();
console.log(JSON.stringify(result, null, 2));
generatePrompt := "What do these animals have in common, if anything?"
gs := graphql.NewGenerativeSearch().GroupedResult(generatePrompt)
response, err := client.GraphQL().Get().
WithClassName("JeopardyQuestion").
WithFields(
graphql.Field{Name: "points"},
).
WithGenerativeSearch(gs).
WithNearText((&graphql.NearTextArgumentBuilder{}).
WithConcepts([]string{"Cute animals"})).
WithLimit(3).
Do(ctx)
{
Get {
JeopardyQuestion (
nearText: {
concepts: ["Cute animals"]
}
limit: 3
) {
points
_additional {
generate(
groupedResult: {
task: """
What do these animals have in common, if anything?
"""
}
) {
groupedResult
error
}
}
}
}
}
Example response
Grouped task result: All of these animals are mammals.
Set grouped task prompt properties
Define object properties to use in the prompt. This limits the information in the prompt and reduces prompt length.
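The effect can be pictured with plain Python. This is an illustration only, not Weaviate's internal code: serializing fewer properties per object yields a shorter prompt context.

```python
def build_context(objects, properties=None):
    """Serialize objects for a grouped task, optionally restricted to some properties."""
    rows = []
    for o in objects:
        # Keep only the selected properties (all of them when none are given)
        keep = properties if properties is not None else list(o)
        rows.append(", ".join(f"{p}: {o[p]}" for p in keep))
    return "\n".join(rows)

objects = [
    {"question": "Australians call this animal a jumbuck", "answer": "sheep",
     "air_date": "2007-12-13", "points": 800, "round": "Jeopardy!"},
]
full = build_context(objects)
trimmed = build_context(objects, properties=["answer", "question"])
```

With three objects and several long properties each, the difference in prompt length (and therefore token cost) grows quickly.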
- Python Client v4
- Python Client v3
- JS/TS Client v3
- JS/TS Client v2
- Go
- GraphQL
task = "What do these animals have in common, if anything?"
jeopardy = client.collections.get("JeopardyQuestion")
response = jeopardy.generate.near_text(
query="Australian animals",
limit=3,
grouped_task=task,
grouped_properties=["answer", "question"],
)
# print the generated response
for o in response.objects:
print(f"Properties: {o.properties}")
print(f"Grouped task result: {response.generative.text}")
generate_prompt = "What do these animals have in common, if anything?"
response = (
client.query
.get("JeopardyQuestion", ["question points"])
.with_generate(
grouped_task=generate_prompt,
grouped_properties=["answer", "question"] # available since client version 3.19.2
)
.with_near_text({
"concepts": ["Australian animals"]
})
.with_limit(3)
).do()
print(json.dumps(response, indent=2))
let response;
const jeopardy = client.collections.use('JeopardyQuestion');
const generatePrompt = `What do these animals have in common, if anything?`;
response = await jeopardy.generate.nearText('Australian animals', {
groupedTask: generatePrompt,
groupedProperties: ['answer', 'question'],
},{
limit: 3
})
console.log(response.generative?.text);
generatePrompt = 'What do these animals have in common, if anything?';
result = await client.graphql
.get()
.withClassName('JeopardyQuestion')
.withGenerate({
groupedTask: generatePrompt,
groupedProperties: ['answer', 'question'], // available since client version 1.3.2
})
.withNearText({
concepts: ['Australian animals'],
})
.withFields('question points')
.withLimit(3)
.do();
console.log(JSON.stringify(result, null, 2));
generatePrompt := "What do these animals have in common, if anything?"
gs := graphql.NewGenerativeSearch().GroupedResult(generatePrompt, "answer", "question")
response, err := client.GraphQL().Get().
WithClassName("JeopardyQuestion").
WithFields(
graphql.Field{Name: "question"},
graphql.Field{Name: "points"},
).
WithGenerativeSearch(gs).
WithNearText((&graphql.NearTextArgumentBuilder{}).
WithConcepts([]string{"Australian animals"})).
WithLimit(3).
Do(ctx)
{
Get {
JeopardyQuestion (
nearText: {
concepts: ["Australian animals"]
}
limit: 3
) {
question
points
_additional {
generate(
groupedResult: {
task: """
What do these animals have in common, if anything?
"""
properties: ["answer", "question"]
}
) {
groupedResult
error
}
}
}
}
}
Example response
Grouped task result: The commonality among these animals is that they are all native to Australia.
Additional parameters
v1.30
You can use generative parameters to specify additional options when performing grouped tasks:
- Python Client v4
- JS/TS Client v3
- Go
- Java
from weaviate.classes.generate import GenerativeConfig, GenerativeParameters
grouped_task = GenerativeParameters.grouped_task(
prompt="What do these animals have in common, if anything?",
metadata=True,
)
jeopardy = client.collections.get("JeopardyQuestion")
response = jeopardy.generate.near_text(
query="Cute animals",
limit=3,
grouped_task=grouped_task,
generative_provider=GenerativeConfig.openai()
)
# print the generated response
print(f"Grouped task result: {response.generative.text}")
print(f"Metadata: {response.generative.metadata}")
import { generativeParameters } from 'weaviate-client';
let response;
const jeopardy = client.collections.use('JeopardyQuestion');
const groupedTaskPrompt = {
prompt: "What do these animals have in common, if anything?",
metadata: true,
}
response = await jeopardy.generate.nearText("Cute animals", {
groupedTask: groupedTaskPrompt,
config: generativeParameters.openAI()
}, {
limit: 3,
})
// print the generated response
console.log("Grouped task result:", response.generative?.text)
console.log("Metadata:", response.generative?.metadata)
// Go support coming soon
// Java support coming soon
Example response
Grouped task result: They are all animals.
Metadata: usage {
prompt_tokens: 42
completion_tokens: 36
total_tokens: 78
}
Working with images
You can also supply images as part of the input when performing retrieval augmented generation, in both single prompts and grouped tasks. The following fields are available for generative search with images:
- images: A list of base64 encoded strings of the image bytes.
- image_properties: Names of the properties in Weaviate that store images, for additional context.
- Python Client v4
- JS/TS Client v3
- Go
- Java
import base64
import requests
from weaviate.classes.generate import GenerativeConfig, GenerativeParameters
src_img_path = "https://upload.wikimedia.org/wikipedia/commons/thumb/4/49/Koala_climbing_tree.jpg/500px-Koala_climbing_tree.jpg"
base64_image = base64.b64encode(requests.get(src_img_path).content).decode('utf-8')
prompt = GenerativeParameters.grouped_task(
prompt="Formulate a Jeopardy!-style question about this image",
images=[base64_image], # A list of base64 encoded strings of the image bytes
# image_properties=["img"], # Properties containing images in Weaviate
)
jeopardy = client.collections.get("JeopardyQuestion")
response = jeopardy.generate.near_text(
query="Australian animals",
limit=3,
grouped_task=prompt,
grouped_properties=["answer", "question"],
generative_provider=GenerativeConfig.anthropic(
max_tokens=1000
),
)
# Print the source property and the generated response
for o in response.objects:
print(f"Properties: {o.properties}")
print(f"Grouped task result: {response.generative.text}")
import { generativeParameters } from 'weaviate-client';
let response;
const jeopardy = client.collections.use('JeopardyQuestion');
function arrayBufferToBase64(buffer: ArrayBuffer): string {
const bytes = new Uint8Array(buffer);
let binary = '';
const chunkSize = 1024; // Process in chunks to avoid call stack issues
for (let i = 0; i < bytes.length; i += chunkSize) {
const chunk = bytes.slice(i, Math.min(i + chunkSize, bytes.length));
binary += String.fromCharCode.apply(null, Array.from(chunk));
}
return btoa(binary);
}
const srcImgPath = "https://upload.wikimedia.org/wikipedia/commons/thumb/4/49/Koala_climbing_tree.jpg/500px-Koala_climbing_tree.jpg"
const responseImg = await fetch(srcImgPath);
const image = await responseImg.arrayBuffer();
const base64String = arrayBufferToBase64(image);
const prompt = {
prompt: "Formulate a Jeopardy!-style question about this image",
images: [base64String], // A list of base64 encoded strings of the image bytes
// imageProperties: ["img"], // Properties containing images in Weaviate
}
response = await jeopardy.generate.nearText("Australian animals", {
groupedTask: prompt,
groupedProperties: ["answer", "question"],
config: generativeParameters.anthropic({
maxTokens: 1000
}),
}, {
limit: 3,
})
// Print the source property and the generated response
for (const item of response.objects) {
console.log("Properties:", item.properties)
}
console.log("Grouped task result:", response.generative?.text)
// Go support coming soon
// Java support coming soon
Example response
Properties: {'points': 800, 'answer': 'sheep', 'air_date': datetime.datetime(2007, 12, 13, 0, 0, tzinfo=datetime.timezone.utc), 'question': 'Australians call this animal a jumbuck or a monkey', 'round': 'Jeopardy!'}
Properties: {'points': 100, 'answer': 'Australia', 'air_date': datetime.datetime(2000, 3, 10, 0, 0, tzinfo=datetime.timezone.utc), 'question': 'An island named for the animal seen <a href="http://www.j-archive.com/media/2000-03-10_J_01.jpg" target="_blank">here</a> belongs to this country [kangaroo]', 'round': 'Jeopardy!'}
Properties: {'points': 300, 'air_date': datetime.datetime(1996, 7, 18, 0, 0, tzinfo=datetime.timezone.utc), 'answer': 'Kangaroo', 'question': 'Found chiefly in Australia, the wallaby is a smaller type of this marsupial', 'round': 'Jeopardy!'}
Grouped task result: I'll formulate a Jeopardy!-style question based on the image of the koala:
Answer: This Australian marsupial, often mistakenly called a bear, spends most of its time in eucalyptus trees.
Question: What is a koala?
Questions and feedback
If you have any questions or feedback, let us know in the user forum.