multi2vec-clip
Introduction
The multi2vec-clip module allows you to use a pre-trained Sentence-BERT CLIP model as a Weaviate vectorization module. To use CLIP with Weaviate, the multi2vec-clip module needs to be enabled. The model runs in a separate inference container, which allows for efficient scaling and resource planning: neural-network-based models run most efficiently on GPU-enabled servers, while Weaviate is CPU-optimized. This separate-container microservice setup lets you easily host (and scale) the model independently on GPU-enabled hardware while keeping Weaviate on cheap CPU-only hardware.
To choose your specific model, you simply need to select the correct Docker container. There is a selection of pre-built Docker images available, but you can also build your own with a simple two-line Dockerfile.
How to use
You have three options to select your desired model:
- Use any of our pre-built CLIP model containers. These models are pre-built by us and packed in a container. (If you think we should support another model out-of-the-box, please open an issue or pull request here.)
- Use any SBERT CLIP model from the Hugging Face Model Hub. Click here to learn how. The multi2vec-clip module supports any CLIP-based transformer model compatible with SentenceTransformers.
- Use any private or local SBERT CLIP model. Click here to learn how. If you have your own CLIP-based SentenceTransformers model in a registry or on a local disk, you can use this with Weaviate.
Option 1: Use a pre-built transformer model container
Example docker-compose file
Note: you can also use the Weaviate configuration tool.
You can find an example Docker Compose file below, which will spin up Weaviate with the multi2vec-clip module. In this example we have selected the sentence-transformers/clip-ViT-B-32-multilingual-v1 model, which works great for vectorizing images and text in the same vector space. It even supports multiple languages. See below for how to select an alternative model.
---
version: '3.4'
services:
  weaviate:
    command:
      - --host
      - 0.0.0.0
      - --port
      - '8080'
      - --scheme
      - http
    image: semitechnologies/weaviate:1.19.6
    ports:
      - 8080:8080
    restart: on-failure:0
    environment:
      CLIP_INFERENCE_API: 'http://multi2vec-clip:8080'
      QUERY_DEFAULTS_LIMIT: 25
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
      DEFAULT_VECTORIZER_MODULE: 'multi2vec-clip'
      ENABLE_MODULES: 'multi2vec-clip'
      CLUSTER_HOSTNAME: 'node1'
  multi2vec-clip:
    image: semitechnologies/multi2vec-clip:sentence-transformers-clip-ViT-B-32-multilingual-v1
    environment:
      ENABLE_CUDA: '0'
...
Note that running Weaviate with the multi2vec-clip module without a GPU will be slower than with a GPU. Enable CUDA if you have a GPU available (ENABLE_CUDA=1).
Alternative: configure your custom setup
Note: The following steps are only required if you want to manually add the module to an existing setup. If you are starting from scratch, a much more convenient option is to use our configuration and customization tool.
Step 1: Enable the multi2vec-clip module
Make sure you set the ENABLE_MODULES=multi2vec-clip environment variable. Additionally, make this module the default vectorizer, so you don't have to specify it on each schema class: DEFAULT_VECTORIZER_MODULE=multi2vec-clip
Step 2: Run your favorite model
Choose any of our pre-built CLIP models (for building your own model container, see below) and spin it up with your setup.
Use a CUDA-enabled machine for optimal performance.
Step 3: Tell Weaviate where to find the inference container
Set the Weaviate environment variable CLIP_INFERENCE_API to where your inference container is running, for example CLIP_INFERENCE_API="http://multi2vec-clip:8000" (adjust hostname and port accordingly).
You can now use Weaviate normally, and all vectorization during import and search time will be done with the selected CLIP transformers model.
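To verify the module is wired up correctly, you can inspect Weaviate's meta endpoint, for example with the Python client (a quick sanity check, assuming Weaviate runs on localhost:8080):

import weaviate

client = weaviate.Client("http://localhost:8080")

# Enabled modules are listed under the "modules" key of the meta response
meta = client.get_meta()
print("multi2vec-clip enabled:", "multi2vec-clip" in meta.get("modules", {}))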
Option 2: Use any publicly available SBERT CLIP model from the Hugging Face Model Hub
You can build a Docker image which supports any model from the Hugging Face model hub with a two-line Dockerfile. In the following example, we are going to build a custom image for the clip-ViT-B-32 model. Note: this model exists as a pre-built container, so you don't have to build it yourself. This is just to outline the process.
Step 1: Create a Dockerfile
Create a new Dockerfile. We will name it clip.Dockerfile. Add the following lines to it:
FROM semitechnologies/multi2vec-clip:custom
RUN CLIP_MODEL_NAME=clip-ViT-B-32 TEXT_MODEL_NAME=clip-ViT-B-32 ./download.py
Step 2: Build and tag your Dockerfile
We will tag the resulting image as clip-inference:
docker build -f clip.Dockerfile -t clip-inference .
Step 3: That's it!
You can now push your image to your favorite registry or reference it locally in your Weaviate docker-compose.yaml using the Docker tag clip-inference.
Option 3: Custom build with a private or local model
You can build a Docker image which supports any model that is compatible with SentenceTransformers' CLIPModel. Additionally, the text model can be a regular sentence-transformers model, but it must produce compatible vector representations. So, only use models that have been specifically trained for use with CLIP models.
In the following example, we are going to build a custom image for a non-public model which we have stored locally at ./my-clip-model and ./my-text-model. Both models were trained to produce embeddings which are compatible with one another.
Create a new Dockerfile (you do not need to clone this repository, any folder on your machine is fine). We will name it my-models.Dockerfile. Add the following lines to it:
FROM semitechnologies/multi2vec-clip:custom
COPY ./my-text-model /app/models/text
COPY ./my-clip-model /app/models/clip
The above will make sure that your models end up in the image at /app/models/clip and /app/models/text respectively. These paths are important, so that the application can find the models.
Now you just need to build and tag your image; we will tag it as my-models-inference:
$ docker build -f my-models.Dockerfile -t my-models-inference .
That's it! You can now push your image to your favorite registry or reference it locally in your Weaviate docker-compose.yaml using the Docker tag my-models-inference.
To debug if your inference container is working correctly, you can send queries to the vectorizer module's inference container directly, so you can see exactly what vectors it would produce for which input. To do so, you need to expose the inference container. In your Docker Compose file, add something like
ports:
  - "9090:8080"
to your multi2vec-clip service.
Then you can send REST requests to it directly, e.g. curl localhost:9090/vectorize -d '{"texts": ["foo bar"], "images":[]}', and it will print the created vector(s) directly.
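The same check can be done from Python with the requests library. A minimal sketch, assuming the container is exposed on port 9090 as above (the /vectorize endpoint and payload shape are taken from the curl example):

import requests

# Send text (and optionally base64-encoded images) straight to the inference container
payload = {"texts": ["foo bar"], "images": []}
response = requests.post("http://localhost:9090/vectorize", json=payload)
response.raise_for_status()
print(response.json())  # the vector(s) the module would produce for this input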
Schema Configuration for CLIP-vectorized Classes
The following is a valid payload for a class that vectorizes both images and text fields using the multi2vec-clip module as a vectorizer:
{
  "classes": [
    {
      "class": "ClipExample",
      "moduleConfig": {
        "multi2vec-clip": {
          "imageFields": [
            "image"
          ],
          "textFields": [
            "name"
          ],
          "weights": {
            "textFields": [0.7],
            "imageFields": [0.3]
          }
        }
      },
      "properties": [
        {
          "dataType": [
            "text"
          ],
          "name": "name"
        },
        {
          "dataType": [
            "blob"
          ],
          "name": "image"
        }
      ],
      "vectorIndexType": "hnsw",
      "vectorizer": "multi2vec-clip"
    }
  ]
}
Note that:
- imageFields and textFields in moduleConfig.multi2vec-clip do not both need to be set. However, at least one of them must be set.
- weights in moduleConfig.multi2vec-clip is optional. If only a single property is set, that property takes all the weight. If multiple properties exist and no weights are specified, the properties are weighted equally.
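If you prefer to create the class programmatically, you can send the same definition with the Python client. A minimal sketch (the class object is copied from the payload above; note that create_class takes a single class object, not the surrounding "classes" wrapper):

import weaviate

client = weaviate.Client("http://localhost:8080")

class_obj = {
    "class": "ClipExample",
    "moduleConfig": {
        "multi2vec-clip": {
            "imageFields": ["image"],
            "textFields": ["name"],
            "weights": {
                "textFields": [0.7],
                "imageFields": [0.3]
            }
        }
    },
    "properties": [
        {"dataType": ["text"], "name": "name"},
        {"dataType": ["blob"], "name": "image"}
    ],
    "vectorIndexType": "hnsw",
    "vectorizer": "multi2vec-clip"
}

client.schema.create_class(class_obj)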
You can then import data objects for the class as usual. Fill the text or string fields with text and/or fill the blob fields with a base64-encoded image.
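For example, with the Python client (a sketch against the ClipExample class defined above; my_image_path.png and the text value are placeholder examples):

import weaviate

client = weaviate.Client("http://localhost:8080")

# blob properties expect base64-encoded content; the client ships a helper for this
encoded_image = weaviate.util.image_encoder_b64("my_image_path.png")

client.data_object.create(
    data_object={
        "name": "A red summer dress",  # hypothetical text value
        "image": encoded_image,
    },
    class_name="ClipExample",
)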
Limitations
- As of v1.9.0, the module requires explicit creation of a class. If you rely on auto-schema to create the class for you, it will be missing the required configuration specifying which fields should be vectorized. This will be addressed in a future release. You can always manually update a class schema config that was incorrectly created by auto-schema and fix it yourself.
Additional GraphQL API parameters
nearText
The multi2vec-clip vectorizer module adds two search operators for the Get {} and Explore {} GraphQL functions: nearText: {} and nearImage: {}. These operators can be used for semantically searching both text and images in your dataset.
Note: In the same query, you cannot use multiple 'near' filters, or a 'near' filter along with an 'ask' filter!
Example GraphQL Get(nearText{}) operator
- GraphQL
- Python
- JavaScript
- Go
- Java
- Curl
{
  Get{
    Publication(
      nearText: {
        concepts: ["fashion"],
        distance: 0.6 # prior to v1.14 use "certainty" instead of "distance"
        moveAwayFrom: {
          concepts: ["finance"],
          force: 0.45
        },
        moveTo: {
          concepts: ["haute couture"],
          force: 0.85
        }
      }
    ){
      name
      _additional {
        certainty # only supported if distance==cosine
        distance # always supported
      }
    }
  }
}
import weaviate

client = weaviate.Client("http://localhost:8080")

nearText = {
    "concepts": ["fashion"],
    "distance": 0.6,  # prior to v1.14 use "certainty" instead of "distance"
    "moveAwayFrom": {
        "concepts": ["finance"],
        "force": 0.45
    },
    "moveTo": {
        "concepts": ["haute couture"],
        "force": 0.85
    }
}

result = (
    client.query
    .get("Publication", "name")
    .with_additional(["certainty", "distance"])  # certainty is only supported if distance==cosine
    .with_near_text(nearText)
    .do()
)
print(result)
const weaviate = require('weaviate-client');

const client = weaviate.client({
  scheme: 'http',
  host: 'localhost:8080',
});

client.graphql
  .get()
  .withClassName('Publication')
  .withFields('name _additional{certainty distance}') // note that certainty is only supported if distance==cosine
  .withNearText({
    concepts: ['fashion'],
    distance: 0.6, // prior to v1.14 use certainty instead of distance
    moveAwayFrom: {
      concepts: ['finance'],
      force: 0.45,
    },
    moveTo: {
      concepts: ['haute couture'],
      force: 0.85,
    },
  })
  .do()
  .then(console.log)
  .catch(console.error);
package main

import (
  "context"
  "fmt"

  "github.com/weaviate/weaviate-go-client/v4/weaviate"
  "github.com/weaviate/weaviate-go-client/v4/weaviate/graphql"
)

func main() {
  cfg := weaviate.Config{
    Host:   "localhost:8080",
    Scheme: "http",
  }
  client, err := weaviate.NewClient(cfg)
  if err != nil {
    panic(err)
  }

  className := "Publication"

  name := graphql.Field{Name: "name"}
  _additional := graphql.Field{
    Name: "_additional", Fields: []graphql.Field{
      {Name: "certainty"}, // only supported if distance==cosine
      {Name: "distance"},  // always supported
    },
  }

  concepts := []string{"fashion"}
  distance := float32(0.6)
  moveAwayFrom := &graphql.MoveParameters{
    Concepts: []string{"finance"},
    Force:    0.45,
  }
  moveTo := &graphql.MoveParameters{
    Concepts: []string{"haute couture"},
    Force:    0.85,
  }
  nearText := client.GraphQL().NearTextArgBuilder().
    WithConcepts(concepts).
    WithDistance(distance). // use WithCertainty(certainty) prior to v1.14
    WithMoveTo(moveTo).
    WithMoveAwayFrom(moveAwayFrom)

  ctx := context.Background()

  result, err := client.GraphQL().Get().
    WithClassName(className).
    WithFields(name, _additional).
    WithNearText(nearText).
    Do(ctx)
  if err != nil {
    panic(err)
  }
  fmt.Printf("%v", result)
}
package io.weaviate;

import io.weaviate.client.Config;
import io.weaviate.client.WeaviateClient;
import io.weaviate.client.base.Result;
import io.weaviate.client.v1.graphql.model.GraphQLResponse;
import io.weaviate.client.v1.graphql.query.argument.NearTextArgument;
import io.weaviate.client.v1.graphql.query.argument.NearTextMoveParameters;
import io.weaviate.client.v1.graphql.query.fields.Field;

public class App {
  public static void main(String[] args) {
    Config config = new Config("http", "localhost:8080");
    WeaviateClient client = new WeaviateClient(config);

    NearTextMoveParameters moveTo = NearTextMoveParameters.builder()
      .concepts(new String[]{ "haute couture" }).force(0.85f).build();

    NearTextMoveParameters moveAway = NearTextMoveParameters.builder()
      .concepts(new String[]{ "finance" }).force(0.45f)
      .build();

    NearTextArgument nearText = client.graphQL().arguments().nearTextArgBuilder()
      .concepts(new String[]{ "fashion" })
      .distance(0.6f) // use .certainty(0.7f) prior to v1.14
      .moveTo(moveTo)
      .moveAwayFrom(moveAway)
      .build();

    Field name = Field.builder().name("name").build();
    Field _additional = Field.builder()
      .name("_additional")
      .fields(new Field[]{
        Field.builder().name("certainty").build(), // only supported if distance==cosine
        Field.builder().name("distance").build()   // always supported
      }).build();

    Result<GraphQLResponse> result = client.graphQL().get()
      .withClassName("Publication")
      .withFields(name, _additional)
      .withNearText(nearText)
      .run();

    if (result.hasErrors()) {
      System.out.println(result.getError());
      return;
    }
    System.out.println(result.getResult());
  }
}
$ echo '{
  "query": "{
    Get{
      Publication(
        nearText: {
          concepts: [\"fashion\"],
          distance: 0.6, # use certainty instead of distance prior to v1.14
          moveAwayFrom: {
            concepts: [\"finance\"],
            force: 0.45
          },
          moveTo: {
            concepts: [\"haute couture\"],
            force: 0.85
          }
        }
      ){
        name
        _additional {
          certainty # only supported if distance==cosine
          distance # always supported
        }
      }
    }
  }"
}' | curl \
    -X POST \
    -H 'Content-Type: application/json' \
    -d @- \
    http://localhost:8080/v1/graphql
Try out this GraphQL example in the Weaviate Console.
Example GraphQL Get(nearImage{}) operator
- GraphQL
- Python
- JavaScript
- Go
- Java
- Curl
{
  Get {
    FashionItem(nearImage: {
      image: "/9j/4AAQSkZJRgABAgE..."
    }) {
      image
    }
  }
}
import weaviate

client = weaviate.Client("http://localhost:8080")

nearImage = {"image": "/9j/4AAQSkZJRgABAgE..."}

result = (
    client.query
    .get("FashionItem", "image")
    .with_near_image(nearImage, encode=False)  # the value is already base64-encoded
    .do()
)
print(result)
const weaviate = require('weaviate-client');

const client = weaviate.client({
  scheme: 'http',
  host: 'localhost:8080',
});

client.graphql
  .get()
  .withClassName('FashionItem')
  .withFields('image')
  .withNearImage({ image: '/9j/4AAQSkZJRgABAgE...' })
  .do()
  .then(res => {
    console.log(res);
  })
  .catch(err => {
    console.error(err);
  });
package main

import (
  "context"
  "fmt"

  "github.com/weaviate/weaviate-go-client/v4/weaviate"
  "github.com/weaviate/weaviate-go-client/v4/weaviate/graphql"
)

func main() {
  cfg := weaviate.Config{
    Host:   "localhost:8080",
    Scheme: "http",
  }
  client, err := weaviate.NewClient(cfg)
  if err != nil {
    panic(err)
  }

  className := "FashionItem"
  image := graphql.Field{Name: "image"}
  nearImage := client.GraphQL().NearImageArgBuilder().WithImage("/9j/4AAQSkZJRgABAgE...")

  ctx := context.Background()

  result, err := client.GraphQL().Get().
    WithClassName(className).
    WithFields(image).
    WithNearImage(nearImage).
    Do(ctx)
  if err != nil {
    panic(err)
  }
  fmt.Printf("%v", result)
}
package io.weaviate;

import io.weaviate.client.Config;
import io.weaviate.client.WeaviateClient;
import io.weaviate.client.base.Result;
import io.weaviate.client.v1.graphql.model.GraphQLResponse;
import io.weaviate.client.v1.graphql.query.argument.NearImageArgument;
import io.weaviate.client.v1.graphql.query.fields.Field;

public class App {
  public static void main(String[] args) {
    Config config = new Config("http", "localhost:8080");
    WeaviateClient client = new WeaviateClient(config);

    String className = "FashionItem";
    Field image = Field.builder().name("image").build();

    NearImageArgument nearImage = client.graphQL().arguments().nearImageArgBuilder()
      .image("/9j/4AAQSkZJRgABAgE...")
      .build();

    Result<GraphQLResponse> result = client.graphQL().get()
      .withClassName(className)
      .withFields(image)
      .withNearImage(nearImage)
      .run();

    if (result.hasErrors()) {
      System.out.println(result.getError());
      return;
    }
    System.out.println(result.getResult());
  }
}
$ echo '{
  "query": "{
    Get {
      FashionItem(nearImage: {
        image: \"/9j/4AAQSkZJRgABAgE...\"
      }) {
        image
      }
    }
  }"
}' | curl \
    -X POST \
    -H 'Content-Type: application/json' \
    -d @- \
    http://localhost:8080/v1/graphql
Alternatively, you can use a helper function in the Python, Java or Go client (not in the JavaScript client). With an encoder function, you can input your image as a png file, and the helper function encodes it to a base64 value.
- GraphQL
- Python
- JavaScript
- Go
- Java
- Curl
# GraphQL doesn't support png->base64 encoding, so please use a base64-encoded image in your query
{
  Get {
    FashionItem(nearImage: {
      image: "/9j/4AAQSkZJRgABAgE..."
    }) {
      image
    }
  }
}
import weaviate

client = weaviate.Client("http://localhost:8080")

nearImage = {"image": "my_image_path.png"}

result = (
    client.query
    .get("FashionItem", "image")
    .with_near_image(nearImage, encode=True)  # encodes the file to base64 for you
    .do()
)
print(result)

## OR use the weaviate.util function:

client = weaviate.Client("http://localhost:8080")

encoded_image = weaviate.util.image_encoder_b64("my_image_path.png")
nearImage = {"image": encoded_image}

result = (
    client.query
    .get("FashionItem", "image")
    .with_near_image(nearImage, encode=False)  # the value is already base64-encoded
    .do()
)
print(result)
// The JavaScript client doesn't support image encoding via a helper function; you need to provide the input image in base64 format yourself
const weaviate = require('weaviate-client');

const client = weaviate.client({
  scheme: 'http',
  host: 'localhost:8080',
});

client.graphql
  .get()
  .withClassName('FashionItem')
  .withFields('image')
  .withNearImage({ image: '/9j/4AAQSkZJRgABAgE...' })
  .do()
  .then(res => {
    console.log(res);
  })
  .catch(err => {
    console.error(err);
  });
package main

import (
  "context"
  "fmt"
  "os"

  "github.com/weaviate/weaviate-go-client/v4/weaviate"
  "github.com/weaviate/weaviate-go-client/v4/weaviate/graphql"
)

func main() {
  cfg := weaviate.Config{
    Host:   "localhost:8080",
    Scheme: "http",
  }
  client, err := weaviate.NewClient(cfg)
  if err != nil {
    panic(err)
  }

  className := "FashionItem"
  image := graphql.Field{Name: "image"}

  filename := "my_image_path.png"
  file, err := os.Open(filename)
  if err != nil {
    panic(err)
  }
  defer file.Close() // release the file handle when done

  nearImage := client.GraphQL().NearImageArgBuilder().WithReader(file)

  ctx := context.Background()

  result, err := client.GraphQL().Get().
    WithClassName(className).
    WithFields(image).
    WithNearImage(nearImage).
    Do(ctx)
  if err != nil {
    panic(err)
  }
  fmt.Printf("%v", result)
}
package io.weaviate;

import java.io.File;
import io.weaviate.client.Config;
import io.weaviate.client.WeaviateClient;
import io.weaviate.client.base.Result;
import io.weaviate.client.v1.graphql.model.GraphQLResponse;
import io.weaviate.client.v1.graphql.query.argument.NearImageArgument;
import io.weaviate.client.v1.graphql.query.fields.Field;

public class App {
  public static void main(String[] args) {
    Config config = new Config("http", "localhost:8080");
    WeaviateClient client = new WeaviateClient(config);

    File imageFile = new File("my_image_path.png");
    String className = "FashionItem";
    Field image = Field.builder().name("image").build();

    NearImageArgument nearImage = client.graphQL().arguments().nearImageArgBuilder()
      .imageFile(imageFile)
      .build();

    Result<GraphQLResponse> result = client.graphQL().get()
      .withClassName(className)
      .withFields(image)
      .withNearImage(nearImage)
      .run();

    if (result.hasErrors()) {
      System.out.println(result.getError());
      return;
    }
    System.out.println(result.getResult());
  }
}
$ echo '{
  "query": "{
    Get {
      FashionItem(nearImage: {
        image: \"/9j/4AAQSkZJRgABAgE...\"
      }) {
        image
      }
    }
  }"
}' | curl \
    -X POST \
    -H 'Content-Type: application/json' \
    -d @- \
    http://localhost:8080/v1/graphql
Distance
You can set a maximum allowed distance, which will be used to determine which data results to return. The interpretation of the value of the distance field depends on the distance metric used.
If the distance metric is cosine, you can also use certainty instead of distance. Certainty normalizes the distance to a range of 0..1, where 0 represents a perfect opposite (cosine distance of 2) and 1 represents vectors with an identical angle (cosine distance of 0). Certainty is not available on non-cosine distance metrics.
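For the cosine metric, the two values are a simple linear mapping of one another. A one-line sketch, derived from the endpoints stated above:

def certainty_from_cosine_distance(distance: float) -> float:
    """Map a cosine distance in [0, 2] to a certainty in [0, 1]."""
    return 1.0 - distance / 2.0

assert certainty_from_cosine_distance(0.0) == 1.0  # identical angle
assert certainty_from_cosine_distance(2.0) == 0.0  # perfect opposite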
Moving
Because pagination is not possible in multidimensional storage, you can improve your results with additional explore functions, which can move away from or towards semantic concepts. E.g., if you look for the concept 'New York Times' but don't want to find the city New York, you can use the moveAwayFrom{} function with the words 'New York'. This is also a way to exclude concepts and to deal with negations (not operators in similar query languages). Concepts in the moveAwayFrom{} filter are not by definition excluded from the result, but the resulting concepts are further away from the concepts in this filter.
Moving can be done based on concepts and/or objects.
- concepts requires a list of one or more words
- objects requires a list of one or more objects, given by their id or beacon. For example:
{
  Get{
    Publication(
      nearText: {
        concepts: ["fashion"],
        distance: 0.6,
        moveTo: {
          objects: [{
            beacon: "weaviate://localhost/Article/e5dc4a4c-ef0f-3aed-89a3-a73435c6bbcf"
          }, {
            id: "9f0c7463-8633-30ff-99e9-fd84349018f5"
          }],
          concepts: ["summer"],
          force: 0.9
        }
      }
    ){
      name
      _additional {
        distance
        id
      }
    }
  }
}
More resources
If you can't find the answer to your question here, please look at the:
- Frequently Asked Questions. Or,
- Knowledge base of old issues. Or,
- For questions: Stackoverflow. Or,
- For issues: GitHub. Or,
- Ask your question in the Slack channel.