Legacy (v3) API (DEPRECATED)
The v3 client is deprecated
This document relates to the legacy v3 client and API.
Starting in December 2024, v4 client installations will no longer include the v3 API (i.e. the weaviate.Client class). This will help us to provide the best developer experience for you, support the latest features, and clearly separate the two.
The v3 client will continue to get critical security updates and bug fixes for the foreseeable future, but it will not support any new features.
What does this mean for me?
To take advantage of the latest developments in the Weaviate core database, we recommend migrating your codebase to the v4 client API.
Our documentation includes a migration guide here, and many code examples include both v3 and v4 syntax. We will be adding more dedicated resources to ease the migration experience.
If you have an existing codebase and Weaviate core database that you expect to remain static, we recommend pinning the client version in your requirements file (e.g. requirements.txt), like so:
weaviate-client>=3.26.7,<4.0.0
We appreciate that code migration can be cumbersome, but we feel strongly that the end experience and feature set will make your time worthwhile.
If you have specific requests for migration documentation or resources, please reach out through our GitHub repository.
Installation and setup
Requirements
The v3 client does not support the gRPC API that was introduced in Weaviate 1.22. You can still use Weaviate 1.22 and newer with the v3 client, but it will not take advantage of improvements made with the gRPC API. For the gRPC API, use the v4 client.
Installation
The v3 Python library is available on PyPI.org. The package can be installed using pip. The client is developed and tested for Python 3.7 and higher.
pip install "weaviate-client==3.*"
Set-up
Now you can use the client in your Python scripts as follows:
import weaviate
client = weaviate.Client("https://WEAVIATE_INSTANCE_URL") # Replace WEAVIATE_INSTANCE_URL with your instance URL.
assert client.is_ready() # Will return True if the client is connected & the server is ready to accept requests
Or, with additional arguments such as those below:
import weaviate
client = weaviate.Client(
url="https://WEAVIATE_INSTANCE_URL", # URL of your Weaviate instance
auth_client_secret=auth_config, # (Optional) If the Weaviate instance requires authentication
timeout_config=(5, 15), # (Optional) Set connection timeout & read timeout time in seconds
additional_headers={ # (Optional) Any additional headers; e.g. keys for API inference services
"X-Cohere-Api-Key": "YOUR-COHERE-API-KEY", # Replace with your Cohere key
"X-HuggingFace-Api-Key": "YOUR-HUGGINGFACE-API-KEY", # Replace with your Hugging Face key
"X-OpenAI-Api-Key": "YOUR-OPENAI-API-KEY", # Replace with your OpenAI key
}
)
assert client.is_ready() # Will return True if the client is connected & the server is ready to accept requests
Authentication
For more comprehensive information on configuring authentication with Weaviate, refer to the authentication page.
The Python client offers multiple options for authenticating against Weaviate, including multiple OIDC authentication flows.
The suitable authentication options and methods for the client largely depend on the specific configuration of the Weaviate instance.
WCD authentication
Each Weaviate instance in Weaviate Cloud (WCD) is pre-configured to act as a token issuer for OIDC authentication.
See our WCD authentication documentation for instructions on how to authenticate against WCD with your preferred Weaviate client.
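For example, a minimal connection sketch using your WCD account credentials with the resource owner password flow (the cluster URL and credentials below are placeholders for your own details):
import weaviate

auth_config = weaviate.AuthClientPassword(
    username="your-wcd-email@example.com",  # WCD account email (placeholder)
    password="your-wcd-password",           # WCD account password (placeholder)
    # no scope needed for WCD; see the Resource Owner Password Flow section below
)

client = weaviate.Client(
    url="https://your-cluster.weaviate.network",  # WCD cluster URL (placeholder)
    auth_client_secret=auth_config,
)
assert client.is_ready()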
API key authentication
Added in weaviate-client version 3.14.0. If you use an API key to authenticate, instantiate the client like this:
import weaviate
auth_config = weaviate.auth.AuthApiKey(api_key="YOUR-WEAVIATE-API-KEY") # Replace with your Weaviate instance API key
# Instantiate the client with the auth config
client = weaviate.Client(
url="https://WEAVIATE_INSTANCE_URL", # Replace with your Weaviate endpoint
auth_client_secret=auth_config
)
OIDC authentication
To authenticate against Weaviate with OIDC, you must select a flow made available by the identity provider and create the flow-specific authentication configuration. This configuration is then used by the Weaviate client to authenticate. The configuration includes secrets that help the client obtain an access token and, if configured, a refresh token.
The access token is added to the HTTP header of each request and is used for authentication with Weaviate. Typically, this token has a limited lifespan, and the refresh token can be used to obtain a new set of tokens when necessary.
Resource Owner Password Flow
This OIDC flow uses the username and password to obtain required tokens for authentication.
Note that not every provider automatically includes a refresh token, and an appropriate scope (which depends on your identity provider) might be required. The client uses offline_access as the default scope. This works with some providers, but because it depends on the identity provider's configuration, please refer to your identity provider's documentation.
Without a refresh token, there is no way to acquire a new access token, and the client becomes unauthenticated after the token expires.
The Weaviate client does not save the username or password used.
They are only used to obtain the first tokens, after which existing tokens will be used to obtain subsequent tokens if possible.
import weaviate
resource_owner_config = weaviate.AuthClientPassword(
username = "user",
password = "pass",
scope = "offline_access" # optional, depends on the configuration of your identity provider (not required with WCD)
)
# Initiate the client with the auth config
client = weaviate.Client("http://localhost:8080", auth_client_secret=resource_owner_config)
Client Credentials flow
This OIDC flow uses a client secret to obtain the required tokens for authentication.
This flow is recommended for server-to-server communication without end-users and authenticates an application to Weaviate. This authentication flow is typically regarded as more secure than the resource owner password flow: a compromised client secret can be simply revoked, whereas a compromised password may have larger implications beyond the scope of breached authentication.
To authenticate a client secret most identity providers require a scope to be specified. This scope depends on the configuration of the identity providers, so we ask you to refer to the identity provider's documentation.
Most providers do not include a refresh token in their response, so the client secret is saved in the client and used to obtain a new access token when the existing one expires.
import weaviate
client_credentials_config = weaviate.AuthClientCredentials(
client_secret = "client_secret",
scope = "scope1 scope2" # optional, depends on the configuration of your identity provider (not required with WCD)
)
# Initiate the client with the auth config
client = weaviate.Client("https://localhost:8080", auth_client_secret=client_credentials_config)
Refresh Token flow
Any other OIDC authentication method can be used to obtain tokens directly from your identity provider, for example by using this step-by-step guide of the hybrid flow.
If no refresh token is provided, there is no way to obtain a new access token, and the client becomes unauthenticated after the access token expires.
import weaviate
bearer_config = weaviate.AuthBearerToken(
    access_token="some token",
    expires_in=300,  # in seconds, by default 60s
    refresh_token="other token",  # Optional
)
# Initiate the client with the auth config
client = weaviate.Client("https://localhost:8080", auth_client_secret=bearer_config)
Custom headers
You can pass custom headers to the client, which are added at initialization:
client = weaviate.Client(
url="https://localhost:8080",
additional_headers={"HeaderKey": "HeaderValue"},
)
Neural Search Frameworks
There is a variety of neural search frameworks that use Weaviate under the hood to store, search through, and retrieve vectors.
Reference documentation
On this Weaviate documentation website, you will find how to use the Python client for all RESTful endpoints and GraphQL functions. For each reference, a code block is included with an example of how to use the function with the Python (and other) clients. The Python client, however, has additional functionalities, which are covered in the full client documentation on weaviate-python-client.readthedocs.io. Some of these additional functions are highlighted here below.
Example: client.schema.create(schema)
Instead of adding classes one by one using the RESTful v1/schema endpoint, you can upload a full schema in JSON format at once using the Python client. Use the function client.schema.create(schema) as follows:
import weaviate
client = weaviate.Client("http://localhost:8080")
schema = {
"classes": [{
"class": "Publication",
"description": "A publication with an online source",
"properties": [
{
"dataType": [
"text"
],
"description": "Name of the publication",
"name": "name"
},
{
"dataType": [
"Article"
],
"description": "The articles this publication has",
"name": "hasArticles"
},
{
"dataType": [
"geoCoordinates"
],
"description": "Geo location of the HQ",
"name": "headquartersGeoLocation"
}
]
}, {
"class": "Article",
"description": "A written text, for example a news article or blog post",
"properties": [
{
"dataType": [
"text"
],
"description": "Title of the article",
"name": "title"
},
{
"dataType": [
"text"
],
"description": "The content of the article",
"name": "content"
}
]
}, {
"class": "Author",
"description": "The writer of an article",
"properties": [
{
"dataType": [
"text"
],
"description": "Name of the author",
"name": "name"
},
{
"dataType": [
"Article"
],
"description": "Articles this author wrote",
"name": "wroteArticles"
},
{
"dataType": [
"Publication"
],
"description": "The publication this author writes for",
"name": "writesFor"
}
]
}]
}
client.schema.create(schema)
Example: Blog Post on How to get started with Weaviate and the Python client
A full example of how to use the Python client for Weaviate can be found in this article on Towards Data Science.
Batching
Batching is a way of importing/creating objects and references in bulk using a single API request to the Weaviate server. With Python this can be done using three different methods:
- Auto-batching
- Dynamic-batching
- Manual-batching
Generally, we recommend using client.batch in a context manager, which automatically flushes the batch when exiting. This is the easiest way to use the batching functionality.
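For example, a minimal sketch of this pattern (assuming a local instance and an existing Author class):
import weaviate

client = weaviate.Client("http://localhost:8080")

client.batch.configure(batch_size=100, dynamic=True)  # enable automatic batching

with client.batch as batch:
    batch.add_data_object({"name": "Jane Doe"}, "Author")
# batch.flush() is called automatically when the context manager exits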
The following parameters have the greatest impact on the batch import speed:
Parameter | Type | Recommended value | Purpose |
---|---|---|---|
batch_size | integer | 50 - 200 | Initial batch size |
num_workers | integer | 1 - 2 | Maximum number of parallel workers |
dynamic | boolean | True | If true, dynamically adjust the batch_size based on the number of items in the batch |
Multi-threading batch import
Added in weaviate-client version 3.9.0. Multi-threaded batch import works with both Auto-batching and Dynamic-batching.
To use it, set the number of workers (threads) with the num_workers argument in the batch configuration, using .configure(...) (equivalent to .__call__(...)). See also Batch configuration below.
Multi-threading is disabled by default (num_workers=1). Use it with care to avoid overloading your Weaviate instance.
Example
client.batch( # or client.batch.configure(
batch_size=100,
dynamic=True,
num_workers=4,
)
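Since calling client.batch(...) configures the batch and returns it, the call can also be combined with the context manager; a minimal sketch (assuming a local instance and an existing Author class):
import weaviate

client = weaviate.Client("http://localhost:8080")

# configure and enter the batch in one step
with client.batch(batch_size=100, dynamic=True, num_workers=2) as batch:
    batch.add_data_object({"name": "Jane Doe"}, "Author")
# any remaining items are flushed automatically on exit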
Auto-batching
This method lets the Python client handle all object and reference import/creation. This means that you do NOT have to explicitly import/create objects and cross-references. All you need to do is add everything you want imported/created to the Batch, and the Batch takes care of creating the objects and the cross-references among them. To enable auto-batching, configure batch_size to be a positive integer (by default None); see Batch configuration below for more information. The Batch imports/creates objects, then cross-references, whenever the number of objects + number of references == batch_size. See the example below:
import weaviate
from weaviate.util import generate_uuid5
client = weaviate.Client("http://localhost:8080")
# create schema
schema = {
"classes": [
{
"class": "Author",
"properties": [
{
"name": "name",
"dataType": ["text"]
},
{
"name": "wroteBooks",
"dataType": ["Book"]
}
]
},
{
"class": "Book",
"properties": [
{
"name": "title",
"dataType": ["text"]
},
{
"name": "ofAuthor",
"dataType": ["Author"]
}
]
}
]
}
client.schema.create(schema)
author = {
"name": "Jane Doe",
}
book_1 = {
"title": "Jane's Book 1"
}
book_2 = {
"title": "Jane's Book 2"
}
client.batch.configure(
batch_size=5, # int value for batch_size enables auto-batching, see Batch configuration section below
)
with client.batch as batch:
    # add author
    uuid_author = generate_uuid5(author, "Author")
    batch.add_data_object(
        data_object=author,
        class_name="Author",
        uuid=uuid_author,
    )
    # add book_1
    uuid_book_1 = generate_uuid5(book_1, "Book")
    batch.add_data_object(
        data_object=book_1,
        class_name="Book",
        uuid=uuid_book_1,
    )
    # add references author ---> book_1
    batch.add_reference(
        from_object_uuid=uuid_author,
        from_object_class_name="Author",
        from_property_name="wroteBooks",
        to_object_uuid=uuid_book_1,
        to_object_class_name="Book",
    )
    # add references author <--- book_1
    batch.add_reference(
        from_object_uuid=uuid_book_1,
        from_object_class_name="Book",
        from_property_name="ofAuthor",
        to_object_uuid=uuid_author,
        to_object_class_name="Author",
    )
    # add book_2
    uuid_book_2 = generate_uuid5(book_2, "Book")
    batch.add_data_object(
        data_object=book_2,
        class_name="Book",
        uuid=uuid_book_2,
    )
    # add references author ---> book_2
    batch.add_reference(
        from_object_uuid=uuid_author,
        from_object_class_name="Author",
        from_property_name="wroteBooks",
        to_object_uuid=uuid_book_2,
        to_object_class_name="Book",
    )
    # add references author <--- book_2
    batch.add_reference(
        from_object_uuid=uuid_book_2,
        from_object_class_name="Book",
        from_property_name="ofAuthor",
        to_object_uuid=uuid_author,
        to_object_class_name="Author",
    )
# NOTE: When exiting context manager the method `batch.flush()` is called
# done, everything is imported/created
Dynamic-batching
This method lets the Python client handle all object and cross-reference import/creation dynamically. As with Auto-batching, the user does NOT have to explicitly import/create objects and cross-references. To enable dynamic-batching, configure batch_size to be a positive integer (by default None) AND set dynamic to True (by default False); see Batch configuration below for more information. With this method, the Batch computes recommended_num_objects and recommended_num_references after the first batch creation, using batch_size as the initial value for both. The Batch imports/creates objects, then references, whenever the current number of objects reaches recommended_num_objects OR the current number of references reaches recommended_num_references. See the example below:
import weaviate
from weaviate.util import generate_uuid5
client = weaviate.Client("http://localhost:8080")
# create schema
schema = {
"classes": [
{
"class": "Author",
"properties": [
{
"name": "name",
"dataType": ["text"]
},
{
"name": "wroteBooks",
"dataType": ["Book"]
}
]
},
{
"class": "Book",
"properties": [
{
"name": "title",
"dataType": ["text"]
},
{
"name": "ofAuthor",
"dataType": ["Author"]
}
]
}
]
}
client.schema.create(schema)
author = {
"name": "Jane Doe",
}
book_1 = {
"title": "Jane's Book 1"
}
book_2 = {
"title": "Jane's Book 2"
}
client.batch.configure(
batch_size=5, # int value for batch_size enables auto-batching, see Batch configuration section below
dynamic=True, # makes it dynamic
)
with client.batch as batch:
    # add author
    uuid_author = generate_uuid5(author, "Author")
    batch.add_data_object(
        data_object=author,
        class_name="Author",
        uuid=uuid_author,
    )
    # add book_1
    uuid_book_1 = generate_uuid5(book_1, "Book")
    batch.add_data_object(
        data_object=book_1,
        class_name="Book",
        uuid=uuid_book_1,
    )
    # add references author ---> book_1
    batch.add_reference(
        from_object_uuid=uuid_author,
        from_object_class_name="Author",
        from_property_name="wroteBooks",
        to_object_uuid=uuid_book_1,
        to_object_class_name="Book",
    )
    # add references author <--- book_1
    batch.add_reference(
        from_object_uuid=uuid_book_1,
        from_object_class_name="Book",
        from_property_name="ofAuthor",
        to_object_uuid=uuid_author,
        to_object_class_name="Author",
    )
    # add book_2
    uuid_book_2 = generate_uuid5(book_2, "Book")
    batch.add_data_object(
        data_object=book_2,
        class_name="Book",
        uuid=uuid_book_2,
    )
    # add references author ---> book_2
    batch.add_reference(
        from_object_uuid=uuid_author,
        from_object_class_name="Author",
        from_property_name="wroteBooks",
        to_object_uuid=uuid_book_2,
        to_object_class_name="Book",
    )
    # add references author <--- book_2
    batch.add_reference(
        from_object_uuid=uuid_book_2,
        from_object_class_name="Book",
        from_property_name="ofAuthor",
        to_object_uuid=uuid_author,
        to_object_class_name="Author",
    )
# NOTE: When exiting context manager the method `batch.flush()` is called
# done, everything is imported/created
Manual-batching
This method gives the user total control over the Batch, meaning the Batch does not perform any import/creation implicitly but leaves it to the user's discretion. See the example below:
import weaviate
from weaviate.util import generate_uuid5
client = weaviate.Client("http://localhost:8080")
# create schema
schema = {
"classes": [
{
"class": "Author",
"properties": [
{
"name": "name",
"dataType": ["text"]
},
{
"name": "wroteBooks",
"dataType": ["Book"]
}
]
},
{
"class": "Book",
"properties": [
{
"name": "title",
"dataType": ["text"]
},
{
"name": "ofAuthor",
"dataType": ["Author"]
}
]
}
]
}
client.schema.create(schema)
author = {
"name": "Jane Doe",
}
book_1 = {
"title": "Jane's Book 1"
}
book_2 = {
"title": "Jane's Book 2"
}
client.batch.configure(
batch_size=None, # None disable any automatic functionality
)
with client.batch as batch:
    # add author
    uuid_author = generate_uuid5(author, "Author")
    batch.add_data_object(
        data_object=author,
        class_name="Author",
        uuid=uuid_author,
    )
    # add book_1
    uuid_book_1 = generate_uuid5(book_1, "Book")
    batch.add_data_object(
        data_object=book_1,
        class_name="Book",
        uuid=uuid_book_1,
    )
    result = batch.create_objects()  # <----- explicit object creation
    # add references author ---> book_1
    batch.add_reference(
        from_object_uuid=uuid_author,
        from_object_class_name="Author",
        from_property_name="wroteBooks",
        to_object_uuid=uuid_book_1,
        to_object_class_name="Book",
    )
    # add references author <--- book_1
    batch.add_reference(
        from_object_uuid=uuid_book_1,
        from_object_class_name="Book",
        from_property_name="ofAuthor",
        to_object_uuid=uuid_author,
        to_object_class_name="Author",
    )
    result = batch.create_references()  # <----- explicit reference creation
    # add book_2
    uuid_book_2 = generate_uuid5(book_2, "Book")
    batch.add_data_object(
        data_object=book_2,
        class_name="Book",
        uuid=uuid_book_2,
    )
    result = batch.create_objects()  # <----- explicit object creation
    # add references author ---> book_2
    batch.add_reference(
        from_object_uuid=uuid_author,
        from_object_class_name="Author",
        from_property_name="wroteBooks",
        to_object_uuid=uuid_book_2,
        to_object_class_name="Book",
    )
    # add references author <--- book_2
    batch.add_reference(
        from_object_uuid=uuid_book_2,
        from_object_class_name="Book",
        from_property_name="ofAuthor",
        to_object_uuid=uuid_author,
        to_object_class_name="Author",
    )
    result = batch.create_references()  # <----- explicit reference creation
# NOTE: When exiting context manager the method `batch.flush()` is called
# done, everything is imported/created
Batch configuration
The Batch object can be configured using the batch.configure() method or the batch() method (i.e. calling the batch object, __call__); they are the same function. In the examples above we saw that we can configure the batch_size and dynamic parameters. Here are more of the available parameters:
- batch_size (int or None, default None): If it is an int, auto-/dynamic-batching is enabled. For Auto-batching, when number of objects + number of references == batch_size, the Batch imports/creates the current objects, then the references (see Auto-batching for more info). For Dynamic-batching it is used as the initial value for recommended_num_objects and recommended_num_references (see Dynamic-batching for more info). A value of None means Manual-batching, i.e. no automatic object/reference import/creation.
- dynamic (bool, default False): Enables/disables Dynamic-batching. Has no effect if batch_size is None.
- creation_time (int or float, default 10): The interval of time in which the batch import/creation should be completed. It is used to compute recommended_num_objects and recommended_num_references, and consequently has an impact on Dynamic-batching.
- callback (Optional[Callable[[dict], None]], default weaviate.util.check_batch_result): A callback function applied to the results of batch.create_objects() and batch.create_references(). It is used for error handling with Auto-/Dynamic-batching. Has no effect if batch_size is None.
- timeout_retries (int, default 3): Number of attempts to import/create a batch before raising TimeoutError.
- connection_error_retries (int, default 3): Number of attempts to import/create a batch before raising ConnectionError. NOTE: Added in weaviate-client 3.9.0.
- num_workers (int, default 1): The maximum number of concurrent threads used to run the batch import. Only used for non-manual batching, i.e. only with Auto- or Dynamic-batching. Use with care to avoid overloading your Weaviate instance. NOTE: Added in weaviate-client 3.9.0.
NOTE: You have to specify all the configurations that you want at each call of this method, otherwise some settings will be reset to their default values.
client.batch(
batch_size=100,
dynamic=False,
creation_time=5,
timeout_retries=3,
connection_error_retries=5,
callback=None,
num_workers=1,
)
Tips & Tricks
- There is no limit to how many objects/references one could add to a batch before committing/creating it. However, a batch that is too large can lead to a timeout error, which means that Weaviate could not process and create all the objects from the batch in the specified time (the timeout configuration can be set like this or this). Note that setting a timeout configuration higher than 60s would require some changes to the docker-compose.yml/Helm chart file.
- The batch class in the Python client can be used in three ways:
  - Case 1: Everything is done by the user, i.e. the user adds the objects/object-references and creates them whenever they want. To create each data type, use the create_objects, create_references and flush methods of this class. In this case the Batch instance's batch_size is set to None (see the docs for the configure or __call__ method). Can be used in a context manager, see below.
  - Case 2: The batch auto-creates when full. This can be achieved by setting the Batch instance's batch_size to a positive integer (see the docs for the configure or __call__ method). The batch_size in this case corresponds to the sum of added objects and references. This case does not require the user to create the batches, but it can still be done. Also, to create non-full batches (the last batches) that do not meet the requirement to be auto-created, use the flush method. Can be used in a context manager, see below.
  - Case 3: Similar to Case 2, but uses dynamic batching, i.e. it auto-creates either objects or references when one of them reaches recommended_num_objects or recommended_num_references, respectively. See the docs for the configure or __call__ method for how to enable it.
  - Context-manager support: Can be used with the with statement. When it exits the context manager, it calls the flush method for you. Can be combined with the configure or __call__ method to set it to the desired case.
Error Handling
Creating objects in a Batch is faster than creating each object/reference individually, but it comes at the cost of skipping some validation steps. Skipping validation at the object/reference level can result in some objects failing to be created, or some references not being added. In this case the Batch does not fail, but individual objects/references might, and you can make sure that everything was imported/created without errors by checking the return values of batch.create_objects() and batch.create_references(). Here are examples of how to catch and handle errors on individual Batch objects/references.
Let's define a function that checks for such errors and prints them:
def check_batch_result(results: dict):
"""
Check batch results for errors.
Parameters
----------
results : dict
The Weaviate batch creation return value.
"""
if results is not None:
for result in results:
if "result" in result and "errors" in result["result"]:
if "error" in result["result"]["errors"]:
print(result["result"])
Now we can use this function to print the error messages at the item (object/reference) level. Let's look at how we can do it using Auto-/Dynamic-batching, where we never explicitly call the create methods:
client.batch(
batch_size=100,
dynamic=True,
creation_time=5,
timeout_retries=3,
connection_error_retries=3,
callback=check_batch_result,
)
# done, easy as that
For Manual-batching we can call the function on the returned value:
# on objects
result = client.batch.create_objects()
check_batch_result(result)
# on references
result = client.batch.create_references()
check_batch_result(result)
Example code
The following Python code can be used to handle errors on individual data objects in the batch.
import weaviate
client = weaviate.Client("http://localhost:8080")
def check_batch_result(results: dict):
"""
Check batch results for errors.
Parameters
----------
results : dict
The Weaviate batch creation return value, i.e. returned value of the client.batch.create_objects().
"""
if results is not None:
for result in results:
if 'result' in result and 'errors' in result['result']:
if 'error' in result['result']['errors']:
print("We got an error!", result)
object_to_add = {
"name": "Jane Doe",
"writesFor": [{
"beacon": "weaviate://localhost/f81bfe5e-16ba-4615-a516-46c2ae2e5a80"
}]
}
client.batch.configure(
# `batch_size` takes an `int` value to enable auto-batching
# (`None` is used for manual batching)
batch_size=100,
# dynamically update the `batch_size` based on import speed
dynamic=False,
# `timeout_retries` takes an `int` value to retry on time outs
timeout_retries=3,
# checks for batch-item creation errors
# this is the default in weaviate-client >= 3.6.0
callback=check_batch_result,
consistency_level=weaviate.data.replication.ConsistencyLevel.ALL, # default QUORUM
)
with client.batch as batch:
    batch.add_data_object(object_to_add, "Author", "36ddd591-2dee-4e7e-a3cc-eb86d30a4303", vector=[1,2])
    # let's force an error by adding a second object with mismatched vector dimensions
    batch.add_data_object(object_to_add, "Author", "cb7d0da4-ceaa-42d0-a483-282f545deed7", vector=[1,2,3])
This can also be applied to adding references in batches. Note that batch imports, especially of references, skip some validations at the object and reference level. Checking the results as shown above makes it less likely for errors to go unnoticed.
Design
GraphQL query builder pattern
For complex GraphQL queries (e.g. with filters), the client uses a builder pattern to form the queries. An example is the following query with multiple filters:
import weaviate
client = weaviate.Client("http://localhost:8080")
where_filter = {
"path": ["wordCount"],
"operator": "GreaterThan",
"valueInt": 1000
}
near_text_filter = {
"concepts": ["fashion"],
"certainty": 0.7,
"moveAwayFrom": {
"concepts": ["finance"],
"force": 0.45
},
"moveTo": {
"concepts": ["haute couture"],
"force": 0.85
}
}
query_result = client.query\
.get("Article", ["title"])\
.with_where(where_filter)\
.with_near_text(near_text_filter)\
.with_limit(50)\
.do()
print(query_result)
Note that you need to use the .do() method to execute the query.
You can use .build() to inspect the resulting GraphQL query:
query_result = client.query\
.get("Article", ["title"])\
.with_where(where_filter)\
.with_near_text(near_text_filter)\
.with_limit(50)
query_result.build()
>>> '{Get{Article(where: {path: ["wordCount"] operator: GreaterThan valueInt: 1000} limit: 50 nearText: {concepts: ["fashion"] certainty: 0.7 moveTo: {force: 0.85 concepts: ["haute couture"]} moveAwayFrom: {force: 0.45 concepts: ["finance"]}} ){title}}}'
Best practices and notes
Thread-safety
While the Python client is fundamentally designed to be thread-safe, it's important to note that, due to its dependency on the requests library, complete thread safety isn't guaranteed. This is an area that we are looking to improve in the future.
The batching algorithm in our client is not thread-safe. Keep this in mind to help ensure smoother, more predictable operations when using our Python client in multi-threaded environments.
If you are performing batching in a multi-threaded scenario, ensure that only one of the threads is performing the batching workflow at any given time. No two threads can use the same client.batch
object at one time.
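For example, one way to keep the batching workflow on a single thread at a time is to guard it with a lock. This is a minimal sketch, not an official pattern; the import_worker helper and the example data are made up for illustration:
import threading

import weaviate

client = weaviate.Client("http://localhost:8080")
batch_lock = threading.Lock()

def import_worker(objects):
    # Only one thread at a time runs the batching workflow.
    with batch_lock:
        with client.batch as batch:
            for obj in objects:
                batch.add_data_object(obj, "Author")

chunks = [[{"name": "Jane Doe"}], [{"name": "John Doe"}]]  # example data (made up)
threads = [threading.Thread(target=import_worker, args=(chunk,)) for chunk in chunks]
for t in threads:
    t.start()
for t in threads:
    t.join()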
Releases
Go to the GitHub releases page to see the history of the Python client library releases.
This table lists the Weaviate core versions and corresponding client library versions.
Weaviate (GitHub) | First release date | Python (GitHub) | TypeScript/ JavaScript (GitHub) | Go (GitHub) | Java (GitHub) |
---|---|---|---|---|---|
1.27.x | 2024-10-16 | 4.9.x | 3.2.x | 4.16.x | 4.9.x |
1.26.x | 2024-07-22 | 4.7.x | 3.1.x | 4.15.x | 4.8.x |
1.25.x | 2024-05-10 | 4.6.x | 2.1.x | 4.13.x | 4.6.x |
1.24.x | 2024-02-27 | 4.5.x | 2.0.x | 4.10.x | 4.4.x |
1.23.x | 2023-12-18 | 3.26.x | 1.5.x | 4.10.x | 4.4.x |
1.22.x | 2023-10-27 | 3.25.x | 1.5.x | 4.10.x | 4.3.x |
1.21.x | 2023-08-17 | 3.22.x | 1.4.x | 4.9.x | 4.2.x |
1.20.x | 2023-07-06 | 3.22.x | 1.1.x | 4.7.x | 4.2.x |
1.19.x | 2023-05-04 | 3.17.x | 1.1.x | 4.7.x | 4.0.x |
1.18.x | 2023-03-07 | 3.13.x | 2.14.x | 4.6.x | 3.6.x |
1.17.x | 2022-12-20 | 3.9.x | 2.14.x | 4.5.x | 3.5.x |
1.16.x | 2022-10-31 | 3.8.x | 2.13.x | 4.4.x | 3.4.x |
1.15.x | 2022-09-07 | 3.6.x | 2.12.x | 4.3.x | 3.3.x |
1.14.x | 2022-07-07 | 3.6.x | 2.11.x | 4.2.x | 3.2.x |
1.13.x | 2022-05-03 | 3.4.x | 2.9.x | 4.0.x | 2.4.x |
1.12.x | 2022-04-05 | 3.4.x | 2.8.x | 3.0.x | 2.3.x |
1.11.x | 2022-03-14 | 3.2.x | 2.7.x | 2.6.x | 2.3.x |
1.10.x | 2022-01-27 | 3.1.x | 2.5.x | 2.4.x | 2.1.x |
1.9.x | 2021-12-10 | 3.1.x | 2.4.x | 2.4.x | 2.1.x |
1.8.x | 2021-11-30 | 3.1.x | 2.4.x | 2.3.x | 1.1.x |
1.7.x | 2021-09-01 | 3.1.x | 2.4.x | 2.3.x | 1.1.x |
1.6.x | 2021-08-11 | 2.4.x | 2.3.x | 2.2.x | 1.0.x |
1.5.x | 2021-07-13 | 2.2.x | 2.1.x | 2.1.x | 1.0.x |
1.4.x | 2021-06-09 | 2.2.x | 2.1.x | 2.1.x | 1.0.x |
1.3.x | 2021-04-23 | 2.2.x | 2.1.x | 2.1.x | 1.0.x |
1.2.x | 2021-03-15 | 2.2.x | 2.0.x | 1.1.x | - |
1.1.x | 2021-02-10 | 2.1.x | - | - | - |
1.0.x | 2021-01-14 | 2.0.x | - | - | - |
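To compare your own setup against this table, you can check the installed client version and the server version. A small sketch (assuming Python 3.8+ for importlib.metadata and a locally running instance):
from importlib.metadata import version

import weaviate

print(version("weaviate-client"))  # installed Python client version

client = weaviate.Client("http://localhost:8080")
print(client.get_meta()["version"])  # Weaviate core (server) version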
Questions and feedback
If you have any questions or feedback, let us know in the user forum.