When running Weaviate with Docker or Kubernetes, you can persist its data by mounting a volume that stores the data outside of the containers. On restart, the Weaviate instance will then load its data from the mounted volume.
Note that Weaviate now offers native backup modules starting with v1.15 for single-node instances, and v1.16 for multi-node instances. For older versions of Weaviate, persisting data as described here will allow you to back up Weaviate.
When running Weaviate with Docker Compose, you can set the `volumes` variable under the `weaviate` service and a unique cluster hostname as an environment variable.
- About the volumes
  - `/var/weaviate` is the location where you want to store the data on the local machine
  - `/var/lib/weaviate` (after the colon) is the location inside the container; don't change this
- About the hostname
  - `CLUSTER_HOSTNAME` can be any arbitrarily chosen name
If you want more verbose output, you can change the log level through the `LOG_LEVEL` environment variable.
A complete example of Weaviate without modules, but with an externally mounted volume and more verbose output:

```yaml
services:
  weaviate:
    volumes:
      - /var/weaviate:/var/lib/weaviate # <== set a volume here
    environment:
      CLUSTER_HOSTNAME: 'node1' # <== this can be set to an arbitrary name
```
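Put together as a runnable `docker-compose.yml`, this might look like the sketch below. The image tag and the `LOG_LEVEL` value are illustrative; pin the version you actually run:

```yaml
services:
  weaviate:
    image: semitechnologies/weaviate:1.24.1   # example tag, pin your own
    ports:
      - 8080:8080
    environment:
      LOG_LEVEL: 'debug'                       # more verbose output
      CLUSTER_HOSTNAME: 'node1'                # arbitrary node name
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
    volumes:
      - /var/weaviate:/var/lib/weaviate        # persist data on the host
```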
For a Kubernetes setup, the only thing to bear in mind is that Weaviate needs a PersistentVolumeClaim, but the Helm chart is already configured to store the data on an external volume.
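As a hedged sketch, the size and class of that volume can typically be adjusted in the chart's `values.yaml`; the key names below follow the chart's storage block, so verify them against the chart version you deploy:

```yaml
storage:
  size: 32Gi             # size of the PersistentVolumeClaim
  storageClassName: ""   # empty string uses the cluster's default StorageClass
```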
Disk Pressure Warnings and Limits
Starting with v1.12.0, there are two levels of disk usage notifications and actions, configured through environment variables. Both variables are optional; if not set, they default to the values outlined below:
| Environment variable | Default | Behavior |
| --- | --- | --- |
| `DISK_USE_WARNING_PERCENTAGE` | 80 | If disk usage is higher than the given percentage, a warning will be logged by all shards on the affected node's disk |
| `DISK_USE_READONLY_PERCENTAGE` | 90 | If disk usage is higher than the given percentage, all shards on the affected node will be marked as `READONLY` |
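Both thresholds (`DISK_USE_WARNING_PERCENTAGE` and `DISK_USE_READONLY_PERCENTAGE`) can be tuned in the Compose environment block; the values below mirror the defaults:

```yaml
environment:
  DISK_USE_WARNING_PERCENTAGE: 80    # warn at 80% disk usage
  DISK_USE_READONLY_PERCENTAGE: 90   # mark shards READONLY at 90%
```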
If a shard was marked `READONLY` due to disk pressure and you want to mark the shard as ready again (either because you have made more space available or changed the thresholds), you can use the Shards API to do so.
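A minimal sketch of that call, assuming a local instance and illustrative class and shard names; the Shards API endpoint is `PUT /v1/schema/{class}/shards/{shard}` with a `status` body:

```python
# Sketch: flip a READONLY shard back to READY via Weaviate's Shards API.
# "Article" and "node1-shard-0" are hypothetical names for illustration.
import json
import urllib.request


def mark_shard_ready(base_url: str, class_name: str, shard_name: str) -> urllib.request.Request:
    """Build the PUT request that sets a shard's status back to READY."""
    url = f"{base_url}/v1/schema/{class_name}/shards/{shard_name}"
    body = json.dumps({"status": "READY"}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="PUT",
        headers={"content-type": "application/json"},
    )


req = mark_shard_ready("http://localhost:8080", "Article", "node1-shard-0")
# urllib.request.urlopen(req)  # uncomment against a live instance
```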
Virtual memory access method
You can choose between the `mmap` (default) and `pread` functions to access virtual memory by setting the `PERSISTENCE_LSM_ACCESS_STRATEGY` environment variable.
The two functions reflect different under-the-hood memory-management behaviors. `mmap` uses a memory-mapped file, which means that the file is mapped into the virtual memory of the process. `pread` is a function that reads data from a file descriptor at a given offset.
`mmap` may be the preferred option, with memory-management benefits. However, if you experience stalling situations under heavy memory load, we suggest trying `pread` instead.
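For example, switching to `pread` in a Docker Compose setup is a one-line change in the environment block:

```yaml
environment:
  PERSISTENCE_LSM_ACCESS_STRATEGY: pread   # default is mmap
```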