Weave provides server response caching to improve performance when making repeated queries or working with limited network bandwidth. While currently disabled by default, this feature is expected to become the default behavior in a future release.

When to use caching

Server response caching is particularly beneficial when:
  • You frequently run the same queries
  • You have limited network bandwidth
  • You’re working in an environment with high latency
  • You’re developing offline and want to cache responses for later use
This feature is especially useful when running repeated evaluations on a dataset, as it allows caching the dataset between runs.

How to enable caching

To enable caching, you can set the following environment variables:
# Enable server response caching
export WEAVE_USE_SERVER_CACHE=true

# Set cache size limit (default is 1GB)
export WEAVE_SERVER_CACHE_SIZE_LIMIT=1000000000

# Set cache directory (optional, defaults to temporary directory)
export WEAVE_SERVER_CACHE_DIR=/path/to/cache
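The same variables can also be set from Python, as long as this happens before the Weave client is initialized. A minimal sketch (the 500MB size and the `/tmp/weave-cache` path are illustrative choices, not defaults):

```python
import os

# Set these at the very top of your script (or in your shell profile),
# before weave is imported/initialized, so the client picks them up.
os.environ["WEAVE_USE_SERVER_CACHE"] = "true"
os.environ["WEAVE_SERVER_CACHE_SIZE_LIMIT"] = str(500 * 1024 * 1024)  # 500MB, in bytes
os.environ["WEAVE_SERVER_CACHE_DIR"] = "/tmp/weave-cache"  # example path
```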

Caching behavior

This feature caches idempotent requests made against the server. Specifically, the following request types are cached:
  • obj_read
  • table_query
  • table_query_stats
  • refs_read_batch
  • file_content_read
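Because these requests are idempotent, a response can be keyed on the request itself and reused for any identical call. The sketch below illustrates the general idea (it is not Weave's actual implementation; `cached_request` and `fake_send` are hypothetical names):

```python
import hashlib
import json

_cache: dict[str, bytes] = {}

def cached_request(method: str, params: dict, send) -> bytes:
    """Illustrative idempotent-request cache: identical (method, params)
    pairs return the stored response instead of hitting the server."""
    key = hashlib.sha256(
        (method + json.dumps(params, sort_keys=True)).encode()
    ).hexdigest()
    if key not in _cache:
        _cache[key] = send(method, params)  # only reached on a cache miss
    return _cache[key]

# The second identical call is served from the cache, so the
# (simulated) server is contacted only once.
calls = []
def fake_send(method, params):
    calls.append(method)
    return b"response"

cached_request("obj_read", {"digest": "abc"}, fake_send)
cached_request("obj_read", {"digest": "abc"}, fake_send)
assert len(calls) == 1
```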

Cache size and storage details

The cache size is controlled by WEAVE_SERVER_CACHE_SIZE_LIMIT (in bytes). The actual disk space used consists of three components:
  1. A constant 32KB checksum file
  2. A Write-Ahead Log (WAL) file up to ~4MB per running client (automatically removed when the program exits)
  3. The main database file, which is at least 32KB and at most WEAVE_SERVER_CACHE_SIZE_LIMIT
Total disk space used:
  • While running: at most 32KB + ~4MB + cache size
  • After exit: at most 32KB + cache size
For example, with a 5MB cache limit:
  • While running: ~9MB maximum
  • After exit: ~5MB maximum
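The worked example can be checked with a few lines of arithmetic (the byte values are approximations; the WAL figure is a per-client ceiling, not a constant):

```python
# Illustrative check of the disk-usage arithmetic above.
CHECKSUM = 32 * 1024            # constant checksum file
WAL = 4 * 1024 * 1024           # per-client WAL file (removed on exit)
CACHE_LIMIT = 5 * 1024 * 1024   # example 5MB cache limit

while_running = CHECKSUM + WAL + CACHE_LIMIT   # upper bound while running
after_exit = CHECKSUM + CACHE_LIMIT            # upper bound after exit

print(f"{while_running / 2**20:.2f} MB")  # 9.03 MB
print(f"{after_exit / 2**20:.2f} MB")     # 5.03 MB
```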

Troubleshooting