global_ram_cache#

tpcp.caching.global_ram_cache(max_n: int | None = None, *, cache_only: Sequence[str] | None = None)[source]#

Wrap an algorithm/pipeline class to enable LRU-based RAM caching for its primary action method.

Warning

When using this decorator, action calls are never made on the original object, but on a clone, and only the results are “re-attached” to the original object. If you rely on side effects of the action method, this might cause issues. But, if you are relying on side effects, you are probably doing something wrong anyway.

Parameters:
max_n

The maximum number of entries in the cache. If None, the cache will grow without limit.

cache_only

A list of strings that defines which results should be cached. If None, all results are cached. If you only cache a subset of the results, only those results will be available on the returned objects. Also, during the first (uncached) call, only the results that are cached will be available. This might be helpful to reduce the size of the cache.

Returns:
The algorithm class with the cached action method.

See also

tpcp.caching.global_disk_cache

Same as this function, but uses a disk cache instead of an LRU cache in RAM.
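Below is a minimal usage sketch. Only global_ram_cache and the tpcp.Algorithm base class are taken from tpcp; the ExampleAlgorithm class, its detect action method, and the peaks_ result attribute are hypothetical and only illustrate how the decorator is applied. That cache_only expects the result attribute names (including the trailing underscore) is an assumption based on the parameter description above.

from tpcp import Algorithm
from tpcp.caching import global_ram_cache


class ExampleAlgorithm(Algorithm):
    # Hypothetical algorithm used only for illustration.
    _action_methods = "detect"

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold

    def detect(self, data):
        # Stand-in for an expensive computation whose result should be cached.
        self.peaks_ = [i for i, value in enumerate(data) if value > self.threshold]
        return self


# Keep at most 10 entries in the RAM cache and only cache the `peaks_`
# result (assumed to be referenced by its attribute name).
global_ram_cache(max_n=10, cache_only=["peaks_"])(ExampleAlgorithm)

# The first call computes the result; a repeated call with the same
# parameters and inputs is answered from the cache.
algo = ExampleAlgorithm(threshold=0.5).detect([0.1, 0.9, 0.3, 0.8])
print(algo.peaks_)

As described in the warning above, the cached call runs on a clone of the object, and only the cached results are attached back to the object you receive.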

Examples using tpcp.caching.global_ram_cache#

Caching