tpcp.caching.global_ram_cache(max_n: int | None = None, *, cache_only: Sequence[str] | None = None)

Wrap an algorithm/pipeline class to enable LRU-based RAM caching for its primary action method.


Notes

When using this decorator, action calls are not made on the original object but on a clone, and only the results are "re-attached" to the original object. If you rely on side effects of the action method, this might cause issues. But, if you are relying on side effects, you are probably doing something wrong anyway.


Parameters

max_n (int | None) – The maximum number of entries in the cache. If None, the cache will grow without limit.


cache_only (Sequence[str] | None) – A list of strings defining which results should be cached. If None, all results are cached. If you cache only a subset of the results, only those results will be available on the returned objects; likewise, during the first (uncached) call, only the results that are cached will be available. This can be helpful to reduce the size of the cache.

Returns

The algorithm class with the cached action method.
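The effect of the cache_only parameter described above can be sketched as a simple filter over an action's results. Again, this is a conceptual sketch, not tpcp's implementation; the result names (`peaks_`, `n_samples_`) are hypothetical:

```python
def action(params, data):
    # Toy action producing two results (names are illustrative).
    return {
        "peaks_": [x for x in data if x > params["threshold"]],
        "n_samples_": len(data),
    }


def filtered_action(params, data, cache_only=None):
    # With cache_only set, only the listed results are kept (and would be
    # stored in the cache); all other results are dropped entirely.
    results = action(params, data)
    if cache_only is not None:
        results = {k: v for k, v in results.items() if k in cache_only}
    return results


print(filtered_action({"threshold": 2}, [1, 2, 3, 4]))
# -> {'peaks_': [3, 4], 'n_samples_': 4}
print(filtered_action({"threshold": 2}, [1, 2, 3, 4], cache_only=["peaks_"]))
# -> {'peaks_': [3, 4]}
```

This is why, as noted above, a result excluded via cache_only is unavailable even on the first (uncached) call: it is filtered out before anything is stored or re-attached.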

See also

global_disk_cache
    Same as this function, but uses a disk cache instead of an LRU cache in RAM.
