global_ram_cache#
- tpcp.caching.global_ram_cache(max_n: int | None = None, *, cache_only: Sequence[str] | None = None, action_method_name: str | None = None, restore_in_parallel_process: bool = True)[source]#
Wrap an algorithm/pipeline class to enable LRU-based RAM caching for the specified action method.
Warning
When using this decorator, all action calls are not made on the original object, but on a clone, and only the results are “re-attached” to the original object. If you rely on side effects of the action method, this might cause issues. However, if you are relying on side effects, you are probably doing something wrong anyway.
Warning
RAM-cached objects can only be used with parallel processing when the Algorithm/Pipeline class is defined NOT in the main module; otherwise, you will get strange pickling errors. In general, using the RAM cache with multi-processing likely does not make sense, as the RAM cache cannot be shared between the individual processes.
- Parameters:
- max_n
The maximum number of entries in the cache. If None, the cache will grow without limit.
- cache_only
A list of strings that defines which results should be cached. If None, all results are cached. If you cache only a subset of the results, only those results will be available on the returned objects. Also, during the first (uncached) call, only the results that are cached will be available. This might be helpful to reduce the size of the cache.
- action_method_name
If the object you want to cache has multiple action methods, you can specify the name of the action method to cache here.
- restore_in_parallel_process
If True, we will register a global parallel callback so that the cache configuration is correctly restored when using joblib parallel with the tpcp implementation of
delayed
.
Warning
This will only restore the cache settings; the actual cache is not shared and does not carry over to the new process.
- Returns:
- The algorithm class with the cached action method.
See also
tpcp.caching.global_disk_cache
Same as this function, but uses a disk cache instead of an LRU cache in RAM.
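To illustrate the mechanism described above (the action runs on a clone, and only the cached result is re-attached to the original object), here is a minimal self-contained sketch. Note that `SquareAlgo` and `ram_cache` are invented for this example and are not part of tpcp; the real decorator is `tpcp.caching.global_ram_cache`.

```python
from functools import lru_cache

call_count = 0  # counts how often the real computation runs


class SquareAlgo:
    """A toy algorithm; result attributes end in "_" by tpcp convention."""

    def __init__(self, offset=0):
        self.offset = offset

    def action(self, value):
        global call_count
        call_count += 1
        self.result_ = value**2 + self.offset
        return self


def ram_cache(cls, max_n=None):
    """Hypothetical stand-in for global_ram_cache: LRU-cache the action."""
    original_action = cls.action  # capture before replacing

    @lru_cache(maxsize=max_n)  # max_n=None -> cache grows without limit
    def _compute(offset, value):
        clone = cls(offset=offset)  # run on a clone, not the original
        original_action(clone, value)
        return clone.result_

    def action(self, value):
        # Re-attach the (possibly cached) result to the original object.
        self.result_ = _compute(self.offset, value)
        return self

    cls.action = action
    return cls


CachedSquare = ram_cache(SquareAlgo, max_n=2)

a = CachedSquare(offset=1)
a.action(3)
print(a.result_)  # → 10

b = CachedSquare(offset=1)
b.action(3)  # cache hit: the underlying computation is not re-run
print(call_count)  # → 1
```

Because the cache key is built from the parameters and inputs, a second object with identical parameters and inputs reuses the stored result, while the first (uncached) call pays the full computation cost.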