Dataset#

class tpcp.Dataset(*, groupby_cols: list[str] | str | None = None, subset_index: DataFrame | None = None)[source]#

Base class for tpcp Dataset objects.

This class provides fundamental functionality like iteration, getting subsets, and compatibility with sklearn’s cross validation helpers.

For more information check out the examples and user guides on datasets.

Parameters:
groupby_cols

A column name or a list of column names that should be used to group the index before iterating over it. For examples see below.

subset_index

For all classes that inherit from this class, subset_index must default to None. Subclasses are instead expected to implement a create_index method that returns a DataFrame representing the full index.

Attributes:
group

Get the current group label.

group_label

Get the current group label.

group_labels

Get all group labels of the dataset based on the set groupby level.

grouped_index

Return the index with the groupby columns set as multiindex.

groups

Get the current group labels.

index

Get index.

index_is_unchanged

Returns True if the index is the same as the one created by create_index.

shape

Get the shape of the dataset.

Examples

This class is usually not meant to be used directly, but the following code snippets show some common operations that can be expected to work for all dataset subclasses.

>>> import pandas as pd
>>> from itertools import product
>>>
>>> from tpcp import Dataset
>>>
>>> test_index = pd.DataFrame(
...     list(product(("patient_1", "patient_2", "patient_3"), ("test_1", "test_2"), ("1", "2"))),
...     columns=["patient", "test", "extra"],
... )
>>> # We create a little dummy dataset by passing an index directly as `subset_index`
>>> # Usually we would create a subclass with a `create_index` method that returns a DataFrame representing the
>>> # index.
>>> dataset = Dataset(subset_index=test_index)
>>> dataset
Dataset [12 groups/rows]

         patient    test extra
   0   patient_1  test_1     1
   1   patient_1  test_1     2
   2   patient_1  test_2     1
   3   patient_1  test_2     2
   4   patient_2  test_1     1
   5   patient_2  test_1     2
   6   patient_2  test_2     1
   7   patient_2  test_2     2
   8   patient_3  test_1     1
   9   patient_3  test_1     2
   10  patient_3  test_2     1
   11  patient_3  test_2     2

We can loop over the dataset. By default, we will loop over each row.

>>> for r in dataset[:2]:
...     print(r)
Dataset [1 groups/rows]

        patient    test extra
   0  patient_1  test_1     1
Dataset [1 groups/rows]

        patient    test extra
   0  patient_1  test_1     2

We can also change the grouping (either in the init or afterwards) to loop over other combinations. If we group by patient and test, we will loop over all patient-test combinations.

>>> grouped_dataset = dataset.groupby(["patient", "test"])
>>> grouped_dataset  
Dataset [6 groups/rows]

                       patient    test extra
   patient   test
   patient_1 test_1  patient_1  test_1     1
             test_1  patient_1  test_1     2
             test_2  patient_1  test_2     1
             test_2  patient_1  test_2     2
   patient_2 test_1  patient_2  test_1     1
             test_1  patient_2  test_1     2
             test_2  patient_2  test_2     1
             test_2  patient_2  test_2     2
   patient_3 test_1  patient_3  test_1     1
             test_1  patient_3  test_1     2
             test_2  patient_3  test_2     1
             test_2  patient_3  test_2     2
>>> for r in grouped_dataset[:2]:
...     print(r)  
Dataset [1 groups/rows]

                       patient    test extra
   patient   test
   patient_1 test_1  patient_1  test_1     1
             test_1  patient_1  test_1     2
Dataset [1 groups/rows]

                       patient    test extra
   patient   test
   patient_1 test_2  patient_1  test_2     1
             test_2  patient_1  test_2     2

To iterate over the unique values of a specific level, use the iter_level method:

>>> for r in list(grouped_dataset.iter_level("patient"))[:2]:
...     print(r)  
Dataset [2 groups/rows]

                       patient    test extra
   patient   test
   patient_1 test_1  patient_1  test_1     1
             test_1  patient_1  test_1     2
             test_2  patient_1  test_2     1
             test_2  patient_1  test_2     2
Dataset [2 groups/rows]

                       patient    test extra
   patient   test
   patient_2 test_1  patient_2  test_1     1
             test_1  patient_2  test_1     2
             test_2  patient_2  test_2     1
             test_2  patient_2  test_2     2

We can also get arbitrary subsets from the dataset:

>>> subset = grouped_dataset.get_subset(patient=["patient_1", "patient_2"], extra="2")
>>> subset  
Dataset [4 groups/rows]

                       patient    test extra
   patient   test
   patient_1 test_1  patient_1  test_1     2
             test_2  patient_1  test_2     2
   patient_2 test_1  patient_2  test_1     2
             test_2  patient_2  test_2     2

If we want to use datasets in combination with GroupKFold, we can generate valid group labels as follows. These group labels are strings representing the unique values of the index at the specified levels.

Note

You usually don’t want to use that in combination with self.groupby.

>>> # We are using the ungrouped dataset again!
>>> group_labels = dataset.create_string_group_labels(["patient", "test"])
>>> pd.concat([dataset.index, pd.Series(group_labels, name="group_labels")], axis=1)
      patient    test extra             group_labels
0   patient_1  test_1     1  ('patient_1', 'test_1')
1   patient_1  test_1     2  ('patient_1', 'test_1')
2   patient_1  test_2     1  ('patient_1', 'test_2')
3   patient_1  test_2     2  ('patient_1', 'test_2')
4   patient_2  test_1     1  ('patient_2', 'test_1')
5   patient_2  test_1     2  ('patient_2', 'test_1')
6   patient_2  test_2     1  ('patient_2', 'test_2')
7   patient_2  test_2     2  ('patient_2', 'test_2')
8   patient_3  test_1     1  ('patient_3', 'test_1')
9   patient_3  test_1     2  ('patient_3', 'test_1')
10  patient_3  test_2     1  ('patient_3', 'test_2')
11  patient_3  test_2     2  ('patient_3', 'test_2')
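
As a small illustrative sketch (assuming scikit-learn is installed), these labels can be passed as the groups argument of sklearn.model_selection.GroupKFold, so that all rows sharing a patient-test combination land in the same fold:

>>> from sklearn.model_selection import GroupKFold
>>> splitter = GroupKFold(n_splits=2)
>>> splits = list(splitter.split(dataset.index, groups=group_labels))
>>> len(splits)
2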

Methods

as_attrs()

Return a version of the Dataset class that can be subclassed using attrs defined classes.

as_dataclass()

Return a version of the Dataset class that can be subclassed using dataclasses.

assert_is_single(groupby_cols, property_name)

Raise an error if the index contains more than one group/row with the given groupby settings.

assert_is_single_group(property_name)

Raise an error if the index contains more than one group/row.

clone()

Create a new instance of the class with all parameters copied over.

create_index()

Create the full index for the dataset.

create_string_group_labels(label_cols)

Generate a list of string labels for each group/row in the dataset.

get_params([deep])

Get parameters for this algorithm.

get_subset(*[, group_labels, index, bool_map])

Get a subset of the dataset.

groupby(groupby_cols)

Return a copy of the dataset grouped by the specified columns.

index_as_tuples()

Get all datapoint labels of the dataset (i.e. a list of the rows of the index as named tuples).

is_single(groupby_cols)

Return True if index contains only one row/group with the given groupby settings.

is_single_group()

Return True if index contains only one group.

iter_level(level)

Return generator object containing a subset for every category from the selected level.

set_params(**params)

Set the parameters of this Algorithm.

create_group_labels

__init__(*, groupby_cols: list[str] | str | None = None, subset_index: DataFrame | None = None) None[source]#
classmethod as_attrs()[source]#

Return a version of the Dataset class that can be subclassed using attrs defined classes.

Note, this requires attrs to be installed!

classmethod as_dataclass()[source]#

Return a version of the Dataset class that can be subclassed using dataclasses.
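
A minimal sketch of how such a subclass might look (the field name and the eq=False/repr=False settings are assumptions, chosen here to keep tpcp's own comparison and repr behavior):

>>> import dataclasses
>>> import pandas as pd
>>> from tpcp import Dataset
>>>
>>> @dataclasses.dataclass(eq=False, repr=False)
... class CustomDataset(Dataset.as_dataclass()):
...     data_folder: str = "."  # illustrative custom parameter
...
...     def create_index(self) -> pd.DataFrame:
...         # A deterministic dummy index; a real implementation would scan `data_folder`.
...         return pd.DataFrame({"participant": ["p1", "p2"]})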

assert_is_single(groupby_cols: list[str] | str | None, property_name) None[source]#

Raise an error if the index contains more than one group/row with the given groupby settings.

This should be used when implementing access to data values that can only be accessed when only a single trial/participant/etc. exists in the dataset (see the sketch below).

Parameters:
groupby_cols

None (no grouping) or a valid subset of the columns available in the dataset index.

property_name

Name of the property this check is used in. Used to format the error message.
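
A hedged sketch of this usage pattern: a subclass (the names here are purely illustrative) guards a data-access property with this check, so accessing the data fails with a descriptive error unless the dataset has been subset to a single recording:

>>> import pandas as pd
>>> from tpcp import Dataset
>>>
>>> class ECGDataset(Dataset):  # illustrative subclass
...     def create_index(self) -> pd.DataFrame:
...         return pd.DataFrame({"patient": ["p1", "p2"], "test": ["t1", "t1"]})
...
...     @property
...     def data(self) -> str:
...         # Only allow access once a single patient-test combination is left.
...         self.assert_is_single(["patient", "test"], "data")
...         return f"loaded data for {self.index['patient'].iloc[0]}"
>>> ECGDataset()[0].data
'loaded data for p1'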

assert_is_single_group(property_name) None[source]#

Raise an error if the index contains more than one group/row.

Note that this is different from assert_is_single as it is aware of the current grouping. Instead of checking that a certain combination of columns is left in the dataset, it checks that only a single group exists with the already selected grouping as defined by self.groupby_cols.

Parameters:
property_name

Name of the property this check is used in. Used to format the error message.

clone() Self[source]#

Create a new instance of the class with all parameters copied over.

This will create a new instance of the class itself and of all nested objects.

create_index() DataFrame[source]#

Create the full index for the dataset.

This needs to be implemented by the subclass.

Warning

Make absolutely sure that the dataframe you return is deterministic and does not change between runs! A non-deterministic index can lead to some nasty bugs! We try to catch them internally, but it is not always possible. As general tips, avoid relying on random numbers and make sure the order does not depend on things like file-system order when creating an index by scanning a directory. Particularly nasty are cases involving unsorted containers like set, which sometimes maintain their order and sometimes don't. At the very least, we recommend sorting the final dataframe you return in create_index.
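
As a hedged sketch of that advice (the folder path and column name are illustrative), an index built by scanning a directory could be made deterministic like this:

>>> from pathlib import Path
>>> import pandas as pd
>>> from tpcp import Dataset
>>>
>>> class FolderDataset(Dataset):  # illustrative subclass
...     def create_index(self) -> pd.DataFrame:
...         # sorted() removes any dependence on file-system order.
...         files = sorted(Path("./my_data").glob("*.csv"))  # hypothetical data folder
...         return pd.DataFrame({"participant": [f.stem for f in files]}).sort_values(
...             "participant", ignore_index=True
...         )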

create_string_group_labels(label_cols: str | list[str]) list[str][source]#

Generate a list of string labels for each group/row in the dataset.

Note

This has a different use case than the dataset-wide groupby. Using groupby reduces the effective size of the dataset to the number of groups. This method produces a group label for each group/row that is already in the dataset, without changing the dataset.

The output of this method can be used in combination with GroupKFold as the group label.

Parameters:
label_cols

The columns that should be included in the label. If the dataset is already grouped, this must be a subset of self.groupby_cols.

get_params(deep: bool = True) dict[str, Any][source]#

Get parameters for this algorithm.

Parameters:
deep

Only relevant if the object contains nested algorithm objects. If this is the case and deep is True, the params of these nested objects are included in the output using a prefix like nested_object_name__ (note the two underscores at the end).

Returns:
params

Parameter names mapped to their values.

get_subset(*, group_labels: list[tuple[str, ...]] | None = None, index: DataFrame | None = None, bool_map: Sequence[bool] | None = None, **kwargs: list[str] | str) Self[source]#

Get a subset of the dataset.

Note

All arguments are mutually exclusive!

Parameters:
group_labels

A valid row locator or slice that can be passed to self.grouped_index.loc[locator, :]. This basically needs to be a subset of self.group_labels. Note that this is the only indexer that works on the grouped index. All other indexers work on the pure index.

index

pd.DataFrame that is a valid subset of the current dataset index.

bool_map

A bool-map that is used to index the current index-dataframe. The list must have the same length as the number of rows in the index (see the sketch below).

**kwargs

The key must be the name of an index column. The value is a string or a list of strings corresponding to the categories that should be kept. For examples see above.

Returns:
subset

New dataset object filtered by specified parameters.
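
For illustration, reusing the dummy dataset from the class examples above, a boolean map over the (ungrouped) index selects rows directly:

>>> bool_map = (dataset.index["patient"] == "patient_1").to_list()
>>> dataset.get_subset(bool_map=bool_map).shape
(4,)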

property group: GroupLabelT#

Get the current group label. Deprecated: use group_label instead.

property group_label: GroupLabelT#

Get the current group label.

The group is defined by the current groupby settings.

Note that this attribute can only be used if there is just a single group. It returns a named tuple. The tuple will contain only one entry if there is only a single groupby column or index column. The elements of the named tuple have the same names as the groupby columns and appear in the same order.
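
Continuing the dummy dataset from the class examples, reading the label of a single-row subset might look like this (a hedged sketch):

>>> single_row = dataset[0]
>>> single_row.group_label.patient
'patient_1'
>>> single_row.group_label.test
'test_1'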

property group_labels: list[GroupLabelT]#

Get all group labels of the dataset based on the set groupby level.

This will return a list of named tuples. The tuples will contain only one entry if there is only one groupby level or index column.

The elements of the named tuples will have the same names as the groupby columns and will be in the same order.

Note that if one of the groupby levels/index columns is not a valid Python attribute name (e.g. it contains spaces or starts with a number), the named tuple will not contain the correct column name! For more information see the documentation of the rename parameter of collections.namedtuple.

For some examples and additional explanation see this example.

groupby(groupby_cols: list[str] | str | None) Self[source]#

Return a copy of the dataset grouped by the specified columns.

This does not change the order of the rows of the dataset index.

Each unique group represents a single data point in the resulting dataset.

Parameters:
groupby_cols

None (no grouping) or a valid subset of the columns available in the dataset index.

property grouped_index: DataFrame#

Return the index with the groupby columns set as multiindex.

property groups: list[GroupLabelT]#

Get the current group labels. Deprecated: use group_labels instead.

property index: DataFrame#

Get index.

index_as_tuples() list[GroupLabelT][source]#

Get all datapoint labels of the dataset (i.e. a list of the rows of the index as named tuples).
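
Sticking with the dummy dataset from the class examples, a short illustrative check:

>>> labels = dataset.index_as_tuples()
>>> len(labels)
12
>>> labels[0].patient
'patient_1'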

property index_is_unchanged: bool#

Returns True if the index is the same as the one created by create_index.

This can be used to check if the index represents a subset or the actual full index. Note that this is independent of the groupby_cols setting.

Note

Under the hood, this uses the attrs functionality of pandas to store a hash of the original index on the dataframe. If the index is modified or a new index is created, this attribute either no longer exists or its content changes.

is_single(groupby_cols: list[str] | str | None) bool[source]#

Return True if index contains only one row/group with the given groupby settings.

If groupby_cols=None this checks if there is only a single row left. If you want to check if there is only a single group within the current grouping, use is_single_group instead.

Parameters:
groupby_cols

None (no grouping) or a valid subset of the columns available in the dataset index.
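
A short illustrative sketch, again using the dummy dataset from the class examples:

>>> dataset.is_single(None)
False
>>> dataset.get_subset(patient="patient_1", test="test_1", extra="1").is_single(None)
True
>>> dataset.get_subset(patient="patient_1", test="test_1").is_single(["patient", "test"])
True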

is_single_group() bool[source]#

Return True if index contains only one group.

iter_level(level: str) Iterator[Self][source]#

Return generator object containing a subset for every category from the selected level.

Parameters:
level

The level that shall be used for iterating. This must be one of the column names of the index.

Returns:
subset

New dataset object containing only one category in the specified level.

set_params(**params: Any) Self[source]#

Set the parameters of this Algorithm.

To set parameters of nested objects use nested_object_name__para_name=.
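
For illustration (reusing the dummy dataset from the class examples), setting groupby_cols via set_params is equivalent to passing it in the init:

>>> dataset.clone().set_params(groupby_cols="patient").shape
(3,)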

property shape: tuple[int]#

Get the shape of the dataset.

This only reports a single dimension. It is equal to the number of rows in the index if self.groupby_cols is None. Otherwise, it is equal to the number of unique groups.
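
Again using the dummy dataset from the class examples as a hedged illustration:

>>> dataset.shape
(12,)
>>> dataset.groupby(["patient", "test"]).shape
(6,)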

Examples using tpcp.Dataset#

Custom Dataset - Basics

Custom Dataset - A real world example

The final ECG Example dataset

Algorithms - A real world example: QRS-Detection

Grid Search optimal Algorithm Parameter

Optimizable Pipelines

GridSearchCV

Custom Optuna Optimizer

Build-in Optuna Optimizers

Validation

Cross Validation

Custom Scorer

Advanced cross-validation

Tensorflow/Keras

Caching

TypedIterator

Optimization Info