Cross Validation#

Whenever you use a trainable algorithm, it is important to clearly separate the training and the testing data to get an unbiased result. Usually this is achieved by a train-test split. However, if you don't have that much data, there is always a risk that one random train-test split will provide better (or worse) results than another. In these cases it is a good idea to use cross-validation. In this procedure, you perform multiple train-test splits and average the results over all "folds". For more information see our evaluation guide and the sklearn guide on cross validation.
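To make the averaging step concrete, here is a toy sketch (not part of the original example) of what a 3-fold cross-validation does under the hood:

import numpy as np
from sklearn.model_selection import KFold

# Toy stand-in for a real dataset: 12 samples, like the example dataset below.
X = np.arange(12)

fold_scores = []
for train_idx, test_idx in KFold(n_splits=3).split(X):
    # In a real setup, you would train on X[train_idx] and score on X[test_idx].
    # Here we only record a dummy "score" to show the averaging step.
    fold_scores.append(X[test_idx].mean())

print(np.mean(fold_scores), np.std(fold_scores))  # aggregate over all folds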

In this example, we will learn how to use the cross_validate function implemented in tpcp. For this, we will redo the example on optimizable pipelines, but we will perform the final evaluation via cross-validation. If you want more information on how the dataset and pipeline are built, head over to this example. Here we will just copy the code over.

Dataset#

from pathlib import Path

from examples.datasets.datasets_final_ecg import ECGExampleData

try:
    HERE = Path(__file__).parent
except NameError:
    HERE = Path().resolve()
data_path = HERE.parent.parent / "example_data/ecg_mit_bih_arrhythmia/data"
example_data = ECGExampleData(data_path)

Pipeline#

import pandas as pd

from examples.algorithms.algorithms_qrs_detection_final import OptimizableQrsDetector
from tpcp import OptimizableParameter, OptimizablePipeline, Parameter, cf


class MyPipeline(OptimizablePipeline):
    algorithm: Parameter[OptimizableQrsDetector]
    algorithm__min_r_peak_height_over_baseline: OptimizableParameter[float]

    r_peak_positions_: pd.Series

    def __init__(self, algorithm: OptimizableQrsDetector = cf(OptimizableQrsDetector())):
        self.algorithm = algorithm

    def self_optimize(self, dataset: ECGExampleData, **kwargs):
        ecg_data = [d.data["ecg"] for d in dataset]
        r_peaks = [d.r_peak_positions_["r_peak_position"] for d in dataset]
        # Note: We need to clone the algorithm instance, to make sure we don't leak any data between runs.
        algo = self.algorithm.clone()
        self.algorithm = algo.self_optimize(ecg_data, r_peaks, dataset.sampling_rate_hz)
        return self

    def run(self, datapoint: ECGExampleData):
        # Note: We need to clone the algorithm instance, to make sure we don't leak any data between runs.
        algo = self.algorithm.clone()
        algo.detect(datapoint.data, datapoint.sampling_rate_hz)

        self.r_peak_positions_ = algo.r_peak_positions_
        return self

The Scorer#

The scorer is identical to the scoring function used in the other examples. The F1-score is still the most important metric for our comparison.

from examples.algorithms.algorithms_qrs_detection_final import match_events_with_reference, precision_recall_f1_score


def score(pipeline: MyPipeline, datapoint: ECGExampleData):
    # We use the `safe_run` wrapper instead of just `run`. This is always a good idea.
    # We don't need to clone the pipeline here, as cross_validate will already clone the pipeline
    # internally and `run` will clone it again.
    pipeline = pipeline.safe_run(datapoint)
    tolerance_s = 0.02  # We just use 20 ms for this example
    matches = match_events_with_reference(
        pipeline.r_peak_positions_.to_numpy(),
        datapoint.r_peak_positions_.to_numpy(),
        tolerance=tolerance_s * datapoint.sampling_rate_hz,
    )
    precision, recall, f1_score = precision_recall_f1_score(matches)
    return {"precision": precision, "recall": recall, "f1_score": f1_score}

Data Splitting#

Before performing a cross-validation, we need to decide on the number of folds and the type of splits. In tpcp we support all cross-validation iterators provided by sklearn.

To keep the runtime low for this example, we are going to use a 3-fold CV.

from sklearn.model_selection import KFold

cv = KFold(n_splits=3)
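Purely for illustration (this snippet is not part of the original example), we can peek at which datapoint indices land in each fold. The dataset used here has 12 records, matching the 8 train / 4 test datapoints per fold visible in the run output further down:

# Illustrative only: inspect the index splits produced by the CV iterator.
for fold, (train_idx, test_idx) in enumerate(cv.split(range(12))):
    print(f"Fold {fold}: train={list(train_idx)}, test={list(test_idx)}")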

Cross Validation#

Now we have all the pieces for the final cross validation. First we need to create instances of our data and pipeline. Then we need to wrap our pipeline instance into an Optimize wrapper. Finally, we can call tpcp.validate.cross_validate.

from tpcp.optimize import Optimize
from tpcp.validate import cross_validate

pipe = MyPipeline()
optimizable_pipe = Optimize(pipe)

results = cross_validate(
    optimizable_pipe, example_data, scoring=score, cv=cv, return_optimizer=True, return_train_score=True
)
result_df = pd.DataFrame(results)
result_df
score_time optimize_time train_data_labels test_data_labels optimizer test_precision test_recall test_f1_score test_single_precision test_single_recall test_single_f1_score train_precision train_recall train_f1_score train_single_precision train_single_recall train_single_f1_score
0 0.304279 0.416829 [(group_2, 106), (group_3, 108), (group_1, 114... [(group_1, 100), (group_2, 102), (group_3, 104... Optimize(optimize_with_info=True, pipeline=MyP... 0.975938 0.978213 0.977058 [0.9995600527936648, 0.9723119520073835, 0.962... [0.9995600527936648, 0.9634202103337905, 0.968... [0.9995600527936648, 0.9678456591639871, 0.965... 0.940084 0.775055 0.813250 [0.9470529470529471, 0.9077277970011534, 0.919... [0.9353724716329551, 0.44639818491208166, 0.15... [0.9411764705882353, 0.5984790874524715, 0.260...
1 0.299644 0.485168 [(group_1, 100), (group_2, 102), (group_3, 104... [(group_2, 106), (group_3, 108), (group_1, 114... Optimize(optimize_with_info=True, pipeline=MyP... 0.918491 0.647829 0.710830 [0.9192846785886902, 0.8467005076142132, 0.910... [0.938332511100148, 0.47305728871242203, 0.189... [0.9287109375, 0.6069868995633187, 0.313656387... 0.954602 0.952627 0.953583 [0.9995600527936648, 0.9725022914757103, 0.962... [0.9995600527936648, 0.9702789208962048, 0.968... [0.9995600527936648, 0.9713893339436942, 0.965...
2 0.293074 0.493561 [(group_1, 100), (group_2, 102), (group_3, 104... [(group_3, 119), (group_1, 121), (group_2, 123... Optimize(optimize_with_info=True, pipeline=MyP... 0.941766 0.909605 0.925066 [0.9984909456740443, 1.0, 0.9993416721527321, ... [0.9989934574735783, 0.9243156199677939, 1.0, ... [0.99874213836478, 0.9606694560669455, 0.99967... 0.965628 0.799168 0.833407 [0.9995600527936648, 0.972183588317107, 0.9638... [0.9995600527936648, 0.9588477366255144, 0.968... [0.9995600527936648, 0.9654696132596684, 0.965...


Understanding the Results#

The cross-validation provides a lot of outputs (some of them can be disabled using the function parameters). To simplify things a little, we will split the output into four parts:

The main outputs are the test set performance values. Each row corresponds to the performance in the respective fold.

performance = result_df[["test_precision", "test_recall", "test_f1_score"]]
performance
test_precision test_recall test_f1_score
0 0.975938 0.978213 0.977058
1 0.918491 0.647829 0.710830
2 0.941766 0.909605 0.925066


The final generalization performance you would report is usually the average over all folds. The standard deviation can also be interesting, as it tells you how stable your optimization is and whether your splits provide comparable data distributions.
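In pandas, this aggregation is a one-liner (the variable name is our own):

# Mean and standard deviation of the per-fold test scores.
generalization_performance = performance.agg(["mean", "std"])
generalization_performance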

test_precision test_recall test_f1_score
mean 0.945399 0.845216 0.870985
std 0.028895 0.174350 0.141113


If you need more insight into the results (e.g. when the standard deviation of your results is high), you can inspect the individual scores for each datapoint. Each of these values is a list with one entry per datapoint in the respective test fold. Inspecting these lists can help to identify potential issues with certain parts of your dataset. To link the performance values to a specific datapoint, you can look at the test_data_labels field.

single_performance = result_df[
    ["test_single_precision", "test_single_recall", "test_single_f1_score", "test_data_labels"]
]
single_performance
test_single_precision test_single_recall test_single_f1_score test_data_labels
0 [0.9995600527936648, 0.9723119520073835, 0.962... [0.9995600527936648, 0.9634202103337905, 0.968... [0.9995600527936648, 0.9678456591639871, 0.965... [(group_1, 100), (group_2, 102), (group_3, 104...
1 [0.9192846785886902, 0.8467005076142132, 0.910... [0.938332511100148, 0.47305728871242203, 0.189... [0.9287109375, 0.6069868995633187, 0.313656387... [(group_2, 106), (group_3, 108), (group_1, 114...
2 [0.9984909456740443, 1.0, 0.9993416721527321, ... [0.9989934574735783, 0.9243156199677939, 1.0, ... [0.99874213836478, 0.9606694560669455, 0.99967... [(group_3, 119), (group_1, 121), (group_2, 123...


Even further insight is provided by the train results (if activated via the parameters). These are the performance results on the train set. They can indicate whether the training produced meaningful results, and a test performance that is much worse than the train performance can point to overfitting.

train_performance = result_df[
    [
        "train_precision",
        "train_recall",
        "train_f1_score",
        "train_single_precision",
        "train_single_recall",
        "train_single_f1_score",
        "train_data_labels",
    ]
]
train_performance
train_precision train_recall train_f1_score train_single_precision train_single_recall train_single_f1_score train_data_labels
0 0.940084 0.775055 0.813250 [0.9470529470529471, 0.9077277970011534, 0.919... [0.9353724716329551, 0.44639818491208166, 0.15... [0.9411764705882353, 0.5984790874524715, 0.260... [(group_2, 106), (group_3, 108), (group_1, 114...
1 0.954602 0.952627 0.953583 [0.9995600527936648, 0.9725022914757103, 0.962... [0.9995600527936648, 0.9702789208962048, 0.968... [0.9995600527936648, 0.9713893339436942, 0.965... [(group_1, 100), (group_2, 102), (group_3, 104...
2 0.965628 0.799168 0.833407 [0.9995600527936648, 0.972183588317107, 0.9638... [0.9995600527936648, 0.9588477366255144, 0.968... [0.9995600527936648, 0.9654696132596684, 0.965... [(group_1, 100), (group_2, 102), (group_3, 104...


The final level of debug information is provided via the timings …

timings = result_df[["score_time", "optimize_time"]]
timings
score_time optimize_time
0 0.304279 0.416829
1 0.299644 0.485168
2 0.293074 0.493561


… and the optimized pipeline object. This is the actual trained object generated in this fold. You can apply it to other data for testing or inspect the actual object for further debug information that might be stored on it.
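One way to access the trained object of the first fold could look like this (a sketch that assumes, based on the tpcp Optimize interface, that the fitted wrapper stores the trained pipeline in its optimized_pipeline_ attribute):

# Grab the fitted Optimize wrapper of the first fold ...
first_optimizer = results["optimizer"][0]
print(first_optimizer)
# ... and inspect the parameters of the pipeline it trained.
print(first_optimizer.optimized_pipeline_.get_params())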

Optimize(optimize_with_info=True, pipeline=MyPipeline(algorithm=OptimizableQrsDetector(high_pass_filter_cutoff_hz=1, max_heart_rate_bpm=200.0, min_r_peak_height_over_baseline=1.0, r_peak_match_tolerance_s=0.01)), safe_optimize=True)
{'algorithm__high_pass_filter_cutoff_hz': 1, 'algorithm__max_heart_rate_bpm': 200.0, 'algorithm__min_r_peak_height_over_baseline': 0.6322168257130579, 'algorithm__r_peak_match_tolerance_s': 0.01, 'algorithm': OptimizableQrsDetector(high_pass_filter_cutoff_hz=1, max_heart_rate_bpm=200.0, min_r_peak_height_over_baseline=0.6322168257130579, r_peak_match_tolerance_s=0.01)}

Further Notes#

We also support grouped cross-validation. Check the dataset guide on how you can group the data before the cross-validation or generate data labels to be used with GroupKFold.
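As a plain-sklearn sketch (the group labels here are made up; see the dataset guide for how to derive real ones from a tpcp dataset), grouped splitting works like this:

from sklearn.model_selection import GroupKFold

# Hypothetical group labels: datapoints sharing a label are never split
# across train and test.
groups = ["group_1", "group_1", "group_2", "group_2", "group_3", "group_3"]
for train_idx, test_idx in GroupKFold(n_splits=3).split(range(6), groups=groups):
    print(test_idx)  # each fold holds out complete groups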

Optimize is just one example of an optimizer that can be passed to cross_validate. You can pass any tpcp optimizer like GridSearch or GridSearchCV, or a custom optimizer that implements the tpcp.optimize.BaseOptimize interface.
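For example, swapping in a grid search could look roughly like this (a sketch; the parameter grid is illustrative and not a tuned recommendation):

from sklearn.model_selection import ParameterGrid
from tpcp.optimize import GridSearch

# Candidate values for a regular (non-trainable) hyperparameter of the pipeline.
parameters = ParameterGrid({"algorithm__high_pass_filter_cutoff_hz": [0.25, 0.5, 1]})
gs = GridSearch(MyPipeline(), parameters, scoring=score, return_optimized="f1_score")
# results = cross_validate(gs, example_data, scoring=score, cv=cv)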

Total running time of the script: (0 minutes 6.580 seconds)

Estimated memory usage: 28 MB
