Tensorflow/Keras#

Note

This example requires the tensorflow package to be installed.

In theory, tpcp is framework agnostic and can be used with any framework. However, due to the way some frameworks handle their objects, some special handling is required internally. Hence, this example serves not only as an example of how to use tensorflow with tpcp, but also as a test case for these special cases.

When using tpcp with any machine learning framework, you either want to use a pretrained model with a normal pipeline, or train your own model as part of an Optimizable Pipeline. Here we show the second case, as it is more complex, and you should be able to figure out the first case yourself.

This means, we are planning to perform the following steps:

  1. Create a pipeline that creates and trains a model.

  2. Allow the modification of model hyperparameters.

  3. Run a simple cross-validation to demonstrate the functionality.

This example reimplements the basic MNIST example from the [tensorflow documentation](https://www.tensorflow.org/tutorials/keras/classification).

Some Notes#

In this example we show how to implement a Pipeline that uses tensorflow. You could implement an Algorithm in a similar way. This would actually be easier, as no specific handling of the input data would be required. For a pipeline, we need to create a custom Dataset class, as this is the expected input for a pipeline.

The Dataset#

We are using the regular Fashion-MNIST dataset for this example. It consists of 60,000 images of 28x28 pixels, each with a label. We will ignore the typical train-test split, as we want to do our own cross-validation.

In addition, we will simulate an extra “index level”. In this (and in most typical deep learning datasets), each datapoint is one vector for which we can make one prediction. In tpcp, we usually deal with datasets where you might have multiple pieces of information for each datapoint. For example, one datapoint could be a patient for which we have an entire time series of measurements. We simulate this here by creating the index of our dataset as 1000 groups, each containing 60 images.

Other than that, the dataset is pretty standard. Besides the create_index method, we only need to implement the input_as_array and labels_as_array methods, which allow us to easily access the data once we have selected a single group.
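The group-to-image mapping used in the class below can be sketched in isolation: group `g` covers the half-open index range `[g * 60, (g + 1) * 60)` of the flat image array. A minimal sketch, with dummy data standing in for the real images:

```python
import numpy as np

# Dummy stand-in for the 60,000 Fashion-MNIST images (28x28 pixels each).
images = np.zeros((60000, 28, 28))


def group_slice(group_id: int, group_size: int = 60) -> slice:
    """Half-open index range covered by one group of images."""
    return slice(group_id * group_size, (group_id + 1) * group_size)


# Group 0 covers images 0..59; group 999 ends exactly at image 60,000.
assert images[group_slice(0)].shape == (60, 28, 28)
assert group_slice(999).stop == 60000
```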

from functools import lru_cache

import numpy as np
import pandas as pd
import tensorflow as tf
from tpcp import Dataset

tf.keras.utils.set_random_seed(812)
tf.config.experimental.enable_op_determinism()


@lru_cache(maxsize=1)
def get_fashion_mnist_data():
    # Note: We throw train and test sets together, as we don't care about the official split here.
    #       We will create our own split later.
    (train_images, train_labels), (test_images, test_labels) = (
        tf.keras.datasets.fashion_mnist.load_data()
    )
    return np.array(list(train_images) + list(test_images)), list(
        train_labels
    ) + list(test_labels)


class FashionMNIST(Dataset):
    def input_as_array(self) -> np.ndarray:
        self.assert_is_single(None, "input_as_array")
        group_id = int(self.group_label.group_id)
        images, _ = get_fashion_mnist_data()
        return (
            images[group_id * 60 : (group_id + 1) * 60].reshape((60, 28, 28))
            / 255
        )

    def labels_as_array(self) -> np.ndarray:
        self.assert_is_single(None, "labels_as_array")
        group_id = int(self.group_label.group_id)
        _, labels = get_fashion_mnist_data()
        return np.array(labels[group_id * 60 : (group_id + 1) * 60])

    def create_index(self) -> pd.DataFrame:
        # There are 60,000 images in total.
        # We simulate 1000 groups of 60 images each.
        return pd.DataFrame({"group_id": list(range(1000))})

We can see our Dataset works as expected:

dataset = FashionMNIST()
dataset[0].input_as_array().shape

(60, 28, 28)
dataset[0].labels_as_array().shape
(60,)

The Pipeline#

We will create a pipeline that uses a simple neural network to classify the images. In tpcp, everything that should be optimized needs to be a parameter. This means our model itself needs to be a parameter of the pipeline. However, we don’t have the model yet, as its creation depends on other hyperparameters. Hence, we add it as an optional parameter initialized with None. Further, we prefix the parameter name with an underscore to signify that it is not meant to be modified manually by the user. This is just a convention, and it is up to you to decide how you want to name your parameters.

We further introduce a hyperparameter n_dense_layer_nodes to show how we can influence the model creation.

The optimize method#

To make our pipeline optimizable, it needs to inherit from OptimizablePipeline. Further, we need to mark at least one of the parameters as OptiPara using a type annotation. We do this for our _model parameter.

Finally, we need to implement the self_optimize method. This method receives the entire training dataset as input and should update the _model parameter with the trained model. Hence, we first extract the relevant data (remember, each datapoint consists of 60 images) by concatenating all images over all groups in the dataset. Then we create the Keras model based on the hyperparameters. Finally, we train the model and update the _model parameter.
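The stacking step can be pictured on its own: np.vstack merges the per-group image arrays along the first axis, while np.hstack joins the per-group label arrays into one flat vector. A small sketch with dummy arrays in place of the real data:

```python
import numpy as np

# Three dummy groups of 60 images (28x28) with matching label vectors.
groups = [np.zeros((60, 28, 28)) for _ in range(3)]
labels = [np.zeros(60, dtype=int) for _ in range(3)]

data = np.vstack(groups)     # all images stacked along the first axis
targets = np.hstack(labels)  # one flat vector of all labels

assert data.shape == (180, 28, 28)
assert targets.shape == (180,)
```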

Here we chose to wrap the method with make_optimize_safe. This decorator will perform some runtime checks to ensure that the method is implemented correctly.
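To get an intuition for what such a safety decorator does, here is a simplified sketch of the idea (not tpcp’s actual implementation); the specific check shown, that self_optimize must return self, is just one plausible example of such a runtime check:

```python
import functools


def optimize_safe_sketch(func):
    """Illustrative stand-in for the kind of checks make_optimize_safe performs."""

    @functools.wraps(func)
    def wrapper(self, dataset, **kwargs):
        result = func(self, dataset, **kwargs)
        # Example check: self_optimize must return self, so calls can be chained.
        if result is not self:
            raise TypeError("self_optimize must return self")
        return result

    return wrapper


class SketchPipeline:
    @optimize_safe_sketch
    def self_optimize(self, dataset):
        return self  # a real pipeline would train its model here


pipeline = SketchPipeline()
result = pipeline.self_optimize([])
```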

The run method#

The run method expects that the _model parameter is already set (i.e. the pipeline was already optimized). It gets a single datapoint as input (remember, a datapoint is a single group of 60 images). We then extract the data from the datapoint and let the model make a prediction. We store the prediction in our output attribute predictions_. The trailing underscore is a convention to signify that this is a result attribute.
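The predicted_labels_ property below reduces the raw model outputs to class labels with an argmax over the class dimension. A minimal numpy sketch with made-up logits:

```python
import numpy as np

# Fake logits for 3 images over 10 classes (one row per image).
predictions = np.array(
    [
        [0.1, 2.5, -1.0, 0.0, 0.3, 0.2, 0.1, 0.0, 0.4, 0.1],  # largest at index 1
        [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 3.0],   # largest at index 9
        [4.2, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],   # largest at index 0
    ]
)

# argmax along axis 1 picks the most likely class per image.
predicted_labels = np.argmax(predictions, axis=1)
assert predicted_labels.tolist() == [1, 9, 0]
```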

import warnings
from typing import Optional

from tpcp import (
    OptimizablePipeline,
    OptiPara,
    make_action_safe,
    make_optimize_safe,
)
from typing_extensions import Self


class KerasPipeline(OptimizablePipeline):
    n_dense_layer_nodes: int
    n_train_epochs: int
    _model: OptiPara[Optional[tf.keras.Sequential]]

    predictions_: np.ndarray

    def __init__(
        self,
        n_dense_layer_nodes=128,
        n_train_epochs=5,
        _model: Optional[tf.keras.Sequential] = None,
    ):
        self.n_dense_layer_nodes = n_dense_layer_nodes
        self.n_train_epochs = n_train_epochs
        self._model = _model

    @property
    def predicted_labels_(self):
        return np.argmax(self.predictions_, axis=1)

    @make_optimize_safe
    def self_optimize(self, dataset, **_) -> Self:
        data = tf.convert_to_tensor(
            np.vstack([d.input_as_array() for d in dataset])
        )
        labels = tf.convert_to_tensor(
            np.hstack([d.labels_as_array() for d in dataset])
        )

        print(data.shape)
        if self._model is not None:
            warnings.warn("Overwriting existing model!")

        self._model = tf.keras.Sequential(
            [
                tf.keras.layers.Input((28, 28)),
                tf.keras.layers.Flatten(),
                tf.keras.layers.Dense(
                    self.n_dense_layer_nodes, activation="relu"
                ),
                tf.keras.layers.Dense(10),
            ]
        )

        self._model.compile(
            optimizer="adam",
            loss=tf.keras.losses.SparseCategoricalCrossentropy(
                from_logits=True
            ),
            metrics=["accuracy"],
        )

        self._model.fit(data, labels, epochs=self.n_train_epochs)

        return self

    @make_action_safe
    def run(self, datapoint) -> Self:
        if self._model is None:
            raise RuntimeError("Model not trained yet!")
        data = tf.convert_to_tensor(datapoint.input_as_array())

        self.predictions_ = self._model.predict(data)
        return self

Testing the pipeline#

We can now test our pipeline. We will run the optimization using a couple of datapoints (to keep everything fast) and then use run to get the predictions for a single unseen datapoint.

pipeline = KerasPipeline().self_optimize(FashionMNIST()[:10])
p1 = pipeline.run(FashionMNIST()[11])
print(p1.predicted_labels_)
print(FashionMNIST()[11].labels_as_array())
(600, 28, 28)
Epoch 1/5 - accuracy: 0.3623 - loss: 1.8187
Epoch 2/5 - accuracy: 0.6958 - loss: 0.9131
Epoch 3/5 - accuracy: 0.7586 - loss: 0.7140
Epoch 4/5 - accuracy: 0.7986 - loss: 0.5833
Epoch 5/5 - accuracy: 0.8290 - loss: 0.5055
[8 8 6 9 4 0 7 3 7 9 4 8 4 3 7 8 1 4 0 7 9 8 5 5 2 1 3 4 1 9 7 5 9 9 7 8 2
 7 4 7 2 4 7 1 1 7 5 4 8 3 5 9 0 7 4 0 0 9 1 9]
[8 8 0 9 2 0 7 3 7 9 3 8 4 3 7 8 1 4 0 7 9 8 5 5 2 1 3 4 6 7 7 5 9 9 7 8 2
 7 4 7 0 3 5 1 1 5 5 2 8 3 5 9 0 7 3 0 0 7 1 9]

We can see that even with just 5 epochs, the model already performs quite well. To quantify this, we can calculate the accuracy for this datapoint:

from sklearn.metrics import accuracy_score

accuracy_score(p1.predicted_labels_, FashionMNIST()[11].labels_as_array())
0.8

Cross Validation#

If we want to run a cross validation, we need to formalize the scoring into a function. We will calculate two types of accuracy: first, the accuracy per group, and second, the accuracy over all images across all groups. For more information about how this works, check the Custom Scorer example.
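The second accuracy relies on concatenating the per-group results before scoring. The core of that aggregation can be sketched without tpcp (toy data, plain numpy in place of sklearn):

```python
import numpy as np


def pooled_accuracy(values):
    """Concatenate (predicted, reference) pairs from all groups,
    then compute one accuracy over all images."""
    predicted = np.hstack([v[0] for v in values])
    reference = np.hstack([v[1] for v in values])
    return float(np.mean(predicted == reference))


# Two toy groups of 4 images each, with 3/4 correct predictions in each group.
values = [
    (np.array([1, 2, 3, 4]), np.array([1, 2, 0, 4])),
    (np.array([5, 6, 7, 8]), np.array([5, 6, 7, 0])),
]
assert pooled_accuracy(values) == 0.75
```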

from collections.abc import Sequence

from tpcp.validate import Aggregator


class SingleValueAccuracy(Aggregator[tuple[np.ndarray, np.ndarray]]):
    def aggregate(
        self, /, values: Sequence[tuple[np.ndarray, np.ndarray]], **_
    ) -> dict[str, float]:
        return {
            "accuracy": accuracy_score(
                np.hstack([v[0] for v in values]),
                np.hstack([v[1] for v in values]),
            )
        }


single_value_accuracy = SingleValueAccuracy()


def scoring(pipeline, datapoint):
    result: np.ndarray = pipeline.safe_run(datapoint).predicted_labels_
    reference = datapoint.labels_as_array()

    return {
        "accuracy": accuracy_score(result, reference),
        "per_sample": single_value_accuracy((result, reference)),
    }

Now we can run a cross validation. We will only run it on a subset of the data, to keep the runtime manageable.

Note

You might see warnings about retracing of the model. This is because we clone the pipeline before each call to the run method. Cloning is a good idea to ensure that all pipelines are independent of each other; however, it might result in some performance overhead.

from tpcp.optimize import Optimize
from tpcp.validate import cross_validate

pipeline = KerasPipeline(n_train_epochs=10)
cv_results = cross_validate(
    Optimize(pipeline), FashionMNIST()[:100], scoring=scoring, cv=3
)
CV Folds:   0%|          | 0/3 [00:00<?, ?it/s](3960, 28, 28)
Epoch 1/10

  1/124 ━━━━━━━━━━━━━━━━━━━━ 1:31 746ms/step - accuracy: 0.0938 - loss: 2.5074
  2/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.1328 - loss: 2.4094    
 29/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.3437 - loss: 1.7768
 58/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.4516 - loss: 1.4995
 86/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.5104 - loss: 1.3483
115/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.5500 - loss: 1.2482
124/124 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.5611 - loss: 1.2206
Epoch 2/10

  1/124 ━━━━━━━━━━━━━━━━━━━━ 8s 72ms/step - accuracy: 0.8750 - loss: 0.4848
 29/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7857 - loss: 0.6233 
 57/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7911 - loss: 0.6051
 85/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7951 - loss: 0.5929
113/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7978 - loss: 0.5881
124/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7990 - loss: 0.5862
Epoch 3/10

  1/124 ━━━━━━━━━━━━━━━━━━━━ 9s 77ms/step - accuracy: 0.8438 - loss: 0.4153
 29/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8142 - loss: 0.5231 
 57/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8233 - loss: 0.5097
 85/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8272 - loss: 0.5021
113/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8287 - loss: 0.5006
124/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8293 - loss: 0.5000
Epoch 4/10

  1/124 ━━━━━━━━━━━━━━━━━━━━ 9s 76ms/step - accuracy: 0.8750 - loss: 0.3588
 29/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8314 - loss: 0.4573 
 57/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8382 - loss: 0.4511
 85/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8401 - loss: 0.4480
113/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8415 - loss: 0.4488
124/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8421 - loss: 0.4488
Epoch 5/10

  1/124 ━━━━━━━━━━━━━━━━━━━━ 9s 76ms/step - accuracy: 0.8750 - loss: 0.3387
 29/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8470 - loss: 0.4151 
 57/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8516 - loss: 0.4121
 85/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8536 - loss: 0.4106
113/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8551 - loss: 0.4118
124/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8557 - loss: 0.4119
Epoch 6/10

  1/124 ━━━━━━━━━━━━━━━━━━━━ 9s 77ms/step - accuracy: 0.8750 - loss: 0.3044
 29/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8600 - loss: 0.3743 
 58/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8629 - loss: 0.3765
 86/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8629 - loss: 0.3780
114/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8635 - loss: 0.3804
124/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8638 - loss: 0.3807
Epoch 7/10

  1/124 ━━━━━━━━━━━━━━━━━━━━ 9s 78ms/step - accuracy: 0.9062 - loss: 0.2879
 29/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8618 - loss: 0.3450 
 57/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8659 - loss: 0.3490
 86/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8674 - loss: 0.3515
114/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8687 - loss: 0.3539
124/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8692 - loss: 0.3543
Epoch 8/10

  1/124 ━━━━━━━━━━━━━━━━━━━━ 9s 79ms/step - accuracy: 0.8750 - loss: 0.2663
 30/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8694 - loss: 0.3196 
 58/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8739 - loss: 0.3254
 86/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8762 - loss: 0.3286
115/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8776 - loss: 0.3313
124/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8780 - loss: 0.3317
Epoch 9/10

  1/124 ━━━━━━━━━━━━━━━━━━━━ 9s 79ms/step - accuracy: 0.9375 - loss: 0.2468
 30/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8889 - loss: 0.2967 
 58/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8882 - loss: 0.3027
 86/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8880 - loss: 0.3060
115/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8880 - loss: 0.3090
124/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8881 - loss: 0.3095
Epoch 10/10

  1/124 ━━━━━━━━━━━━━━━━━━━━ 9s 80ms/step - accuracy: 0.9375 - loss: 0.2620
 30/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8989 - loss: 0.2791 
 57/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8965 - loss: 0.2852
 85/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8958 - loss: 0.2883
113/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8950 - loss: 0.2911
124/124 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8950 - loss: 0.2917


Datapoints:   0%|          | 0/34 [00:00<?, ?it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 38ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 20ms/step


Datapoints:   3%|▎         | 1/34 [00:00<00:04,  8.23it/s]WARNING:tensorflow:5 out of the last 5 calls to <function TensorFlowTrainer.make_predict_function.<locals>.one_step_on_data_distributed at 0x7f9084ae7760> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for  more details.

1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/stepWARNING:tensorflow:6 out of the last 6 calls to <function TensorFlowTrainer.make_predict_function.<locals>.one_step_on_data_distributed at 0x7f9084ae7760> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for  more details.

2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 21ms/step


Datapoints:   6%|▌         | 2/34 [00:00<00:03,  8.32it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 40ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:   9%|▉         | 3/34 [00:00<00:03,  8.27it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:  12%|█▏        | 4/34 [00:00<00:03,  8.41it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 20ms/step


Datapoints:  15%|█▍        | 5/34 [00:00<00:03,  8.45it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 38ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 23ms/step


Datapoints:  18%|█▊        | 6/34 [00:00<00:03,  8.31it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:  21%|██        | 7/34 [00:00<00:03,  8.37it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:  24%|██▎       | 8/34 [00:00<00:03,  8.41it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 22ms/step


Datapoints:  26%|██▋       | 9/34 [00:01<00:02,  8.40it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:  29%|██▉       | 10/34 [00:01<00:02,  8.46it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:  32%|███▏      | 11/34 [00:01<00:02,  8.49it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 22ms/step


Datapoints:  35%|███▌      | 12/34 [00:01<00:02,  8.46it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:  38%|███▊      | 13/34 [00:01<00:02,  8.49it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:  41%|████      | 14/34 [00:01<00:02,  8.51it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 22ms/step


Datapoints:  44%|████▍     | 15/34 [00:01<00:02,  8.44it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 39ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 20ms/step


Datapoints:  47%|████▋     | 16/34 [00:01<00:02,  7.98it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 39ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 20ms/step


Datapoints:  50%|█████     | 17/34 [00:02<00:02,  7.99it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 38ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 22ms/step


Datapoints:  53%|█████▎    | 18/34 [00:02<00:01,  8.02it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 38ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 20ms/step


Datapoints:  56%|█████▌    | 19/34 [00:02<00:01,  8.15it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:  59%|█████▉    | 20/34 [00:02<00:01,  8.25it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 23ms/step


Datapoints:  62%|██████▏   | 21/34 [00:02<00:01,  8.25it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 20ms/step


Datapoints:  65%|██████▍   | 22/34 [00:02<00:01,  8.32it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:  68%|██████▊   | 23/34 [00:02<00:01,  8.38it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 23ms/step


Datapoints:  71%|███████   | 24/34 [00:02<00:01,  8.33it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 39ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 20ms/step


Datapoints:  74%|███████▎  | 25/34 [00:03<00:01,  8.31it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:  76%|███████▋  | 26/34 [00:03<00:00,  8.37it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 23ms/step


Datapoints:  79%|███████▉  | 27/34 [00:03<00:00,  8.35it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:  82%|████████▏ | 28/34 [00:03<00:00,  8.39it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:  85%|████████▌ | 29/34 [00:03<00:00,  8.43it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 22ms/step


Datapoints:  88%|████████▊ | 30/34 [00:03<00:00,  8.40it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 20ms/step


Datapoints:  91%|█████████ | 31/34 [00:03<00:00,  8.43it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 38ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 20ms/step


Datapoints:  94%|█████████▍| 32/34 [00:03<00:00,  8.44it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 38ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 23ms/step


Datapoints:  97%|█████████▋| 33/34 [00:03<00:00,  8.35it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 38ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 20ms/step


Datapoints: 100%|██████████| 34/34 [00:04<00:00,  8.33it/s]
Datapoints: 100%|██████████| 34/34 [00:04<00:00,  8.34it/s]

CV Folds:  33%|███▎      | 1/3 [00:08<00:16,  8.43s/it](4020, 28, 28)
Epoch 1/10

  1/126 ━━━━━━━━━━━━━━━━━━━━ 1:29 718ms/step - accuracy: 0.1250 - loss: 2.3656
  2/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.1172 - loss: 2.3871    
 30/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.3890 - loss: 1.7658
 59/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.4861 - loss: 1.5064
 87/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.5357 - loss: 1.3644
116/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.5712 - loss: 1.2621
126/126 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.5820 - loss: 1.2309
Epoch 2/10

  1/126 ━━━━━━━━━━━━━━━━━━━━ 2s 16ms/step - accuracy: 0.8438 - loss: 0.4830
 30/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7651 - loss: 0.6239 
 59/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7679 - loss: 0.6215
 87/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7730 - loss: 0.6151
114/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7771 - loss: 0.6084
126/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7789 - loss: 0.6047
Epoch 3/10

  1/126 ━━━━━━━━━━━━━━━━━━━━ 9s 73ms/step - accuracy: 0.8750 - loss: 0.3561
 29/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8123 - loss: 0.5192 
 57/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8125 - loss: 0.5230
 84/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8147 - loss: 0.5202
112/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8166 - loss: 0.5171
126/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8177 - loss: 0.5147
Epoch 4/10

  1/126 ━━━━━━━━━━━━━━━━━━━━ 8s 72ms/step - accuracy: 0.9375 - loss: 0.3082
 29/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8373 - loss: 0.4751 
 57/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8336 - loss: 0.4832
 85/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8346 - loss: 0.4807
113/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8357 - loss: 0.4776
126/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8366 - loss: 0.4753
Epoch 5/10

  1/126 ━━━━━━━━━━━━━━━━━━━━ 9s 75ms/step - accuracy: 0.8750 - loss: 0.3008
 30/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8345 - loss: 0.4363 
 59/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8376 - loss: 0.4389
 87/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8409 - loss: 0.4357
115/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8434 - loss: 0.4331
126/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8446 - loss: 0.4315
Epoch 6/10

  1/126 ━━━━━━━━━━━━━━━━━━━━ 9s 76ms/step - accuracy: 0.8750 - loss: 0.2896
 30/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8406 - loss: 0.4049 
 58/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8457 - loss: 0.4070
 85/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8504 - loss: 0.4039
112/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8532 - loss: 0.4019
126/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8547 - loss: 0.4001
Epoch 7/10

  1/126 ━━━━━━━━━━━━━━━━━━━━ 9s 72ms/step - accuracy: 0.8750 - loss: 0.2921
 30/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8363 - loss: 0.3789 
 58/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8439 - loss: 0.3806
 86/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8504 - loss: 0.3771
114/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8543 - loss: 0.3750
126/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8561 - loss: 0.3735
Epoch 8/10

  1/126 ━━━━━━━━━━━━━━━━━━━━ 9s 75ms/step - accuracy: 0.8750 - loss: 0.2784
 30/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8489 - loss: 0.3545 
 58/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8549 - loss: 0.3568
 87/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8606 - loss: 0.3540
116/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8642 - loss: 0.3520
126/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8655 - loss: 0.3509
Epoch 9/10

  1/126 ━━━━━━━━━━━━━━━━━━━━ 9s 76ms/step - accuracy: 0.8750 - loss: 0.2839
 29/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8580 - loss: 0.3369 
 56/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8627 - loss: 0.3374
 84/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8681 - loss: 0.3336
112/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8717 - loss: 0.3318
126/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8734 - loss: 0.3304
Epoch 10/10

  1/126 ━━━━━━━━━━━━━━━━━━━━ 8s 70ms/step - accuracy: 0.8750 - loss: 0.2753
 29/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8631 - loss: 0.3168 
 57/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8698 - loss: 0.3173
 86/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8758 - loss: 0.3139
114/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8794 - loss: 0.3123
126/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8808 - loss: 0.3112


Datapoints:   0%|          | 0/33 [00:00<?, ?it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 38ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:   3%|▎         | 1/33 [00:00<00:03,  8.30it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 38ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:   6%|▌         | 2/33 [00:00<00:03,  8.39it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:   9%|▉         | 3/33 [00:00<00:03,  8.35it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:  12%|█▏        | 4/33 [00:00<00:03,  8.44it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:  15%|█▌        | 5/33 [00:00<00:03,  8.50it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:  18%|█▊        | 6/33 [00:00<00:03,  8.46it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:  21%|██        | 7/33 [00:00<00:03,  8.45it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:  24%|██▍       | 8/33 [00:00<00:02,  8.48it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step


Datapoints:  27%|██▋       | 9/33 [00:01<00:02,  8.42it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 20ms/step


Datapoints:  30%|███       | 10/33 [00:01<00:02,  8.47it/s]
1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 20ms/step


Datapoints:  33%|███▎      | 11/33 [00:01<00:02,  8.47it/s]
Datapoints: 100%|██████████| 33/33 [00:04<00:00,  7.71it/s]

CV Folds:  67%|██████▋   | 2/3 [00:16<00:08,  8.49s/it]
(4020, 28, 28)
Epoch 1/10
126/126 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.5833 - loss: 1.2218
Epoch 2/10
126/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7896 - loss: 0.6034
Epoch 3/10
126/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8226 - loss: 0.5135
Epoch 4/10
126/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8441 - loss: 0.4580
Epoch 5/10
126/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8581 - loss: 0.4210
Epoch 6/10
126/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8713 - loss: 0.3865
Epoch 7/10
126/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8813 - loss: 0.3593
Epoch 8/10
126/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8875 - loss: 0.3369
Epoch 9/10
126/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8941 - loss: 0.3157
Epoch 10/10
126/126 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8987 - loss: 0.2994

Datapoints: 100%|██████████| 33/33 [00:03<00:00,  8.36it/s]

CV Folds: 100%|██████████| 3/3 [00:25<00:00,  8.35s/it]

We can now look at the results per group:

cv_results["test__single__accuracy"]
[[0.8333333333333334, 0.8333333333333334, 0.9166666666666666, 0.7666666666666667, 0.8166666666666667, 0.8166666666666667, 0.8, 0.8333333333333334, 0.8333333333333334, 0.85, 0.7666666666666667, 0.8333333333333334, 0.8833333333333333, 0.8666666666666667, 0.8833333333333333, 0.8666666666666667, 0.8333333333333334, 0.8333333333333334, 0.8, 0.8333333333333334, 0.7833333333333333, 0.7333333333333333, 0.8, 0.8333333333333334, 0.8333333333333334, 0.8833333333333333, 0.8666666666666667, 0.85, 0.85, 0.75, 0.75, 0.85, 0.8166666666666667, 0.8833333333333333], [0.8, 0.9, 0.75, 0.8666666666666667, 0.8833333333333333, 0.8166666666666667, 0.8833333333333333, 0.8833333333333333, 0.8333333333333334, 0.8, 0.85, 0.8, 0.8166666666666667, 0.8, 0.7833333333333333, 0.8, 0.8166666666666667, 0.7666666666666667, 0.85, 0.75, 0.8, 0.8333333333333334, 0.7666666666666667, 0.9, 0.8, 0.7833333333333333, 0.8166666666666667, 0.8666666666666667, 0.8833333333333333, 0.9166666666666666, 0.7333333333333333, 0.7833333333333333, 0.9], [0.7833333333333333, 0.9333333333333333, 0.75, 0.8333333333333334, 0.8833333333333333, 0.8666666666666667, 0.8833333333333333, 0.9, 0.9, 0.9, 0.9166666666666666, 0.85, 0.8666666666666667, 0.8833333333333333, 0.85, 0.8, 0.9166666666666666, 0.75, 0.7166666666666667, 0.8666666666666667, 0.8166666666666667, 0.75, 0.7833333333333333, 0.7333333333333333, 0.8666666666666667, 0.8833333333333333, 0.9166666666666666, 0.8, 0.8666666666666667, 0.9, 0.85, 0.85, 0.9]]

The aggregated accuracy, averaged first within each group and then over all groups of a fold:

cv_results["test__agg__accuracy"]
array([0.82892157, 0.82525253, 0.84747475])

And the overall accuracy as the average over all samples of all groups within a fold:

cv_results["test__agg__per_sample__accuracy"]
array([0.82892157, 0.82525253, 0.84747475])
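Note that the two aggregate arrays are identical here. This is no accident: every group in our dataset contains the same number of images (60), so the unweighted mean over group accuracies equals the pooled per-sample accuracy. A minimal sketch of this equivalence, using hypothetical per-group values (not taken from the run above):

```python
import numpy as np

# Hypothetical per-group accuracies for one fold (each group has 60 samples).
group_acc = np.array([0.85, 0.80, 0.90, 0.75])
group_sizes = np.array([60, 60, 60, 60])

# "agg" style: unweighted mean over group-level accuracies.
agg = group_acc.mean()

# "per_sample" style: pool all samples, i.e. a size-weighted mean.
per_sample = np.average(group_acc, weights=group_sizes)

# With equal group sizes the two aggregation strategies coincide.
assert np.isclose(agg, per_sample)
```

With unequal group sizes the two values would differ, and which one you report depends on whether groups or individual samples are your unit of interest.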

Total running time of the script: (0 minutes 38.928 seconds)

Estimated memory usage: 703 MB

Gallery generated by Sphinx-Gallery