Tuning Multiple Hyperparameter Grids

This example uses the ‘fmri’ dataset and performs binary classification with a Support Vector Machine classifier while tuning multiple hyperparameter grids at the same time.

References

Waskom, M.L., Frank, M.C., Wagner, A.D. (2016). Adaptive engagement of cognitive control in context-dependent decision-making. Cerebral Cortex.

# Authors: Federico Raimondo <f.raimondo@fz-juelich.de>
#
# License: AGPL
import numpy as np
from seaborn import load_dataset

from julearn import run_cross_validation
from julearn.utils import configure_logging
from julearn.pipeline import PipelineCreator

Set the logging level to info to see extra information

configure_logging(level="INFO")
2023-07-19 12:41:58,068 - julearn - INFO - ===== Lib Versions =====
2023-07-19 12:41:58,068 - julearn - INFO - numpy: 1.25.1
2023-07-19 12:41:58,068 - julearn - INFO - scipy: 1.11.1
2023-07-19 12:41:58,068 - julearn - INFO - sklearn: 1.3.0
2023-07-19 12:41:58,068 - julearn - INFO - pandas: 2.0.3
2023-07-19 12:41:58,068 - julearn - INFO - julearn: 0.3.1.dev1
2023-07-19 12:41:58,068 - julearn - INFO - ========================

Set the random seed so the example always gives the same result.

Load the dataset

df_fmri = load_dataset("fmri")
print(df_fmri.head())
  subject  timepoint event    region    signal
0     s13         18  stim  parietal -0.017552
1      s5         14  stim  parietal -0.080883
2     s12         18  stim  parietal -0.081033
3     s11         18  stim  parietal -0.046134
4     s10         18  stim  parietal -0.037970

Pivot the dataframe into the right format: one row per subject, timepoint and event, with one column per region.

df_fmri = df_fmri.pivot(
    index=["subject", "timepoint", "event"], columns="region", values="signal"
)

df_fmri = df_fmri.reset_index()
print(df_fmri.head())

X = ["frontal", "parietal"]
y = "event"
region subject  timepoint event   frontal  parietal
0           s0          0   cue  0.007766 -0.006899
1           s0          0  stim -0.021452 -0.039327
2           s0          1   cue  0.016440  0.000300
3           s0          1  stim -0.021054 -0.035735
4           s0          2   cue  0.024296  0.033220

Let's do a first attempt and use a linear SVM with the default parameters.

creator = PipelineCreator(problem_type="classification")
creator.add("zscore")
creator.add("svm", kernel="linear")

scores = run_cross_validation(X=X, y=y, data=df_fmri, model=creator)

print(scores["test_score"].mean())
2023-07-19 12:41:58,085 - julearn - INFO - Adding step zscore that applies to ColumnTypes<types={'continuous'}; pattern=(?:__:type:__continuous)>
2023-07-19 12:41:58,086 - julearn - INFO - Step added
2023-07-19 12:41:58,086 - julearn - INFO - Adding step svm that applies to ColumnTypes<types={'continuous'}; pattern=(?:__:type:__continuous)>
2023-07-19 12:41:58,086 - julearn - INFO - Setting hyperparameter kernel = linear
2023-07-19 12:41:58,086 - julearn - INFO - Step added
2023-07-19 12:41:58,086 - julearn - INFO - ==== Input Data ====
2023-07-19 12:41:58,086 - julearn - INFO - Using dataframe as input
2023-07-19 12:41:58,086 - julearn - INFO -      Features: ['frontal', 'parietal']
2023-07-19 12:41:58,086 - julearn - INFO -      Target: event
2023-07-19 12:41:58,086 - julearn - INFO -      Expanded features: ['frontal', 'parietal']
2023-07-19 12:41:58,086 - julearn - INFO -      X_types:{}
2023-07-19 12:41:58,086 - julearn - WARNING - The following columns are not defined in X_types: ['frontal', 'parietal']. They will be treated as continuous.
/home/runner/work/julearn/julearn/julearn/utils/logging.py:238: RuntimeWarning: The following columns are not defined in X_types: ['frontal', 'parietal']. They will be treated as continuous.
  warn(msg, category=category)
2023-07-19 12:41:58,087 - julearn - INFO - ====================
2023-07-19 12:41:58,087 - julearn - INFO -
2023-07-19 12:41:58,088 - julearn - INFO - = Model Parameters =
2023-07-19 12:41:58,088 - julearn - INFO - ====================
2023-07-19 12:41:58,088 - julearn - INFO -
2023-07-19 12:41:58,088 - julearn - INFO - = Data Information =
2023-07-19 12:41:58,088 - julearn - INFO -      Problem type: classification
2023-07-19 12:41:58,088 - julearn - INFO -      Number of samples: 532
2023-07-19 12:41:58,088 - julearn - INFO -      Number of features: 2
2023-07-19 12:41:58,088 - julearn - INFO - ====================
2023-07-19 12:41:58,088 - julearn - INFO -
2023-07-19 12:41:58,089 - julearn - INFO -      Number of classes: 2
2023-07-19 12:41:58,089 - julearn - INFO -      Target type: object
2023-07-19 12:41:58,089 - julearn - INFO -      Class distributions: event
cue     266
stim    266
Name: count, dtype: int64
2023-07-19 12:41:58,090 - julearn - INFO - Using outer CV scheme KFold(n_splits=5, random_state=None, shuffle=False)
2023-07-19 12:41:58,090 - julearn - INFO - Binary classification problem detected.
0.5939164168576971

Now let's tune this SVM a bit. We will use a grid search to tune the regularization parameter C and the kernel. We will also tune gamma, but since gamma is only used by the rbf kernel, we will use a different grid for the rbf kernel.

To specify two different sets of parameters for the same step, we can explicitly name the step. This is done by passing the name parameter to the add method.

creator = PipelineCreator(problem_type="classification")
creator.add("zscore")
creator.add("svm", kernel="linear", C=[0.01, 0.1], name="svm")
creator.add(
    "svm",
    kernel="rbf",
    C=[0.01, 0.1],
    gamma=["scale", "auto", 1e-2, 1e-3],
    name="svm",
)

search_params = {
    "kind": "grid",
    "cv": 2,  # to speed up the example
}

scores, estimator = run_cross_validation(
    X=X,
    y=y,
    data=df_fmri,
    model=creator,
    search_params=search_params,
    return_estimator="final",
)

print(scores["test_score"].mean())
2023-07-19 12:41:58,161 - julearn - INFO - Adding step zscore that applies to ColumnTypes<types={'continuous'}; pattern=(?:__:type:__continuous)>
2023-07-19 12:41:58,161 - julearn - INFO - Step added
2023-07-19 12:41:58,162 - julearn - INFO - Adding step svm that applies to ColumnTypes<types={'continuous'}; pattern=(?:__:type:__continuous)>
2023-07-19 12:41:58,162 - julearn - INFO - Setting hyperparameter kernel = linear
2023-07-19 12:41:58,162 - julearn - INFO - Tuning hyperparameter C = [0.01, 0.1]
2023-07-19 12:41:58,162 - julearn - INFO - Step added
2023-07-19 12:41:58,162 - julearn - INFO - Adding step svm that applies to ColumnTypes<types={'continuous'}; pattern=(?:__:type:__continuous)>
2023-07-19 12:41:58,162 - julearn - INFO - Setting hyperparameter kernel = rbf
2023-07-19 12:41:58,162 - julearn - INFO - Tuning hyperparameter C = [0.01, 0.1]
2023-07-19 12:41:58,162 - julearn - INFO - Tuning hyperparameter gamma = ['scale', 'auto', 0.01, 0.001]
2023-07-19 12:41:58,162 - julearn - INFO - Step added
2023-07-19 12:41:58,162 - julearn - INFO - ==== Input Data ====
2023-07-19 12:41:58,162 - julearn - INFO - Using dataframe as input
2023-07-19 12:41:58,162 - julearn - INFO -      Features: ['frontal', 'parietal']
2023-07-19 12:41:58,162 - julearn - INFO -      Target: event
2023-07-19 12:41:58,162 - julearn - INFO -      Expanded features: ['frontal', 'parietal']
2023-07-19 12:41:58,162 - julearn - INFO -      X_types:{}
2023-07-19 12:41:58,162 - julearn - WARNING - The following columns are not defined in X_types: ['frontal', 'parietal']. They will be treated as continuous.
/home/runner/work/julearn/julearn/julearn/utils/logging.py:238: RuntimeWarning: The following columns are not defined in X_types: ['frontal', 'parietal']. They will be treated as continuous.
  warn(msg, category=category)
2023-07-19 12:41:58,163 - julearn - INFO - ====================
2023-07-19 12:41:58,163 - julearn - INFO -
2023-07-19 12:41:58,164 - julearn - INFO - = Model Parameters =
2023-07-19 12:41:58,164 - julearn - INFO - Tuning hyperparameters using grid
2023-07-19 12:41:58,164 - julearn - INFO - Hyperparameters:
2023-07-19 12:41:58,164 - julearn - INFO -      svm__C: [0.01, 0.1]
2023-07-19 12:41:58,164 - julearn - INFO - Using inner CV scheme KFold(n_splits=2, random_state=None, shuffle=False)
2023-07-19 12:41:58,164 - julearn - INFO - Search Parameters:
2023-07-19 12:41:58,164 - julearn - INFO -      cv: KFold(n_splits=2, random_state=None, shuffle=False)
2023-07-19 12:41:58,165 - julearn - INFO - ====================
2023-07-19 12:41:58,165 - julearn - INFO -
2023-07-19 12:41:58,165 - julearn - INFO - = Model Parameters =
2023-07-19 12:41:58,165 - julearn - INFO - Tuning hyperparameters using grid
2023-07-19 12:41:58,165 - julearn - INFO - Hyperparameters:
2023-07-19 12:41:58,165 - julearn - INFO -      svm__C: [0.01, 0.1]
2023-07-19 12:41:58,165 - julearn - INFO -      svm__gamma: ['scale', 'auto', 0.01, 0.001]
2023-07-19 12:41:58,165 - julearn - INFO - Using inner CV scheme KFold(n_splits=2, random_state=None, shuffle=False)
2023-07-19 12:41:58,165 - julearn - INFO - Search Parameters:
2023-07-19 12:41:58,166 - julearn - INFO -      cv: KFold(n_splits=2, random_state=None, shuffle=False)
2023-07-19 12:41:58,166 - julearn - INFO - ====================
2023-07-19 12:41:58,166 - julearn - INFO -
2023-07-19 12:41:58,166 - julearn - INFO - = Model Parameters =
2023-07-19 12:41:58,166 - julearn - INFO - Tuning hyperparameters using grid
2023-07-19 12:41:58,166 - julearn - INFO - Hyperparameters list:
2023-07-19 12:41:58,166 - julearn - INFO -      Set 0
2023-07-19 12:41:58,166 - julearn - INFO -              svm__C: [0.01, 0.1]
2023-07-19 12:41:58,166 - julearn - INFO -              set_column_types: [SetColumnTypes(X_types={})]
2023-07-19 12:41:58,166 - julearn - INFO -              svm: [SVC(kernel='linear')]
2023-07-19 12:41:58,166 - julearn - INFO -      Set 1
2023-07-19 12:41:58,167 - julearn - INFO -              svm__C: [0.01, 0.1]
2023-07-19 12:41:58,167 - julearn - INFO -              svm__gamma: ['scale', 'auto', 0.01, 0.001]
2023-07-19 12:41:58,167 - julearn - INFO -              set_column_types: [SetColumnTypes(X_types={})]
2023-07-19 12:41:58,167 - julearn - INFO -              svm: [SVC()]
2023-07-19 12:41:58,167 - julearn - INFO - Using inner CV scheme KFold(n_splits=2, random_state=None, shuffle=False)
2023-07-19 12:41:58,167 - julearn - INFO - Search Parameters:
2023-07-19 12:41:58,167 - julearn - INFO -      cv: KFold(n_splits=2, random_state=None, shuffle=False)
2023-07-19 12:41:58,167 - julearn - INFO - ====================
2023-07-19 12:41:58,167 - julearn - INFO -
2023-07-19 12:41:58,167 - julearn - INFO - = Data Information =
2023-07-19 12:41:58,167 - julearn - INFO -      Problem type: classification
2023-07-19 12:41:58,168 - julearn - INFO -      Number of samples: 532
2023-07-19 12:41:58,168 - julearn - INFO -      Number of features: 2
2023-07-19 12:41:58,168 - julearn - INFO - ====================
2023-07-19 12:41:58,168 - julearn - INFO -
2023-07-19 12:41:58,168 - julearn - INFO -      Number of classes: 2
2023-07-19 12:41:58,168 - julearn - INFO -      Target type: object
2023-07-19 12:41:58,169 - julearn - INFO -      Class distributions: event
cue     266
stim    266
Name: count, dtype: int64
2023-07-19 12:41:58,169 - julearn - INFO - Using outer CV scheme KFold(n_splits=5, random_state=None, shuffle=False)
2023-07-19 12:41:58,169 - julearn - INFO - Binary classification problem detected.
0.7087109857168048

It seems that we might have found a better model, but which one is it?

{'set_column_types': SetColumnTypes(X_types={}), 'svm': SVC(C=0.1), 'svm__C': 0.1, 'svm__gamma': 'scale'}
0.5

Total running time of the script: (0 minutes 1.689 seconds)
