Simple Binary Classification
This example uses the ‘iris’ dataset and performs a simple binary classification using a Support Vector Machine classifier.
# Authors: Federico Raimondo <f.raimondo@fz-juelich.de>
#
# License: AGPL
from seaborn import load_dataset
from julearn import run_cross_validation
from julearn.utils import configure_logging
Set the logging level to info to see extra information.
configure_logging(level='INFO')
2022-12-08 10:45:28,956 - julearn - INFO - ===== Lib Versions =====
2022-12-08 10:45:28,956 - julearn - INFO - numpy: 1.23.5
2022-12-08 10:45:28,956 - julearn - INFO - scipy: 1.9.3
2022-12-08 10:45:28,956 - julearn - INFO - sklearn: 1.0.2
2022-12-08 10:45:28,956 - julearn - INFO - pandas: 1.4.4
2022-12-08 10:45:28,956 - julearn - INFO - julearn: 0.2.7
2022-12-08 10:45:28,956 - julearn - INFO - ========================
df_iris = load_dataset('iris')
The dataset has three kinds of species. We will keep two of them to perform a binary classification.
df_iris = df_iris[df_iris['species'].isin(['versicolor', 'virginica'])]
As features, we will use the sepal length, width and petal length. We will try to predict the species.
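The code cell that produces the log and scores below is not included in this page render. A minimal sketch of the call, assuming julearn's run_cross_validation API and reconstructing the names X and y from the log, could look like this:

# Column names of the features and the target in the iris dataframe
X = ['sepal_length', 'sepal_width', 'petal_length']
y = 'species'

# Cross-validate an SVM classifier with julearn's defaults
# (interpreted as 5 repetitions of 5-fold CV, as the log below reports)
scores = run_cross_validation(X=X, y=y, data=df_iris, model='svm')

# One test score per fold and repetition (25 values in total)
print(scores['test_score'])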
2022-12-08 10:45:28,960 - julearn - INFO - Using default CV
2022-12-08 10:45:28,960 - julearn - INFO - ==== Input Data ====
2022-12-08 10:45:28,960 - julearn - INFO - Using dataframe as input
2022-12-08 10:45:28,960 - julearn - INFO - Features: ['sepal_length', 'sepal_width', 'petal_length']
2022-12-08 10:45:28,960 - julearn - INFO - Target: species
2022-12-08 10:45:28,960 - julearn - INFO - Expanded X: ['sepal_length', 'sepal_width', 'petal_length']
2022-12-08 10:45:28,960 - julearn - INFO - Expanded Confounds: []
2022-12-08 10:45:28,961 - julearn - INFO - ====================
2022-12-08 10:45:28,961 - julearn - INFO -
2022-12-08 10:45:28,961 - julearn - INFO - ====== Model ======
2022-12-08 10:45:28,961 - julearn - INFO - Obtaining model by name: svm
2022-12-08 10:45:28,961 - julearn - INFO - ===================
2022-12-08 10:45:28,961 - julearn - INFO -
2022-12-08 10:45:28,961 - julearn - INFO - CV interpreted as RepeatedKFold with 5 repetitions of 5 folds
0 0.90
1 0.90
2 0.90
3 1.00
4 0.85
5 1.00
6 0.80
7 0.90
8 0.95
9 0.95
10 0.90
11 0.95
12 0.95
13 0.85
14 0.80
15 0.95
16 1.00
17 0.90
18 0.80
19 0.95
20 0.95
21 0.90
22 0.85
23 0.90
24 0.90
Name: test_score, dtype: float64
Additionally, we can choose to assess the performance of the model using different scoring functions.
For example, we might have an unbalanced dataset:
df_unbalanced = df_iris[20:] # drop the first 20 versicolor samples
print(df_unbalanced['species'].value_counts())
virginica 50
versicolor 30
Name: species, dtype: int64
If we compute the plain accuracy, it does not account for this imbalance. A more suitable metric is the balanced_accuracy. See the scikit-learn documentation on Balanced Accuracy for more information.
We will also set the random seed so we always split the data in the same way.
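Again, the code cell itself is missing here; a sketch, assuming julearn's seed and scoring parameters and scikit-learn's cross_validate naming convention for the result columns, could be:

# Evaluate the unbalanced data with both plain and balanced accuracy.
# seed=42 fixes the random seed so the CV splits are reproducible.
scores = run_cross_validation(
    X=X, y=y, data=df_unbalanced, model='svm', seed=42,
    scoring=['accuracy', 'balanced_accuracy'])

# Mean scores across all folds: plain accuracy first, balanced accuracy second
print(scores['test_accuracy'].mean())
print(scores['test_balanced_accuracy'].mean())

The two numbers printed below would then be the mean accuracy and the mean balanced accuracy, the latter slightly lower on the unbalanced data.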
2022-12-08 10:45:29,330 - julearn - INFO - Setting random seed to 42
2022-12-08 10:45:29,330 - julearn - INFO - Using default CV
2022-12-08 10:45:29,330 - julearn - INFO - ==== Input Data ====
2022-12-08 10:45:29,330 - julearn - INFO - Using dataframe as input
2022-12-08 10:45:29,330 - julearn - INFO - Features: ['sepal_length', 'sepal_width', 'petal_length']
2022-12-08 10:45:29,330 - julearn - INFO - Target: species
2022-12-08 10:45:29,330 - julearn - INFO - Expanded X: ['sepal_length', 'sepal_width', 'petal_length']
2022-12-08 10:45:29,330 - julearn - INFO - Expanded Confounds: []
2022-12-08 10:45:29,331 - julearn - INFO - ====================
2022-12-08 10:45:29,331 - julearn - INFO -
2022-12-08 10:45:29,331 - julearn - INFO - ====== Model ======
2022-12-08 10:45:29,331 - julearn - INFO - Obtaining model by name: svm
2022-12-08 10:45:29,331 - julearn - INFO - ===================
2022-12-08 10:45:29,331 - julearn - INFO -
2022-12-08 10:45:29,331 - julearn - INFO - CV interpreted as RepeatedKFold with 5 repetitions of 5 folds
0.895
0.8708886668886668
Other kinds of metrics allow us to evaluate how well our model detects specific targets. Suppose we want to create a model that correctly identifies the versicolor samples.
In this case, we might want to evaluate the precision score: the ratio of true positives (tp) over all predicted positives (true plus false positives). See the scikit-learn documentation on Precision for more information.
For this metric to work, we need to define which labels count as positive. In this example, we are interested in detecting versicolor.
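The corresponding code cell is also not shown; based on the pos_labels entry in the log below, a sketch could be (using df_unbalanced is an assumption, as the input data is not recoverable from this page):

# Score precision, treating 'versicolor' as the positive class.
scores = run_cross_validation(
    X=X, y=y, data=df_unbalanced, model='svm', seed=42,
    pos_labels=['versicolor'], scoring='precision')

# Mean precision across folds; the 'test_score' column name is assumed,
# as in the first example above
print(scores['test_score'].mean())

The single value printed below would then be the mean precision across folds.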
2022-12-08 10:45:29,834 - julearn - INFO - Setting random seed to 42
2022-12-08 10:45:29,834 - julearn - INFO - Using default CV
2022-12-08 10:45:29,834 - julearn - INFO - ==== Input Data ====
2022-12-08 10:45:29,834 - julearn - INFO - Using dataframe as input
2022-12-08 10:45:29,834 - julearn - INFO - Features: ['sepal_length', 'sepal_width', 'petal_length']
2022-12-08 10:45:29,834 - julearn - INFO - Target: species
2022-12-08 10:45:29,834 - julearn - INFO - Expanded X: ['sepal_length', 'sepal_width', 'petal_length']
2022-12-08 10:45:29,834 - julearn - INFO - Expanded Confounds: []
2022-12-08 10:45:29,835 - julearn - INFO - Setting the following as positive labels ['versicolor']
2022-12-08 10:45:29,835 - julearn - INFO - ====================
2022-12-08 10:45:29,835 - julearn - INFO -
2022-12-08 10:45:29,836 - julearn - INFO - ====== Model ======
2022-12-08 10:45:29,836 - julearn - INFO - Obtaining model by name: svm
2022-12-08 10:45:29,836 - julearn - INFO - ===================
2022-12-08 10:45:29,836 - julearn - INFO -
2022-12-08 10:45:29,836 - julearn - INFO - CV interpreted as RepeatedKFold with 5 repetitions of 5 folds
0.9223333333333333
Total running time of the script: (0 minutes 1.263 seconds)