Note

This page is reference documentation. It only explains the function signature, not how to use it. Please refer to the What you really need to know section for the big picture.

julearn.stats.corrected_ttest

julearn.stats.corrected_ttest(*scores, df=None, method='bonferroni', alternative='two-sided')

Perform corrected t-test on the scores of two or more models.

Parameters:
*scores : pd.DataFrame

DataFrames containing the scores of the models. The DataFrames must be the output of run_cross_validation.

df : int, optional

Degrees of freedom.

method : str

Method used for testing and adjustment of p-values. Can be either the full name or the initial letters. Available methods are:

  • bonferroni : one-step correction

  • sidak : one-step correction

  • holm-sidak : step down method using Sidak adjustments

  • holm : step-down method using Bonferroni adjustments

  • simes-hochberg : step-up method (independent)

  • hommel : closed method based on Simes tests (non-negative)

  • fdr_bh : Benjamini/Hochberg (non-negative)

  • fdr_by : Benjamini/Yekutieli (negative)

  • fdr_tsbh : two stage fdr correction (non-negative)

  • fdr_tsbky : two stage fdr correction (non-negative)

alternative : {‘two-sided’, ‘less’, ‘greater’}, optional

Defines the alternative hypothesis. The following options are available (default is ‘two-sided’):

  • ‘two-sided’: the means of the distributions underlying the samples are unequal.

  • ‘less’: the mean of the distribution underlying the first sample is less than the mean of the distribution underlying the second sample.

  • ‘greater’: the mean of the distribution underlying the first sample is greater than the mean of the distribution underlying the second sample.
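
The sketch below illustrates a typical call. It assumes two score DataFrames obtained from run_cross_validation on identical cross-validation splits; the problem_type argument, the model names ("svm", "rf"), and the "model" labelling column are assumptions for illustration and may differ between julearn versions.

# Minimal usage sketch: compare two models with the corrected t-test.
# Assumptions: problem_type is required by the installed julearn version,
# and a "model" column is used to label each score DataFrame.
from sklearn.datasets import load_iris
from sklearn.model_selection import RepeatedKFold

from julearn import run_cross_validation
from julearn.stats import corrected_ttest

data = load_iris(as_frame=True).frame
data.columns = [c.replace(" (cm)", "").replace(" ", "_") for c in data.columns]
X = list(data.columns[:-1])
y = "target"

# Use the same CV splitter for both models so the fold-wise scores are paired.
cv = RepeatedKFold(n_splits=5, n_repeats=5, random_state=42)

scores_svm = run_cross_validation(
    X=X, y=y, data=data, model="svm", problem_type="classification", cv=cv
)
scores_rf = run_cross_validation(
    X=X, y=y, data=data, model="rf", problem_type="classification", cv=cv
)

# Label each DataFrame so the output can distinguish the models (assumed
# convention; adjust if your julearn version already fills this column).
scores_svm["model"] = "svm"
scores_rf["model"] = "rf"

# Corrected t-test with Bonferroni adjustment of the p-values.
stats_df = corrected_ttest(scores_svm, scores_rf, method="bonferroni")
print(stats_df)

Reusing the same cv object for every model keeps the per-fold scores paired across models, which is what the corrected t-test expects.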

Examples using julearn.stats.corrected_ttest

Simple Model Comparison

Model Comparison