# PyCM Document¶

## Overview¶

PyCM is a multi-class confusion matrix library written in Python that accepts both input vectors and a direct matrix. It is a tool for post-classification model evaluation that supports most class and overall statistics parameters. PyCM is the Swiss Army knife of confusion matrices, targeted mainly at data scientists who need a broad array of metrics for predictive models and an accurate evaluation of a large variety of classifiers.

Fig1. ConfusionMatrix Block Diagram

## Installation¶

⚠️ PyCM 2.4 is the last version to support Python 2.7 & Python 3.4

### Source code¶

• Download Version 2.4 or Latest Source
• Run pip install -r requirements.txt or pip3 install -r requirements.txt (Need root access)
• Run python3 setup.py install or python setup.py install (Need root access)

### Easy install¶

• Run easy_install --upgrade pycm (Need root access)

## Usage¶

### From vector¶

In :
from pycm import *

In :
y_actu = [2, 0, 2, 2, 0, 1, 1, 2, 2, 0, 1, 2]
y_pred = [0, 0, 2, 1, 0, 2, 1, 0, 2, 0, 2, 2]

In :
cm = ConfusionMatrix(y_actu, y_pred, digit=5)

• Notice : digit (the number of digits to the right of the decimal point) is new in version 0.6 (default value : 5)
• It affects only printing and saving
In :
cm

Out:
pycm.ConfusionMatrix(classes: [0, 1, 2])
In :
cm.actual_vector

Out:
[2, 0, 2, 2, 0, 1, 1, 2, 2, 0, 1, 2]
In :
cm.predict_vector

Out:
[0, 0, 2, 1, 0, 2, 1, 0, 2, 0, 2, 2]
In :
cm.classes

Out:
[0, 1, 2]
In :
cm.class_stat

Out:
{'ACC': {0: 0.8333333333333334, 1: 0.75, 2: 0.5833333333333334},
'AGF': {0: 0.9135962935560564, 1: 0.5399492471560389, 2: 0.5515973485146916},
'AGM': {0: 0.837285964012303, 1: 0.6919986974962765, 2: 0.6071224016819726},
'AM': {0: 2, 1: -1, 2: -1},
'AUC': {0: 0.8888888888888888, 1: 0.611111111111111, 2: 0.5833333333333333},
'AUCI': {0: 'Very Good', 1: 'Fair', 2: 'Poor'},
'AUPR': {0: 0.8, 1: 0.41666666666666663, 2: 0.55},
'BCD': {0: 0.08333333333333333,
1: 0.041666666666666664,
2: 0.041666666666666664},
'BM': {0: 0.7777777777777777, 1: 0.2222222222222221, 2: 0.16666666666666652},
'CEN': {0: 0.25, 1: 0.49657842846620864, 2: 0.6044162769630221},
'DOR': {0: 'None', 1: 3.999999999999998, 2: 1.9999999999999998},
'DP': {0: 'None', 1: 0.331933069996499, 2: 0.16596653499824957},
'DPI': {0: 'None', 1: 'Poor', 2: 'Poor'},
'ERR': {0: 0.16666666666666663, 1: 0.25, 2: 0.41666666666666663},
'F0.5': {0: 0.6521739130434783,
1: 0.45454545454545453,
2: 0.5769230769230769},
'F1': {0: 0.75, 1: 0.4, 2: 0.5454545454545454},
'F2': {0: 0.8823529411764706, 1: 0.35714285714285715, 2: 0.5172413793103449},
'FDR': {0: 0.4, 1: 0.5, 2: 0.4},
'FN': {0: 0, 1: 2, 2: 3},
'FNR': {0: 0.0, 1: 0.6666666666666667, 2: 0.5},
'FOR': {0: 0.0, 1: 0.19999999999999996, 2: 0.4285714285714286},
'FP': {0: 2, 1: 1, 2: 2},
'FPR': {0: 0.2222222222222222,
1: 0.11111111111111116,
2: 0.33333333333333337},
'G': {0: 0.7745966692414834, 1: 0.408248290463863, 2: 0.5477225575051661},
'GI': {0: 0.7777777777777777, 1: 0.2222222222222221, 2: 0.16666666666666652},
'GM': {0: 0.8819171036881969, 1: 0.5443310539518174, 2: 0.5773502691896257},
'IBA': {0: 0.9506172839506174, 1: 0.1316872427983539, 2: 0.2777777777777778},
'IS': {0: 1.263034405833794, 1: 1.0, 2: 0.2630344058337938},
'J': {0: 0.6, 1: 0.25, 2: 0.375},
'LS': {0: 2.4, 1: 2.0, 2: 1.2},
'MCC': {0: 0.6831300510639732, 1: 0.25819888974716115, 2: 0.1690308509457033},
'MCCI': {0: 'Moderate', 1: 'Negligible', 2: 'Negligible'},
'MCEN': {0: 0.2643856189774724, 1: 0.5, 2: 0.6875},
'MK': {0: 0.6000000000000001, 1: 0.30000000000000004, 2: 0.17142857142857126},
'N': {0: 9, 1: 9, 2: 6},
'NLR': {0: 0.0, 1: 0.7500000000000001, 2: 0.75},
'NLRI': {0: 'Good', 1: 'Negligible', 2: 'Negligible'},
'NPV': {0: 1.0, 1: 0.8, 2: 0.5714285714285714},
'OC': {0: 1.0, 1: 0.5, 2: 0.6},
'OOC': {0: 0.7745966692414834, 1: 0.4082482904638631, 2: 0.5477225575051661},
'OP': {0: 0.7083333333333334, 1: 0.2954545454545454, 2: 0.4404761904761905},
'P': {0: 3, 1: 3, 2: 6},
'PLR': {0: 4.5, 1: 2.9999999999999987, 2: 1.4999999999999998},
'PLRI': {0: 'Poor', 1: 'Poor', 2: 'Poor'},
'POP': {0: 12, 1: 12, 2: 12},
'PPV': {0: 0.6, 1: 0.5, 2: 0.6},
'PRE': {0: 0.25, 1: 0.25, 2: 0.5},
'Q': {0: 'None', 1: 0.6, 2: 0.3333333333333333},
'RACC': {0: 0.10416666666666667,
1: 0.041666666666666664,
2: 0.20833333333333334},
'RACCU': {0: 0.1111111111111111,
1: 0.04340277777777778,
2: 0.21006944444444442},
'TN': {0: 7, 1: 8, 2: 4},
'TNR': {0: 0.7777777777777778, 1: 0.8888888888888888, 2: 0.6666666666666666},
'TON': {0: 7, 1: 10, 2: 7},
'TOP': {0: 5, 1: 2, 2: 5},
'TP': {0: 3, 1: 1, 2: 3},
'TPR': {0: 1.0, 1: 0.3333333333333333, 2: 0.5},
'Y': {0: 0.7777777777777777, 1: 0.2222222222222221, 2: 0.16666666666666652},
'dInd': {0: 0.2222222222222222, 1: 0.6758625033664689, 2: 0.6009252125773316},
'sInd': {0: 0.8428651597363228, 1: 0.5220930407198541, 2: 0.5750817072006014}}
• Notice : this attribute was named cm.statistic_result in earlier versions (0.2 and before)
In :
cm.overall_stat

Out:
{'95% CI': (0.30438856248221097, 0.8622781041844558),
'ACC Macro': 0.7222222222222223,
'AUNP': 0.6666666666666666,
'AUNU': 0.6944444444444443,
'Bennett S': 0.37500000000000006,
'CBA': 0.4777777777777778,
'Chi-Squared': 6.6,
'Chi-Squared DF': 4,
'Conditional Entropy': 0.9591479170272448,
'Cramer V': 0.5244044240850757,
'Cross Entropy': 1.5935164295556343,
'F1 Macro': 0.5651515151515151,
'F1 Micro': 0.5833333333333334,
'Gwet AC1': 0.3893129770992367,
'Hamming Loss': 0.41666666666666663,
'Joint Entropy': 2.4591479170272446,
'KL Divergence': 0.09351642955563438,
'Kappa': 0.35483870967741943,
'Kappa 95% CI': (-0.07707577422109269, 0.7867531935759315),
'Kappa No Prevalence': 0.16666666666666674,
'Kappa Standard Error': 0.2203645326012817,
'Kappa Unbiased': 0.34426229508196726,
'Lambda A': 0.16666666666666666,
'Lambda B': 0.42857142857142855,
'Mutual Information': 0.5242078379544426,
'NIR': 0.5,
'Overall ACC': 0.5833333333333334,
'Overall CEN': 0.4638112995385119,
'Overall J': (1.225, 0.4083333333333334),
'Overall MCC': 0.36666666666666664,
'Overall MCEN': 0.5189369467580801,
'Overall RACC': 0.3541666666666667,
'Overall RACCU': 0.3645833333333333,
'P-Value': 0.38720703125,
'PPV Macro': 0.5666666666666668,
'PPV Micro': 0.5833333333333334,
'Pearson C': 0.5956833971812705,
'Phi-Squared': 0.5499999999999999,
'RCI': 0.3494718919696284,
'RR': 4.0,
'Reference Entropy': 1.5,
'Response Entropy': 1.4833557549816874,
'SOA1(Landis & Koch)': 'Fair',
'SOA2(Fleiss)': 'Poor',
'SOA3(Altman)': 'Fair',
'SOA4(Cicchetti)': 'Poor',
'SOA5(Cramer)': 'Relatively Strong',
'SOA6(Matthews)': 'Weak',
'Scott PI': 0.34426229508196726,
'Standard Error': 0.14231876063832777,
'TPR Macro': 0.611111111111111,
'TPR Micro': 0.5833333333333334,
'Zero-one Loss': 5}
• Notice : new in version 0.3
• Notice : _ removed from overall statistics names in version 1.6
In :
cm.table

Out:
{0: {0: 3, 1: 0, 2: 0}, 1: {0: 0, 1: 1, 2: 2}, 2: {0: 2, 1: 1, 2: 3}}
In :
cm.matrix

Out:
{0: {0: 3, 1: 0, 2: 0}, 1: {0: 0, 1: 1, 2: 2}, 2: {0: 2, 1: 1, 2: 3}}
In :
cm.normalized_matrix

Out:
{0: {0: 1.0, 1: 0.0, 2: 0.0},
1: {0: 0.0, 1: 0.33333, 2: 0.66667},
2: {0: 0.33333, 1: 0.16667, 2: 0.5}}
In :
cm.normalized_table

Out:
{0: {0: 1.0, 1: 0.0, 2: 0.0},
1: {0: 0.0, 1: 0.33333, 2: 0.66667},
2: {0: 0.33333, 1: 0.16667, 2: 0.5}}
• Notice : matrix, normalized_matrix & normalized_table added in version 1.5 (changed from print style)
In :
import numpy

In :
y_actu = numpy.array([2, 0, 2, 2, 0, 1, 1, 2, 2, 0, 1, 2])
y_pred = numpy.array([0, 0, 2, 1, 0, 2, 1, 0, 2, 0, 2, 2])

In :
cm = ConfusionMatrix(y_actu, y_pred, digit=5)

In :
cm

Out:
pycm.ConfusionMatrix(classes: [0, 1, 2])
• Notice : numpy.array support in versions > 0.7

### Direct CM¶

In :
cm2 = ConfusionMatrix(matrix={0: {0: 3, 1: 0, 2: 0}, 1: {0: 0, 1: 1, 2: 2}, 2: {0: 2, 1: 1, 2: 3}}, digit=5)

In :
cm2

Out:
pycm.ConfusionMatrix(classes: [0, 1, 2])
In :
cm2.actual_vector

In :
cm2.predict_vector

In :
cm2.classes

Out:
[0, 1, 2]
In :
cm2.class_stat

Out:
{'ACC': {0: 0.8333333333333334, 1: 0.75, 2: 0.5833333333333334},
'AGF': {0: 0.9135962935560564, 1: 0.5399492471560389, 2: 0.5515973485146916},
'AGM': {0: 0.837285964012303, 1: 0.6919986974962765, 2: 0.6071224016819726},
'AM': {0: 2, 1: -1, 2: -1},
'AUC': {0: 0.8888888888888888, 1: 0.611111111111111, 2: 0.5833333333333333},
'AUCI': {0: 'Very Good', 1: 'Fair', 2: 'Poor'},
'AUPR': {0: 0.8, 1: 0.41666666666666663, 2: 0.55},
'BCD': {0: 0.08333333333333333,
1: 0.041666666666666664,
2: 0.041666666666666664},
'BM': {0: 0.7777777777777777, 1: 0.2222222222222221, 2: 0.16666666666666652},
'CEN': {0: 0.25, 1: 0.49657842846620864, 2: 0.6044162769630221},
'DOR': {0: 'None', 1: 3.999999999999998, 2: 1.9999999999999998},
'DP': {0: 'None', 1: 0.331933069996499, 2: 0.16596653499824957},
'DPI': {0: 'None', 1: 'Poor', 2: 'Poor'},
'ERR': {0: 0.16666666666666663, 1: 0.25, 2: 0.41666666666666663},
'F0.5': {0: 0.6521739130434783,
1: 0.45454545454545453,
2: 0.5769230769230769},
'F1': {0: 0.75, 1: 0.4, 2: 0.5454545454545454},
'F2': {0: 0.8823529411764706, 1: 0.35714285714285715, 2: 0.5172413793103449},
'FDR': {0: 0.4, 1: 0.5, 2: 0.4},
'FN': {0: 0, 1: 2, 2: 3},
'FNR': {0: 0.0, 1: 0.6666666666666667, 2: 0.5},
'FOR': {0: 0.0, 1: 0.19999999999999996, 2: 0.4285714285714286},
'FP': {0: 2, 1: 1, 2: 2},
'FPR': {0: 0.2222222222222222,
1: 0.11111111111111116,
2: 0.33333333333333337},
'G': {0: 0.7745966692414834, 1: 0.408248290463863, 2: 0.5477225575051661},
'GI': {0: 0.7777777777777777, 1: 0.2222222222222221, 2: 0.16666666666666652},
'GM': {0: 0.8819171036881969, 1: 0.5443310539518174, 2: 0.5773502691896257},
'IBA': {0: 0.9506172839506174, 1: 0.1316872427983539, 2: 0.2777777777777778},
'IS': {0: 1.263034405833794, 1: 1.0, 2: 0.2630344058337938},
'J': {0: 0.6, 1: 0.25, 2: 0.375},
'LS': {0: 2.4, 1: 2.0, 2: 1.2},
'MCC': {0: 0.6831300510639732, 1: 0.25819888974716115, 2: 0.1690308509457033},
'MCCI': {0: 'Moderate', 1: 'Negligible', 2: 'Negligible'},
'MCEN': {0: 0.2643856189774724, 1: 0.5, 2: 0.6875},
'MK': {0: 0.6000000000000001, 1: 0.30000000000000004, 2: 0.17142857142857126},
'N': {0: 9, 1: 9, 2: 6},
'NLR': {0: 0.0, 1: 0.7500000000000001, 2: 0.75},
'NLRI': {0: 'Good', 1: 'Negligible', 2: 'Negligible'},
'NPV': {0: 1.0, 1: 0.8, 2: 0.5714285714285714},
'OC': {0: 1.0, 1: 0.5, 2: 0.6},
'OOC': {0: 0.7745966692414834, 1: 0.4082482904638631, 2: 0.5477225575051661},
'OP': {0: 0.7083333333333334, 1: 0.2954545454545454, 2: 0.4404761904761905},
'P': {0: 3, 1: 3, 2: 6},
'PLR': {0: 4.5, 1: 2.9999999999999987, 2: 1.4999999999999998},
'PLRI': {0: 'Poor', 1: 'Poor', 2: 'Poor'},
'POP': {0: 12, 1: 12, 2: 12},
'PPV': {0: 0.6, 1: 0.5, 2: 0.6},
'PRE': {0: 0.25, 1: 0.25, 2: 0.5},
'Q': {0: 'None', 1: 0.6, 2: 0.3333333333333333},
'RACC': {0: 0.10416666666666667,
1: 0.041666666666666664,
2: 0.20833333333333334},
'RACCU': {0: 0.1111111111111111,
1: 0.04340277777777778,
2: 0.21006944444444442},
'TN': {0: 7, 1: 8, 2: 4},
'TNR': {0: 0.7777777777777778, 1: 0.8888888888888888, 2: 0.6666666666666666},
'TON': {0: 7, 1: 10, 2: 7},
'TOP': {0: 5, 1: 2, 2: 5},
'TP': {0: 3, 1: 1, 2: 3},
'TPR': {0: 1.0, 1: 0.3333333333333333, 2: 0.5},
'Y': {0: 0.7777777777777777, 1: 0.2222222222222221, 2: 0.16666666666666652},
'dInd': {0: 0.2222222222222222, 1: 0.6758625033664689, 2: 0.6009252125773316},
'sInd': {0: 0.8428651597363228, 1: 0.5220930407198541, 2: 0.5750817072006014}}
In :
cm2.overall_stat

Out:
{'95% CI': (0.30438856248221097, 0.8622781041844558),
'ACC Macro': 0.7222222222222223,
'AUNP': 0.6666666666666666,
'AUNU': 0.6944444444444443,
'Bennett S': 0.37500000000000006,
'CBA': 0.4777777777777778,
'Chi-Squared': 6.6,
'Chi-Squared DF': 4,
'Conditional Entropy': 0.9591479170272448,
'Cramer V': 0.5244044240850757,
'Cross Entropy': 1.5935164295556343,
'F1 Macro': 0.5651515151515151,
'F1 Micro': 0.5833333333333334,
'Gwet AC1': 0.3893129770992367,
'Hamming Loss': 0.41666666666666663,
'Joint Entropy': 2.4591479170272446,
'KL Divergence': 0.09351642955563438,
'Kappa': 0.35483870967741943,
'Kappa 95% CI': (-0.07707577422109269, 0.7867531935759315),
'Kappa No Prevalence': 0.16666666666666674,
'Kappa Standard Error': 0.2203645326012817,
'Kappa Unbiased': 0.34426229508196726,
'Lambda A': 0.16666666666666666,
'Lambda B': 0.42857142857142855,
'Mutual Information': 0.5242078379544426,
'NIR': 0.5,
'Overall ACC': 0.5833333333333334,
'Overall CEN': 0.4638112995385119,
'Overall J': (1.225, 0.4083333333333334),
'Overall MCC': 0.36666666666666664,
'Overall MCEN': 0.5189369467580801,
'Overall RACC': 0.3541666666666667,
'Overall RACCU': 0.3645833333333333,
'P-Value': 0.38720703125,
'PPV Macro': 0.5666666666666668,
'PPV Micro': 0.5833333333333334,
'Pearson C': 0.5956833971812705,
'Phi-Squared': 0.5499999999999999,
'RCI': 0.3494718919696284,
'RR': 4.0,
'Reference Entropy': 1.5,
'Response Entropy': 1.4833557549816874,
'SOA1(Landis & Koch)': 'Fair',
'SOA2(Fleiss)': 'Poor',
'SOA3(Altman)': 'Fair',
'SOA4(Cicchetti)': 'Poor',
'SOA5(Cramer)': 'Relatively Strong',
'SOA6(Matthews)': 'Weak',
'Scott PI': 0.34426229508196726,
'Standard Error': 0.14231876063832777,
'TPR Macro': 0.611111111111111,
'TPR Micro': 0.5833333333333334,
'Zero-one Loss': 5}
• Notice : new in version 0.8.1
• Notice : in direct matrix mode, actual_vector and predict_vector are None (hence the empty outputs above)

### Activation threshold¶

threshold is added in version 0.9 in order to handle real-valued predictions: a function (or lambda) applied to each element of predict_vector to map it to a class label.

• Notice : new in version 0.9
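As a rough sketch of what such a threshold callable does (the score vector below is made up for illustration), each real-valued prediction is mapped to a class label before the matrix is built:

```python
# Illustrative only: a threshold callable maps each real-valued score
# to a class label; the scores here are invented for this sketch.
scores = [0.87, 0.34, 0.9, 0.12, 0.79]
threshold = lambda x: 1 if x >= 0.5 else 0
labels = [threshold(s) for s in scores]
print(labels)  # [1, 0, 1, 0, 1]
```

In PyCM the callable itself is passed as the parameter, e.g. `ConfusionMatrix(y_actu, scores, threshold=threshold)`.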

### Load from file¶

file is added in version 0.9.5 in order to load a saved confusion matrix in .obj format, as generated by the save_obj method.

• Notice : new in version 0.9.5

### Sample weights¶

sample_weight is added in version 1.2 in order to assign a weight to each sample in the input vectors.

• Notice : new in version 1.2
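Conceptually, an integer weight of w counts a sample w times. The plain-Python sketch below (with invented vectors) shows the equivalent expansion:

```python
# Illustrative only: integer sample weights behave like repeating each
# sample w times in the input vectors (vectors invented for this sketch).
y_actu = [0, 1, 1]
y_pred = [0, 1, 0]
weights = [2, 3, 1]
expanded_actu = [y for y, w in zip(y_actu, weights) for _ in range(w)]
expanded_pred = [y for y, w in zip(y_pred, weights) for _ in range(w)]
print(expanded_actu)  # [0, 0, 1, 1, 1, 1]
print(expanded_pred)  # [0, 0, 1, 1, 1, 0]
```

In PyCM the weights are passed directly, e.g. `ConfusionMatrix(y_actu, y_pred, sample_weight=weights)`.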

### Transpose¶

transpose is added in version 1.2 in order to transpose the input matrix (Direct CM mode only).

In :
cm = ConfusionMatrix(matrix={0: {0: 3, 1: 0, 2: 0}, 1: {0: 0, 1: 1, 2: 2}, 2: {0: 2, 1: 1, 2: 3}}, digit=5, transpose=True)

In :
cm.print_matrix()

Predict 0       1       2
Actual
0       3       0       2

1       0       1       1

2       0       2       3


• Notice : new in version 1.2
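The effect on a direct-matrix input can be sketched in plain Python; this reproduces the rows shown by print_matrix() above:

```python
# Sketch of transposing a nested-dict confusion matrix, matching what
# transpose=True does to the direct-matrix input in this example.
matrix = {0: {0: 3, 1: 0, 2: 0}, 1: {0: 0, 1: 1, 2: 2}, 2: {0: 2, 1: 1, 2: 3}}
transposed = {i: {j: matrix[j][i] for j in matrix} for i in matrix}
print(transposed)
# {0: {0: 3, 1: 0, 2: 2}, 1: {0: 0, 1: 1, 2: 1}, 2: {0: 0, 1: 2, 2: 3}}
```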

### Relabel¶

The relabel method is added in version 1.5 in order to change ConfusionMatrix class names.

In :
cm.relabel(mapping={0:"L1",1:"L2",2:"L3"})

In :
cm

Out:
pycm.ConfusionMatrix(classes: ['L1', 'L2', 'L3'])
• Notice : new in version 1.5
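A plain-Python sketch of what the mapping does to a nested-dict matrix (PyCM also renames the classes in every derived statistic):

```python
# Sketch of relabel's mapping applied to this section's transposed matrix.
matrix = {0: {0: 3, 1: 0, 2: 2}, 1: {0: 0, 1: 1, 2: 1}, 2: {0: 0, 1: 2, 2: 3}}
mapping = {0: "L1", 1: "L2", 2: "L3"}
relabeled = {mapping[i]: {mapping[j]: v for j, v in row.items()}
             for i, row in matrix.items()}
print(sorted(relabeled))  # ['L1', 'L2', 'L3']
```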

### Online help¶

online_help function is added in version 1.1 in order to open each statistic's definition in a web browser.

>>> from pycm import online_help
>>> online_help("J")
>>> online_help("SOA1(Landis & Koch)")
>>> online_help(2)

• A list of all parameters is available by calling online_help() (without an argument)
• If the PyCM website is not available, set alt_link = True
In :
online_help()

Please choose one parameter :

Example : online_help("J") or online_help(2)

1-95% CI
2-ACC
3-ACC Macro
4-AGF
5-AGM
6-AM
7-AUC
8-AUCI
9-AUNP
10-AUNU
11-AUPR
12-BCD
13-BM
14-Bennett S
15-CBA
16-CEN
17-Chi-Squared
18-Chi-Squared DF
19-Conditional Entropy
20-Cramer V
21-Cross Entropy
22-DOR
23-DP
24-DPI
25-ERR
26-F0.5
27-F1
28-F1 Macro
29-F1 Micro
30-F2
31-FDR
32-FN
33-FNR
34-FOR
35-FP
36-FPR
37-G
38-GI
39-GM
40-Gwet AC1
41-Hamming Loss
42-IBA
43-IS
44-J
45-Joint Entropy
46-KL Divergence
47-Kappa
48-Kappa 95% CI
49-Kappa No Prevalence
50-Kappa Standard Error
51-Kappa Unbiased
52-LS
53-Lambda A
54-Lambda B
55-MCC
56-MCCI
57-MCEN
58-MK
59-Mutual Information
60-N
61-NIR
62-NLR
63-NLRI
64-NPV
65-OC
66-OOC
67-OP
68-Overall ACC
69-Overall CEN
70-Overall J
71-Overall MCC
72-Overall MCEN
73-Overall RACC
74-Overall RACCU
75-P
76-P-Value
77-PLR
78-PLRI
79-POP
80-PPV
81-PPV Macro
82-PPV Micro
83-PRE
84-Pearson C
85-Phi-Squared
86-Q
87-RACC
88-RACCU
89-RCI
90-RR
91-Reference Entropy
92-Response Entropy
93-SOA1(Landis & Koch)
94-SOA2(Fleiss)
95-SOA3(Altman)
96-SOA4(Cicchetti)
97-SOA5(Cramer)
98-SOA6(Matthews)
99-Scott PI
100-Standard Error
101-TN
102-TNR
103-TON
104-TOP
105-TP
106-TPR
107-TPR Macro
108-TPR Micro
109-Y
110-Zero-one Loss
111-dInd
112-sInd

• Notice : alt_link is new in version 2.4

### Parameter recommender¶

This option has been added in version 1.9 in order to recommend the most relevant parameters considering the characteristics of the input dataset. The characteristics according to which the parameters are suggested are balance/imbalance and binary/multi-class. All suggestions can be categorized into three main groups: imbalanced dataset, binary classification for a balanced dataset, and multi-class classification for a balanced dataset. The recommendation lists have been gathered according to the respective paper of each parameter and the capabilities claimed by that paper.

Fig2. Parameter Recommender Block Diagram

In :
cm.imbalance

Out:
False
In :
cm.binary

Out:
False
In :
cm.recommended_list

Out:
['ERR',
'TPR Micro',
'TPR Macro',
'F1 Macro',
'PPV Macro',
'ACC',
'Overall ACC',
'MCC',
'MCCI',
'Overall MCC',
'SOA6(Matthews)',
'BCD',
'Hamming Loss',
'Zero-one Loss']
• Notice : also available in HTML report
• Notice : The recommender system assumes that the input is the result of classification over the whole data rather than just a part of it. If the confusion matrix is the result of test data classification, the recommendation is not valid.

### Compare¶

In version 2.0, a method for comparing several confusion matrices is introduced. This option combines several overall and class-based benchmarks. Each benchmark evaluates the performance of the classification algorithm from good to poor and assigns it a numeric score: 1 for good performance and 0 for poor performance.

After that, two scores are calculated for each confusion matrix: overall and class-based. The overall score is the average of the scores of six overall benchmarks: Landis & Koch, Fleiss, Altman, Cicchetti, Cramer, and Matthews. In the same manner, the class-based score is the average of the scores of five class-based benchmarks: Positive Likelihood Ratio Interpretation, Negative Likelihood Ratio Interpretation, Discriminant Power Interpretation, AUC Value Interpretation, and Matthews Correlation Coefficient Interpretation. Note that if one of the benchmarks returns None for one of the classes, that benchmark is excluded from the averaging. If the user sets weights for the classes, the class-based benchmark scores are combined with a weighted average.

If the user sets the by_class boolean input to True, the best confusion matrix is the one with the maximum class-based score. Otherwise, if one confusion matrix obtains the maximum of both the overall and class-based scores, it is reported as the best confusion matrix; in any other case, the Compare object does not select a best confusion matrix.

Fig3. Compare Block Diagram

In :
cm2 = ConfusionMatrix(matrix={0:{0:2,1:50,2:6},1:{0:5,1:50,2:3},2:{0:1,1:7,2:50}})
cm3 = ConfusionMatrix(matrix={0:{0:50,1:2,2:6},1:{0:50,1:5,2:3},2:{0:1,1:55,2:2}})

In :
cp = Compare({"cm2":cm2,"cm3":cm3})

In :
print(cp)

Best : cm2

Rank  Name   Class-Score    Overall-Score
1     cm2    7.05           2.55
2     cm3    4.55           1.98333


In :
cp.scores

Out:
{'cm2': {'class': 7.05, 'overall': 2.55},
'cm3': {'class': 4.55, 'overall': 1.98333}}
In :
cp.sorted

Out:
['cm2', 'cm3']
In :
cp.best

Out:
pycm.ConfusionMatrix(classes: [0, 1, 2])
In :
cp.best_name

Out:
'cm2'
In :
cp2 = Compare({"cm2":cm2,"cm3":cm3},by_class=True,weight={0:5,1:1,2:1})

In :
print(cp2)

Best : cm3

Rank  Name   Class-Score     Overall-Score
1     cm3    13.55           1.98333
2     cm2    11.65           2.55



### Acceptable data types¶

#### ConfusionMatrix¶

1. actual_vector : python list or numpy array of any stringable objects
2. predict_vector : python list or numpy array of any stringable objects
3. matrix : dict
4. digit : int
5. threshold : FunctionType (function or lambda)
6. file : File object
7. sample_weight : python list or numpy array of numbers
8. transpose : bool
• run help(ConfusionMatrix) for more information

#### Compare¶

1. cm_dict : python dict of ConfusionMatrix object (str : ConfusionMatrix)
2. by_class : bool
3. weight : python dict of class weights (class_name : float)
4. digit : int
• run help(Compare) for more information

## Basic parameters¶

### TP (True positive)¶

A true positive test result is one that detects the condition when the condition is present (correctly identified).

In :
cm.TP

Out:
{'L1': 3, 'L2': 1, 'L3': 3}

### TN (True negative)¶

A true negative test result is one that does not detect the condition when the condition is absent (correctly rejected).

In :
cm.TN

Out:
{'L1': 7, 'L2': 8, 'L3': 4}

### FP (False positive)¶

A false positive test result is one that detects the condition when the condition is absent (incorrectly identified).

In :
cm.FP

Out:
{'L1': 0, 'L2': 2, 'L3': 3}

### FN (False negative)¶

A false negative test result is one that does not detect the condition when the condition is present (incorrectly rejected).

In :
cm.FN

Out:
{'L1': 2, 'L2': 1, 'L3': 2}

### P (Condition positive)¶

Number of positive samples. Also known as support (the number of occurrences of each class in y_true).

$$P=TP+FN$$

In :
cm.P

Out:
{'L1': 5, 'L2': 2, 'L3': 5}

### N (Condition negative)¶

Number of negative samples.

$$N=TN+FP$$

In :
cm.N

Out:
{'L1': 7, 'L2': 10, 'L3': 7}

### TOP (Test outcome positive)¶

Number of positive outcomes.

$$TOP=TP+FP$$

In :
cm.TOP

Out:
{'L1': 3, 'L2': 3, 'L3': 6}

### TON (Test outcome negative)¶

Number of negative outcomes.

$$TON=TN+FN$$

In :
cm.TON

Out:
{'L1': 9, 'L2': 9, 'L3': 6}

### POP (Population)¶

Total sample size.

$$POP=TP+TN+FN+FP$$

In :
cm.POP

Out:
{'L1': 12, 'L2': 12, 'L3': 12}
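The seven basic parameters above can be recomputed from this section's (transposed, relabeled) matrix in a few lines of plain Python, following the documented formulas:

```python
# Recomputing the basic parameters from the nested-dict matrix used in
# this section (rows = actual class, columns = predicted class).
matrix = {"L1": {"L1": 3, "L2": 0, "L3": 2},
          "L2": {"L1": 0, "L2": 1, "L3": 1},
          "L3": {"L1": 0, "L2": 2, "L3": 3}}
classes = list(matrix)
POP = sum(v for row in matrix.values() for v in row.values())   # population
TP = {c: matrix[c][c] for c in classes}                          # diagonal
FN = {c: sum(matrix[c].values()) - TP[c] for c in classes}       # row minus TP
FP = {c: sum(matrix[r][c] for r in classes) - TP[c] for c in classes}
TN = {c: POP - TP[c] - FN[c] - FP[c] for c in classes}
P = {c: TP[c] + FN[c] for c in classes}    # condition positive
N = {c: TN[c] + FP[c] for c in classes}    # condition negative
TOP = {c: TP[c] + FP[c] for c in classes}  # test outcome positive
TON = {c: TN[c] + FN[c] for c in classes}  # test outcome negative
print(TP, TN, FP, FN)
print(P, N, TOP, TON, POP)
```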

## Class statistics¶

### TPR (True positive rate)¶

Sensitivity (also called the true positive rate, the recall, or probability of detection in some fields) measures the proportion of positives that are correctly identified as such (e.g. the percentage of sick people who are correctly identified as having the condition).

$$TPR=\frac{TP}{P}=\frac{TP}{TP+FN}$$

In :
cm.TPR

Out:
{'L1': 0.6, 'L2': 0.5, 'L3': 0.6}

### TNR (True negative rate)¶

Specificity (also called the true negative rate) measures the proportion of negatives that are correctly identified as such (e.g. the percentage of healthy people who are correctly identified as not having the condition).

$$TNR=\frac{TN}{N}=\frac{TN}{TN+FP}$$

In :
cm.TNR

Out:
{'L1': 1.0, 'L2': 0.8, 'L3': 0.5714285714285714}

### PPV (Positive predictive value)¶

Positive predictive value (PPV) is the proportion of positives that correspond to the presence of the condition.

$$PPV=\frac{TP}{TP+FP}$$

In :
cm.PPV

Out:
{'L1': 1.0, 'L2': 0.3333333333333333, 'L3': 0.5}

### NPV (Negative predictive value)¶

Negative predictive value (NPV) is the proportion of negatives that correspond to the absence of the condition.

$$NPV=\frac{TN}{TN+FN}$$

In :
cm.NPV

Out:
{'L1': 0.7777777777777778, 'L2': 0.8888888888888888, 'L3': 0.6666666666666666}
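The four rates above follow directly from their formulas; a plain-Python check using the TP/TN/FP/FN counts reported earlier for this matrix:

```python
# Recomputing TPR, TNR, PPV and NPV from the documented counts.
TP = {"L1": 3, "L2": 1, "L3": 3}
TN = {"L1": 7, "L2": 8, "L3": 4}
FP = {"L1": 0, "L2": 2, "L3": 3}
FN = {"L1": 2, "L2": 1, "L3": 2}
TPR = {c: TP[c] / (TP[c] + FN[c]) for c in TP}  # sensitivity
TNR = {c: TN[c] / (TN[c] + FP[c]) for c in TP}  # specificity
PPV = {c: TP[c] / (TP[c] + FP[c]) for c in TP}  # precision
NPV = {c: TN[c] / (TN[c] + FN[c]) for c in TP}
print(TPR, TNR, PPV, NPV)
```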

### FNR (False negative rate)¶

The false negative rate is the proportion of positives which yield negative test outcomes with the test, i.e., the conditional probability of a negative test result given that the condition being looked for is present.

$$FNR=\frac{FN}{P}=\frac{FN}{FN+TP}=1-TPR$$

In :
cm.FNR

Out:
{'L1': 0.4, 'L2': 0.5, 'L3': 0.4}

### FPR (False positive rate)¶

The false positive rate is the proportion of all negatives that still yield positive test outcomes, i.e., the conditional probability of a positive test result given an event that was not present.

The false positive rate is equal to the significance level. The specificity of the test is equal to 1 minus the false positive rate.

$$FPR=\frac{FP}{N}=\frac{FP}{FP+TN}=1-TNR$$

In :
cm.FPR

Out:
{'L1': 0.0, 'L2': 0.19999999999999996, 'L3': 0.4285714285714286}

### FDR (False discovery rate)¶

The false discovery rate (FDR) is a method of conceptualizing the rate of type I errors in null hypothesis testing when conducting multiple comparisons. FDR-controlling procedures are designed to control the expected proportion of "discoveries" (rejected null hypotheses) that are false (incorrect rejections).

$$FDR=\frac{FP}{FP+TP}=1-PPV$$

In :
cm.FDR

Out:
{'L1': 0.0, 'L2': 0.6666666666666667, 'L3': 0.5}

### FOR (False omission rate)¶

False omission rate (FOR) is a statistical concept used in multiple hypothesis testing, and it is the complement of the negative predictive value. It measures the proportion of negative test outcomes that are false negatives.

$$FOR=\frac{FN}{FN+TN}=1-NPV$$

In :
cm.FOR

Out:
{'L1': 0.2222222222222222,
'L2': 0.11111111111111116,
'L3': 0.33333333333333337}

### ACC (Accuracy)¶

The accuracy is the fraction of correct predictions among all predictions made.

$$ACC=\frac{TP+TN}{P+N}=\frac{TP+TN}{TP+TN+FP+FN}$$

In :
cm.ACC

Out:
{'L1': 0.8333333333333334, 'L2': 0.75, 'L3': 0.5833333333333334}

### ERR (Error rate)¶

The error rate is the fraction of incorrect predictions among all predictions made.

$$ERR=\frac{FP+FN}{P+N}=\frac{FP+FN}{TP+TN+FP+FN}=1-ACC$$

In :
cm.ERR

Out:
{'L1': 0.16666666666666663, 'L2': 0.25, 'L3': 0.41666666666666663}
• Notice : new in version 0.4

### FBeta-Score¶

In statistical analysis of classification, the F1 score (also F-score or F-measure) is a measure of a test's accuracy. It considers both the precision p and the recall r of the test to compute the score. The F1 score is the harmonic mean of precision and recall, reaching its best value at 1 (perfect precision and recall) and worst at 0. The general F-beta score weights recall β times as much as precision.

$$F_{\beta}=(1+\beta^2)\times \frac{PPV\times TPR}{(\beta^2 \times PPV)+TPR}=\frac{(1+\beta^2) \times TP}{(1+\beta^2)\times TP+FP+\beta^2 \times FN}$$

In :
cm.F1

Out:
{'L1': 0.75, 'L2': 0.4, 'L3': 0.5454545454545454}
In :
cm.F05

Out:
{'L1': 0.8823529411764706, 'L2': 0.35714285714285715, 'L3': 0.5172413793103449}
In :
cm.F2

Out:
{'L1': 0.6521739130434783, 'L2': 0.45454545454545453, 'L3': 0.5769230769230769}
In :
cm.F_beta(beta=4)

Out:
{'L1': 0.6144578313253012, 'L2': 0.4857142857142857, 'L3': 0.5930232558139535}
• Notice : new in version 0.4
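The F-scores above can be checked against the right-hand form of the formula using the TP/FP/FN counts reported earlier for this matrix:

```python
# Recomputing F-beta from the counts, per the formula
# F_beta = (1 + b^2) * TP / ((1 + b^2) * TP + FP + b^2 * FN).
TP = {"L1": 3, "L2": 1, "L3": 3}
FP = {"L1": 0, "L2": 2, "L3": 3}
FN = {"L1": 2, "L2": 1, "L3": 2}

def f_beta(c, beta):
    b2 = beta ** 2
    return (1 + b2) * TP[c] / ((1 + b2) * TP[c] + FP[c] + b2 * FN[c])

print(f_beta("L1", 1))  # 0.75
print(f_beta("L1", 4))
```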

### MCC (Matthews correlation coefficient)¶

The Matthews correlation coefficient is used in machine learning as a measure of the quality of binary (two-class) classifications, introduced by biochemist Brian W. Matthews in 1975. It takes into account true and false positives and negatives and is generally regarded as a balanced measure which can be used even if the classes are of very different sizes. The MCC is in essence a correlation coefficient between the observed and predicted binary classifications; it returns a value between −1 and +1. A coefficient of +1 represents a perfect prediction, 0 no better than random prediction, and −1 indicates total disagreement between prediction and observation.

$$MCC=\frac{TP \times TN-FP \times FN}{\sqrt{(TP+FP)\times (TP+FN)\times (TN+FP)\times (TN+FN)}}$$

In :
cm.MCC

Out:
{'L1': 0.6831300510639732, 'L2': 0.25819888974716115, 'L3': 0.1690308509457033}
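A plain-Python check of the formula for class L1, using the counts reported earlier (TP=3, TN=7, FP=0, FN=2):

```python
# MCC for class L1, per the formula above.
from math import sqrt

TP, TN, FP, FN = 3, 7, 0, 2
mcc = (TP * TN - FP * FN) / sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
print(mcc)
```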

### BM (Bookmaker informedness)¶

The informedness of a prediction method as captured by a contingency matrix is defined as the probability that the prediction method will make a correct decision as opposed to guessing, and is calculated using the bookmaker algorithm.

$$BM=TPR+TNR-1$$

In :
cm.BM

Out:
{'L1': 0.6000000000000001,
'L2': 0.30000000000000004,
'L3': 0.17142857142857126}

### MK (Markedness)¶

In statistics and psychology, the social science concept of markedness is quantified as a measure of how much one variable is marked as a predictor or possible cause of another, and is also known as Δp (deltaP) in simple two-choice cases.

$$MK=PPV+NPV-1$$

In :
cm.MK

Out:
{'L1': 0.7777777777777777, 'L2': 0.2222222222222221, 'L3': 0.16666666666666652}

### PLR (Positive likelihood ratio)¶

Likelihood ratios are used for assessing the value of performing a diagnostic test. They use the sensitivity and specificity of the test to determine whether a test result usefully changes the probability that a condition (such as a disease state) exists. The first description of the use of likelihood ratios for decision rules was made at a symposium on information theory in 1954.

$$LR_+=PLR=\frac{TPR}{FPR}$$

In :
cm.PLR

Out:
{'L1': 'None', 'L2': 2.5000000000000004, 'L3': 1.4}
• Notice : LR+ renamed to PLR in version 1.5

### NLR (Negative likelihood ratio)¶

Likelihood ratios are used for assessing the value of performing a diagnostic test. They use the sensitivity and specificity of the test to determine whether a test result usefully changes the probability that a condition (such as a disease state) exists. The first description of the use of likelihood ratios for decision rules was made at a symposium on information theory in 1954.

$$LR_-=NLR=\frac{FNR}{TNR}$$

In :
cm.NLR

Out:
{'L1': 0.4, 'L2': 0.625, 'L3': 0.7000000000000001}
• Notice : LR- renamed to NLR in version 1.5

### DOR (Diagnostic odds ratio)¶

The diagnostic odds ratio is a measure of the effectiveness of a diagnostic test. It is defined as the ratio of the odds of the test being positive if the subject has a disease relative to the odds of the test being positive if the subject does not have the disease.

$$DOR=\frac{LR_+}{LR_-}$$

In :
cm.DOR

Out:
{'L1': 'None', 'L2': 4.000000000000001, 'L3': 1.9999999999999998}
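The three ratios above follow from TPR and TNR; a plain-Python sketch with a guard for the undefined case (an FPR of 0 makes PLR, and hence DOR, undefined, which PyCM reports as 'None'):

```python
# PLR = TPR/FPR, NLR = FNR/TNR, DOR = PLR/NLR, with a division guard.
def safe_div(a, b):
    return a / b if b != 0 else None

rates = {"L1": (0.6, 1.0), "L2": (0.5, 0.8), "L3": (0.6, 4 / 7)}  # (TPR, TNR)
PLR = {c: safe_div(tpr, 1 - tnr) for c, (tpr, tnr) in rates.items()}
NLR = {c: safe_div(1 - tpr, tnr) for c, (tpr, tnr) in rates.items()}
DOR = {c: safe_div(PLR[c], NLR[c]) if PLR[c] is not None else None for c in rates}
print(PLR, NLR, DOR)
```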

### PRE (Prevalence)¶

Prevalence is a statistical concept referring to the number of cases of a disease that are present in a particular population at a given time (Reference Likelihood).

$$Prevalence=\frac{P}{POP}$$

In :
cm.PRE

Out:
{'L1': 0.4166666666666667, 'L2': 0.16666666666666666, 'L3': 0.4166666666666667}

### G (G-measure)¶

Geometric mean of precision and sensitivity, also known as the Fowlkes–Mallows index.

$$G=\sqrt{PPV\times TPR}$$

In :
cm.G

Out:
{'L1': 0.7745966692414834, 'L2': 0.408248290463863, 'L3': 0.5477225575051661}

### RACC (Random accuracy)¶

The expected accuracy from a strategy of randomly guessing categories according to reference and response distributions.

$$RACC=\frac{TOP \times P}{POP^2}$$

In :
cm.RACC

Out:
{'L1': 0.10416666666666667,
'L2': 0.041666666666666664,
'L3': 0.20833333333333334}
• Notice : new in version 0.3

### RACCU (Random accuracy unbiased)¶

The expected accuracy from a strategy of randomly guessing categories according to the average of the reference and response distributions.

$$RACCU=(\frac{TOP+P}{2 \times POP})^2$$

In :
cm.RACCU

Out:
{'L1': 0.1111111111111111,
'L2': 0.04340277777777778,
'L3': 0.21006944444444442}
• Notice : new in version 0.8.1
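Both random-accuracy variants follow directly from TOP, P and POP; a plain-Python check against the values reported above:

```python
# RACC = TOP*P/POP^2 and RACCU = ((TOP+P)/(2*POP))^2, per the formulas above.
TOP = {"L1": 3, "L2": 3, "L3": 6}
P = {"L1": 5, "L2": 2, "L3": 5}
POP = 12
RACC = {c: TOP[c] * P[c] / POP ** 2 for c in TOP}
RACCU = {c: ((TOP[c] + P[c]) / (2 * POP)) ** 2 for c in TOP}
print(RACC, RACCU)
```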

### J (Jaccard index)¶

The Jaccard index, also known as Intersection over Union and the Jaccard similarity coefficient (originally coined coefficient de communauté by Paul Jaccard), is a statistic used for comparing the similarity and diversity of sample sets.

$$J=\frac{TP}{TOP+P-TP}$$

In :
cm.J

Out:
{'L1': 0.6, 'L2': 0.25, 'L3': 0.375}
• Notice : new in version 0.9
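A plain-Python check of the formula against the values reported above:

```python
# Jaccard index per class: J = TP / (TOP + P - TP).
TP = {"L1": 3, "L2": 1, "L3": 3}
TOP = {"L1": 3, "L2": 3, "L3": 6}
P = {"L1": 5, "L2": 2, "L3": 5}
J = {c: TP[c] / (TOP[c] + P[c] - TP[c]) for c in TP}
print(J)  # {'L1': 0.6, 'L2': 0.25, 'L3': 0.375}
```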

### IS (Information score)¶

The amount of information needed to correctly classify an example into class C, whose prior probability is p(C), is defined as -log2(p(C)).

$$IS=-log_2(\frac{TP+FN}{POP})+log_2(\frac{TP}{TP+FP})$$

In :
cm.IS

Out:
{'L1': 1.2630344058337937, 'L2': 0.9999999999999998, 'L3': 0.26303440583379367}
• Notice : new in version 1.3

### CEN (Confusion entropy)¶

CEN is based upon the concept of entropy for evaluating classifier performances. By exploiting the misclassification information of confusion matrices, the measure evaluates the confusion level of the class distribution of misclassified samples. Both theoretical analysis and statistical results show that the proposed measure is more discriminating than accuracy and RCI while remaining relatively consistent with the two measures. Moreover, it is more capable of measuring how well the samples of different classes have been separated from each other. Hence the proposed measure is more precise than the two measures and can substitute for them to evaluate classifiers in classification applications.

$$P_{i,j}^{j}=\frac{Matrix(i,j)}{\sum_{k=1}^{|C|}\Big(Matrix(j,k)+Matrix(k,j)\Big)}$$

$$P_{i,j}^{i}=\frac{Matrix(i,j)}{\sum_{k=1}^{|C|}\Big(Matrix(i,k)+Matrix(k,i)\Big)}$$

$$CEN_j=-\sum_{k=1,k\neq j}^{|C|}\Bigg(P_{j,k}^jlog_{2(|C|-1)}\Big(P_{j,k}^j\Big)+P_{k,j}^jlog_{2(|C|-1)}\Big(P_{k,j}^j\Big)\Bigg)$$

In :
cm.CEN

Out:
{'L1': 0.25, 'L2': 0.49657842846620864, 'L3': 0.6044162769630221}
• Notice : new in version 1.3

### MCEN (Modified confusion entropy)¶

A modified version of CEN.

$$P_{i,j}^{j}=\frac{Matrix(i,j)}{\sum_{k=1}^{|C|}\Big(Matrix(j,k)+Matrix(k,j)\Big)-Matrix(j,j)}$$

$$P_{i,j}^{i}=\frac{Matrix(i,j)}{\sum_{k=1}^{|C|}\Big(Matrix(i,k)+Matrix(k,i)\Big)-Matrix(i,i)}$$

$$MCEN_j=-\sum_{k=1,k\neq j}^{|C|}\Bigg(P_{j,k}^jlog_{2(|C|-1)}\Big(P_{j,k}^j\Big)+P_{k,j}^jlog_{2(|C|-1)}\Big(P_{k,j}^j\Big)\Bigg)$$

In :
cm.MCEN

Out:
{'L1': 0.2643856189774724, 'L2': 0.5, 'L3': 0.6875}
• Notice : new in version 1.3

### AUC (Area under the ROC curve)¶

The area under the curve (often referred to as simply the AUC) is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative'). Here it is computed as the arithmetic mean of the sensitivity and specificity values of each class.

$$AUC=\frac{TNR+TPR}{2}$$

In :
cm.AUC

Out:
{'L1': 0.8, 'L2': 0.65, 'L3': 0.5857142857142856}
• Notice : new in version 1.4
• Notice : this is an approximate calculation of AUC
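
A minimal sketch (not PyCM's implementation) of this AUC approximation, using the L1/L2/L3 matrix printed in the Print section (rows = actual, columns = predicted):

```python
# Per-class AUC approximated as the mean of sensitivity and specificity.
matrix = {"L1": {"L1": 3, "L2": 0, "L3": 2},
          "L2": {"L1": 0, "L2": 1, "L3": 1},
          "L3": {"L1": 0, "L2": 2, "L3": 3}}

def auc(matrix):
    classes = list(matrix)
    pop = sum(sum(row.values()) for row in matrix.values())
    result = {}
    for c in classes:
        TP = matrix[c][c]
        P = sum(matrix[c].values())                # actual positives
        TOP = sum(matrix[r][c] for r in classes)   # predicted positives
        TN = pop - P - TOP + TP
        TPR = TP / P                               # sensitivity
        TNR = TN / (pop - P)                       # specificity
        result[c] = (TNR + TPR) / 2
    return result

print(auc(matrix))  # matches cm.AUC for this matrix
```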

### dInd (Distance index)¶

Euclidean distance of a ROC point from the top left corner of the ROC space, which can take values between 0 (perfect classification) and sqrt(2).

$$dInd=\sqrt{(1-TNR)^2+(1-TPR)^2}$$

In :
cm.dInd

Out:
{'L1': 0.4, 'L2': 0.5385164807134504, 'L3': 0.5862367008195198}
• Notice : new in version 1.4

### sInd (Similarity index)¶

sInd lies between 0 (no correct classifications) and 1 (perfect classification).

$$sInd = 1 - \sqrt{\frac{(1-TNR)^2+(1-TPR)^2}{2}}$$

In :
cm.sInd

Out:
{'L1': 0.717157287525381, 'L2': 0.6192113447068046, 'L3': 0.5854680534700882}
• Notice : new in version 1.4

### DP (Discriminant power)¶

Discriminant power (DP) is a measure that summarizes sensitivity and specificity. The DP has been used mainly in feature selection over imbalanced data.

$$X=\frac{TPR}{1-TPR}$$

$$Y=\frac{TNR}{1-TNR}$$

$$DP=\frac{\sqrt{3}}{\pi}(log_{10}X+log_{10}Y)$$

In :
cm.DP

Out:
{'L1': 'None', 'L2': 0.33193306999649924, 'L3': 0.1659665349982495}
• Notice : new in version 1.5

### Y (Youden index)¶

Youden’s index evaluates the algorithm’s ability to avoid failure; it is derived from sensitivity and specificity and denotes a linear correspondence with balanced accuracy. As Youden’s index is a linear transformation of the mean of sensitivity and specificity, its values are difficult to interpret; we retain that a higher value of Y indicates a better ability to avoid failure. Youden’s index has conventionally been used to evaluate diagnostic tests and to improve the efficiency of telemedical prevention.

$$\gamma=BM=TPR+TNR-1$$

In :
cm.Y

Out:
{'L1': 0.6000000000000001,
'L2': 0.30000000000000004,
'L3': 0.17142857142857126}
• Notice : new in version 1.5

### PLRI (Positive likelihood ratio interpretation)¶

| PLR | Model contribution |
| :---: | :---: |
| < 1 | Negligible |
| 1 - 5 | Poor |
| 5 - 10 | Fair |
| > 10 | Good |

In :
cm.PLRI

Out:
{'L1': 'None', 'L2': 'Poor', 'L3': 'Poor'}
• Notice : new in version 1.5

### NLRI (Negative likelihood ratio interpretation)¶

| NLR | Model contribution |
| :---: | :---: |
| 0.5 - 1 | Negligible |
| 0.2 - 0.5 | Poor |
| 0.1 - 0.2 | Fair |
| < 0.1 | Good |

In :
cm.NLRI

Out:
{'L1': 'Poor', 'L2': 'Negligible', 'L3': 'Negligible'}
• Notice : new in version 2.2

### DPI (Discriminant power interpretation)¶

| DP | Model contribution |
| :---: | :---: |
| < 1 | Poor |
| 1 - 2 | Limited |
| 2 - 3 | Fair |
| > 3 | Good |

In :
cm.DPI

Out:
{'L1': 'None', 'L2': 'Poor', 'L3': 'Poor'}
• Notice : new in version 1.5

### AUCI (AUC value interpretation)¶

| AUC | Model performance |
| :---: | :---: |
| 0.5 - 0.6 | Poor |
| 0.6 - 0.7 | Fair |
| 0.7 - 0.8 | Good |
| 0.8 - 0.9 | Very Good |
| 0.9 - 1.0 | Excellent |

In :
cm.AUCI

Out:
{'L1': 'Very Good', 'L2': 'Fair', 'L3': 'Poor'}
• Notice : new in version 1.6

### MCCI (Matthews correlation coefficient interpretation)¶

MCC is a method of calculating the Pearson product-moment correlation coefficient from a confusion matrix (not to be confused with Pearson's C); it therefore has the same interpretation.

| MCC | Interpretation |
| :---: | :---: |
| < 0.3 | Negligible |
| 0.3 - 0.5 | Weak |
| 0.5 - 0.7 | Moderate |
| 0.7 - 0.9 | Strong |
| 0.9 - 1.0 | Very Strong |

In :
cm.MCCI

Out:
{'L1': 'Moderate', 'L2': 'Negligible', 'L3': 'Negligible'}
• Notice : new in version 2.2
• Notice : only positive values are considered

### GI (Gini index)¶

A chance-standardized variant of the AUC is given by the Gini coefficient, taking values between 0 (no difference between the score distributions of the two classes) and 1 (complete separation between the two distributions). The Gini coefficient is a widely used metric in imbalanced data learning.

$$GI=2\times AUC-1$$

In :
cm.GI

Out:
{'L1': 0.6000000000000001,
'L2': 0.30000000000000004,
'L3': 0.17142857142857126}
• Notice : new in version 1.7

### LS (Lift score)¶

In the context of classification, lift compares model predictions to randomly generated predictions. Lift is often used in marketing research combined with gain and lift charts as a visual aid.

$$LS=\frac{PPV}{PRE}$$

In :
cm.LS

Out:
{'L1': 2.4, 'L2': 2.0, 'L3': 1.2}
• Notice : new in version 1.8

### AM (Automatic/Manual)¶

Difference between automatic and manual classification, i.e., the difference between the number of positive outcomes and the number of positive samples.

$$AM=TOP-P=(TP+FP)-(TP+FN)$$

In :
cm.AM

Out:
{'L1': -2, 'L2': 1, 'L3': 1}
• Notice : new in version 1.9

### BCD (Bray-Curtis dissimilarity)¶

In ecology and biology, the Bray–Curtis dissimilarity, named after J. Roger Bray and John T. Curtis, is a statistic used to quantify the compositional dissimilarity between two different sites, based on counts at each site.

$$BCD=\frac{|AM|}{\sum_{i=1}^{|C|}\Big(TOP_i+P_i\Big)}$$

In :
cm.BCD

Out:
{'L1': 0.08333333333333333,
'L2': 0.041666666666666664,
'L3': 0.041666666666666664}
• Notice : new in version 1.9

### OP (Optimized precision)¶

Optimized precision is a hybrid threshold metric that has been proposed as a discriminator for building an optimized heuristic classifier. It combines the accuracy, sensitivity and specificity metrics; sensitivity and specificity are used to stabilize and optimize the accuracy when dealing with the imbalanced classes of two-class problems.

$$OP = ACC - \frac{|TNR-TPR|}{|TNR+TPR|}$$

In :
cm.OP

Out:
{'L1': 0.5833333333333334, 'L2': 0.5192307692307692, 'L3': 0.5589430894308943}
• Notice : new in version 2.0

### IBA (Index of balanced accuracy)¶

The method combines an unbiased index of overall accuracy with a measure of how dominant the class with the highest individual accuracy rate is.

$$IBA_{\alpha}=(1+\alpha \times(TPR-TNR))\times TNR \times TPR$$

In :
cm.IBA

Out:
{'L1': 0.36, 'L2': 0.27999999999999997, 'L3': 0.35265306122448975}
In :
cm.IBA_alpha(0.5)

Out:
{'L1': 0.48, 'L2': 0.34, 'L3': 0.3477551020408163}
In :
cm.IBA_alpha(0.1)

Out:
{'L1': 0.576, 'L2': 0.388, 'L3': 0.34383673469387754}
• Notice : new in version 2.0
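
A minimal sketch (not PyCM's implementation) of the IBA formula above, with `cm.IBA` corresponding to alpha = 1, using the L1/L2/L3 matrix printed in the Print section:

```python
# IBA_alpha = (1 + alpha * (TPR - TNR)) * TNR * TPR per class.
matrix = {"L1": {"L1": 3, "L2": 0, "L3": 2},
          "L2": {"L1": 0, "L2": 1, "L3": 1},
          "L3": {"L1": 0, "L2": 2, "L3": 3}}

def iba_alpha(matrix, alpha=1.0):
    classes = list(matrix)
    pop = sum(sum(row.values()) for row in matrix.values())
    result = {}
    for c in classes:
        TP = matrix[c][c]
        P = sum(matrix[c].values())                # actual positives
        TOP = sum(matrix[r][c] for r in classes)   # predicted positives
        TN = pop - P - TOP + TP
        TPR, TNR = TP / P, TN / (pop - P)
        result[c] = (1 + alpha * (TPR - TNR)) * TNR * TPR
    return result

print(iba_alpha(matrix, 1.0))   # matches cm.IBA
print(iba_alpha(matrix, 0.5))   # matches cm.IBA_alpha(0.5)
```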

### GM (G-mean)¶

Geometric mean of specificity and sensitivity.

$$GM=\sqrt{TPR \times TNR}$$

In :
cm.GM

Out:
{'L1': 0.7745966692414834, 'L2': 0.6324555320336759, 'L3': 0.5855400437691198}
• Notice : new in version 2.0

### Q (Yule's Q)¶

In statistics, Yule's Q, also known as the coefficient of colligation, is a measure of association between two binary variables.

$$OR = \frac{TP\times TN}{FP\times FN}$$

$$Q = \frac{OR-1}{OR+1}$$

In :
cm.Q

Out:
{'L1': 'None', 'L2': 0.6, 'L3': 0.3333333333333333}
• Notice : new in version 2.1

### AGM (Adjusted G-mean)¶

An adjusted version of the geometric mean of specificity and sensitivity.

$$N_n=\frac{N}{POP}$$

$$AGM=\frac{GM+TNR\times N_n}{1+N_n};TPR>0$$

$$AGM=0;TPR=0$$

In :
cm.AGM

Out:
{'L1': 0.8576400016262, 'L2': 0.708612108382005, 'L3': 0.5803410802752335}
• Notice : new in version 2.1

### AGF (Adjusted F-score)¶

The F-measures use only three of the four elements of the confusion matrix, so two classifiers with different TNR values may have the same F-score. The AGF metric is therefore introduced to use all elements of the confusion matrix and to give more weight to samples that are correctly classified in the minority class.

$$AGF=\sqrt{F_2 \times InvF_{0.5}}$$

$$F_{2}=5\times \frac{PPV\times TPR}{(4 \times PPV)+TPR}$$

$$InvF_{0.5}=(1+0.5^2)\times \frac{NPV\times TNR}{(0.5^2 \times NPV)+TNR}$$

In :
cm.AGF

Out:
{'L1': 0.7285871475307653, 'L2': 0.6286946134619315, 'L3': 0.610088876086563}
• Notice : new in version 2.3

### OC (Overlap coefficient)¶

The overlap coefficient, or Szymkiewicz–Simpson coefficient, is a similarity measure that measures the overlap between two finite sets. It is defined as the size of the intersection divided by the size of the smaller of the two sets.

$$OC=\frac{TP}{min(TOP,P)}=max(PPV,TPR)$$

In :
cm.OC

Out:
{'L1': 1.0, 'L2': 0.5, 'L3': 0.6}
• Notice : new in version 2.3

### OOC (Otsuka-Ochiai coefficient)¶

In biology, there is a similarity index known as the Otsuka-Ochiai coefficient, named after Yanosuke Otsuka and Akira Ochiai, and also known as the Ochiai-Barkman or Ochiai coefficient. If sets are represented as bit vectors, the Otsuka-Ochiai coefficient is the same as the cosine similarity.

$$OOC=\frac{TP}{\sqrt{TOP\times P}}$$

In :
cm.OOC

Out:
{'L1': 0.7745966692414834, 'L2': 0.4082482904638631, 'L3': 0.5477225575051661}
• Notice : new in version 2.3

### TI (Tversky index)¶

The Tversky index, named after Amos Tversky, is an asymmetric similarity measure on sets that compares a variant to a prototype. The Tversky index can be seen as a generalization of Dice's coefficient and Tanimoto coefficient.

$$TI(\alpha,\beta)=\frac{TP}{TP+\alpha FN+\beta FP}$$

In :
cm.TI(2,3)

Out:
{'L1': 0.42857142857142855, 'L2': 0.1111111111111111, 'L3': 0.1875}
• Notice : new in version 2.4
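
A minimal sketch (not PyCM's implementation) of the formula above, where alpha weights false negatives and beta weights false positives, using the L1/L2/L3 matrix printed in the Print section:

```python
# Tversky index TI(alpha, beta) = TP / (TP + alpha*FN + beta*FP) per class.
matrix = {"L1": {"L1": 3, "L2": 0, "L3": 2},
          "L2": {"L1": 0, "L2": 1, "L3": 1},
          "L3": {"L1": 0, "L2": 2, "L3": 3}}

def tversky(matrix, alpha, beta):
    classes = list(matrix)
    result = {}
    for c in classes:
        TP = matrix[c][c]
        FN = sum(matrix[c].values()) - TP               # row total minus TP
        FP = sum(matrix[r][c] for r in classes) - TP    # column total minus TP
        result[c] = TP / (TP + alpha * FN + beta * FP)
    return result

print(tversky(matrix, 2, 3))  # matches cm.TI(2,3)
```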

### AUPR (Area under the PR curve)¶

A PR curve plots precision against recall. The precision-recall area under curve (AUPR) is simply the area under the PR curve; the higher it is, the better the model.

$$AUPR=\frac{TPR+PPV}{2}$$

In :
cm.AUPR

Out:
{'L1': 0.8, 'L2': 0.41666666666666663, 'L3': 0.55}
• Notice : new in version 2.4
• Notice : this is an approximate calculation of AUPR

## Overall statistics¶

### Kappa¶

Kappa is a statistic which measures inter-rater agreement for qualitative (categorical) items. It is generally thought to be a more robust measure than simple percent agreement calculation, as kappa takes into account the possibility of the agreement occurring by chance.

$$Kappa=\frac{ACC_{Overall}-RACC_{Overall}}{1-RACC_{Overall}}$$

In :
cm.Kappa

Out:
0.35483870967741943
• Notice : new in version 0.3
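
Kappa can be computed by hand from the overall accuracy and overall random accuracy defined later in this section. A minimal sketch (not PyCM's code), using the L1/L2/L3 matrix printed in the Print section:

```python
# Kappa = (ACC_Overall - RACC_Overall) / (1 - RACC_Overall).
matrix = {"L1": {"L1": 3, "L2": 0, "L3": 2},
          "L2": {"L1": 0, "L2": 1, "L3": 1},
          "L3": {"L1": 0, "L2": 2, "L3": 3}}
classes = list(matrix)
pop = sum(sum(row.values()) for row in matrix.values())
acc = sum(matrix[c][c] for c in classes) / pop            # ACC_Overall
racc = sum(sum(matrix[c].values()) *                      # sum of P_i * TOP_i / POP^2
           sum(matrix[r][c] for r in classes)
           for c in classes) / pop ** 2                   # RACC_Overall
kappa = (acc - racc) / (1 - racc)
print(round(kappa, 5))  # 0.35484
```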

### Kappa unbiased¶

The unbiased kappa value is defined in terms of total accuracy and a slightly different computation of expected likelihood that averages the reference and response probabilities.

$$Kappa_{Unbiased}=\frac{ACC_{Overall}-RACCU_{Overall}}{1-RACCU_{Overall}}$$

In :
cm.KappaUnbiased

Out:
0.34426229508196726
• Notice : new in version 0.8.1

### Kappa no prevalence¶

The kappa statistic adjusted for prevalence.

$$Kappa_{NoPrevalence}=2 \times ACC_{Overall}-1$$

In :
cm.KappaNoPrevalence

Out:
0.16666666666666674
• Notice : new in version 0.8.1

### Kappa standard error¶

The standard error of the Kappa coefficient was obtained by Fleiss (1969).

$$SE_{Kappa}=\sqrt{\frac{ACC_{Overall}\times (1-ACC_{Overall})}{POP\times(1-RACC_{Overall})^2}}$$

In :
cm.Kappa_SE

Out:
0.2203645326012817
• Notice : new in version 0.7

### Kappa 95% CI¶

Kappa 95% Confidence Interval.

$$CI_{Kappa}=Kappa \pm 1.96\times SE_{Kappa}$$

In :
cm.Kappa_CI

Out:
(-0.07707577422109269, 0.7867531935759315)
• Notice : new in version 0.7

### Chi-squared¶

Pearson's chi-squared test is a statistical test applied to sets of categorical data to evaluate how likely it is that any observed difference between the sets arose by chance. It is suitable for unpaired data from large samples.

$$\chi^2=\sum_{i=1}^n\sum_{j=1}^n\frac{\Big(Matrix(i,j)-E(i,j)\Big)^2}{E(i,j)}$$

$$E(i,j)=\frac{TOP_j\times P_i}{POP}$$

In :
cm.Chi_Squared

Out:
6.6000000000000005
• Notice : new in version 0.7

### Chi-squared DF¶

Number of degrees of freedom of this confusion matrix for the chi-squared statistic.

$$DF=(|C|-1)^2$$

In :
cm.DF

Out:
4
• Notice : new in version 0.7

### Phi-squared¶

In statistics, the phi coefficient (or mean square contingency coefficient) is a measure of association for two binary variables. Introduced by Karl Pearson, this measure is similar to the Pearson correlation coefficient in its interpretation. In fact, a Pearson correlation coefficient estimated for two binary variables will return the phi coefficient.

$$\phi^2=\frac{\chi^2}{POP}$$

In :
cm.Phi_Squared

Out:
0.55
• Notice : new in version 0.7

### Cramer's V¶

In statistics, Cramér's V (sometimes referred to as Cramér's phi) is a measure of association between two nominal variables, giving a value between 0 and +1 (inclusive). It is based on Pearson's chi-squared statistic and was published by Harald Cramér in 1946.

$$V=\sqrt{\frac{\phi^2}{|C|-1}}$$

In :
cm.V

Out:
0.5244044240850758
• Notice : new in version 0.7

### Standard error¶

The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation.

$$SE_{ACC}=\sqrt{\frac{ACC\times (1-ACC)}{POP}}$$

In :
cm.SE

Out:
0.14231876063832777
• Notice : new in version 0.7

### 95% CI¶

In statistics, a confidence interval (CI) is a type of interval estimate (of a population parameter) that is computed from the observed data. The confidence level is the frequency (i.e., the proportion) of possible confidence intervals that contain the true value of their corresponding parameter. In other words, if confidence intervals are constructed using a given confidence level in an infinite number of independent experiments, the proportion of those intervals that contain the true value of the parameter will match the confidence level.

$$CI=ACC \pm 1.96\times SE_{ACC}$$

In :
cm.CI

Out:
(0.30438856248221097, 0.8622781041844558)
• Notice : new in version 0.7
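
The standard error and 95% CI formulas above can be checked by hand. A minimal sketch, assuming the L1/L2/L3 example (7 correct predictions out of POP = 12):

```python
import math

# SE_ACC = sqrt(ACC * (1 - ACC) / POP); CI = ACC +/- 1.96 * SE_ACC.
pop = 12
acc = 7 / 12
se = math.sqrt(acc * (1 - acc) / pop)
ci = (acc - 1.96 * se, acc + 1.96 * se)
print(round(se, 5), (round(ci[0], 5), round(ci[1], 5)))  # 0.14232 (0.30439, 0.86228)
```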

### Bennett's S¶

Bennett, Alpert & Goldstein’s S is a statistical measure of inter-rater agreement. It was created by Bennett et al. in 1954. Bennett et al. suggested that adjusting inter-rater reliability to accommodate the percentage of rater agreement that might be expected by chance yields a better measure than simple agreement between raters.

$$p_c=\frac{1}{|C|}$$

$$S=\frac{ACC_{Overall}-p_c}{1-p_c}$$

In :
cm.S

Out:
0.37500000000000006
• Notice : new in version 0.5

### Scott's Pi¶

Scott's pi (named after William A. Scott) is a statistic for measuring inter-rater reliability for nominal data in communication studies. Textual entities are annotated with categories by different annotators, and various measures are used to assess the extent of agreement between the annotators, one of which is Scott's pi. Since automatically annotating text is a popular problem in natural language processing, and the goal is to get the computer program being developed to agree with the humans in the annotations it creates, assessing the extent to which humans agree with each other is important for establishing a reasonable upper limit on computer performance.

$$p_c=\sum_{i=1}^{|C|}(\frac{TOP_i + P_i}{2\times POP})^2$$

$$\pi=\frac{ACC_{Overall}-p_c}{1-p_c}$$

In :
cm.PI

Out:
0.34426229508196726
• Notice : new in version 0.5

### Gwet's AC1¶

AC1 was originally introduced by Gwet in 2001 (Gwet, 2001). The interpretation of AC1 is similar to that of generalized kappa (Fleiss, 1971), which is used to assess inter-rater reliability when there are multiple raters. Gwet (2002) demonstrated that AC1 can overcome the limitation that kappa is sensitive to trait prevalence and raters' classification probabilities (i.e., marginal probabilities), and thus provides a more robust measure of inter-rater reliability.

$$\pi_i=\frac{TOP_i + P_i}{2\times POP}$$

$$p_c=\frac{1}{|C|-1}\sum_{i=1}^{|C|}\Big(\pi_i\times (1-\pi_i)\Big)$$

$$AC_1=\frac{ACC_{Overall}-p_c}{1-p_c}$$

In :
cm.AC1

Out:
0.3893129770992367
• Notice : new in version 0.5

### Reference entropy¶

The entropy of the decision problem itself, as defined by the counts for the reference. The entropy of a distribution is the average negative log probability of outcomes.

$$Likelihood_{Reference}=\frac{P_i}{POP}$$

$$Entropy_{Reference}=-\sum_{i=1}^{|C|}Likelihood_{Reference}(i)\times\log_{2}{Likelihood_{Reference}(i)}$$

$$0\times\log_{2}{0}\equiv0$$

In :
cm.ReferenceEntropy

Out:
1.4833557549816874
• Notice : new in version 0.8.1
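
A minimal sketch of the reference-entropy formula, assuming the actual class counts P = (5, 2, 5) of the L1/L2/L3 example (POP = 12):

```python
import math

# Entropy = -sum(p * log2(p)) over the reference class probabilities.
P = [5, 2, 5]
pop = sum(P)
entropy = -sum((p / pop) * math.log2(p / pop) for p in P if p > 0)
print(round(entropy, 5))  # 1.48336
```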

### Response entropy¶

The entropy of the response distribution. The entropy of a distribution is the average negative log probability of outcomes.

$$Likelihood_{Response}=\frac{TOP_i}{POP}$$

$$Entropy_{Response}=-\sum_{i=1}^{|C|}Likelihood_{Response}(i)\times\log_{2}{Likelihood_{Response}(i)}$$

$$0\times\log_{2}{0}\equiv0$$

In :
cm.ResponseEntropy

Out:
1.5
• Notice : new in version 0.8.1

### Cross entropy¶

The cross-entropy of the response distribution against the reference distribution. The cross-entropy is defined by the negative log probabilities of the response distribution weighted by the reference distribution.

$$Likelihood_{Reference}=\frac{P_i}{POP}$$

$$Likelihood_{Response}=\frac{TOP_i}{POP}$$

$$Entropy_{Cross}=-\sum_{i=1}^{|C|}Likelihood_{Reference}(i)\times\log_{2}{Likelihood_{Response}(i)}$$

$$0\times\log_{2}{0}\equiv0$$

In :
cm.CrossEntropy

Out:
1.5833333333333335
• Notice : new in version 0.8.1

### Joint entropy¶

The entropy of the joint reference and response distribution as defined by the underlying matrix.

$$P^{'}(i,j)=\frac{Matrix(i,j)}{POP}$$

$$Entropy_{Joint}=-\sum_{i=1}^{|C|}\sum_{j=1}^{|C|}P^{'}(i,j)\times\log_{2}{P^{'}(i,j)}$$

$$0\times\log_{2}{0}\equiv0$$

In :
cm.JointEntropy

Out:
2.4591479170272446
• Notice : new in version 0.8.1

### Conditional entropy¶

The entropy of the distribution of categories in the response given that the reference category was as specified.

$$P^{'}(j|i)=\frac{Matrix(j,i)}{P_i}$$

$$Entropy_{Conditional}=\sum_{i=1}^{|C|}\Bigg(Likelihood_{Reference}(i)\times\Big(-\sum_{j=1}^{|C|}P^{'}(j|i)\times\log_{2}{P^{'}(j|i)}\Big)\Bigg)$$

$$0\times\log_{2}{0}\equiv0$$

In :
cm.ConditionalEntropy

Out:
0.9757921620455572
• Notice : new in version 0.8.1

### Kullback-Leibler divergence¶

In mathematical statistics, the Kullback–Leibler divergence (also called relative entropy) is a measure of how one probability distribution diverges from a second, expected probability distribution.

$$Likelihood_{Response}=\frac{TOP_i}{POP}$$

$$Likelihood_{Reference}=\frac{P_i}{POP}$$

$$Divergence=\sum_{i=1}^{|C|}Likelihood_{Reference}(i)\times\log_{2}{\frac{Likelihood_{Reference}(i)}{Likelihood_{Response}(i)}}$$

In :
cm.KL

Out:
0.09997757835164581
• Notice : new in version 0.8.1
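
A minimal sketch of this divergence, with reference counts P = (5, 2, 5) and response counts TOP = (3, 3, 6) from the L1/L2/L3 example:

```python
import math

# KL divergence of the reference distribution from the response distribution.
P = [5, 2, 5]        # actual (reference) counts
TOP = [3, 3, 6]      # predicted (response) counts
pop = sum(P)
kl = sum((p / pop) * math.log2((p / pop) / (t / pop)) for p, t in zip(P, TOP))
print(round(kl, 5))  # 0.09998
```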

### Mutual information¶

Mutual information is defined as the Kullback–Leibler divergence between the product of the individual distributions and the joint distribution. Mutual information is symmetric; we could also subtract the conditional entropy of the reference given the response from the reference entropy to get the same result.

$$P^{'}(i,j)=\frac{Matrix(i,j)}{POP}$$

$$Likelihood_{Reference}=\frac{P_i}{POP}$$

$$Likelihood_{Response}=\frac{TOP_i}{POP}$$

$$MI=\sum_{i=1}^{|C|}\sum_{j=1}^{|C|}P^{'}(i,j)\times\log_{2}\Big(\frac{P^{'}(i,j)}{Likelihood_{Reference}(i)\times Likelihood_{Response}(j)}\Big)$$

$$MI=Entropy_{Response}-Entropy_{Conditional}$$

In :
cm.MutualInformation

Out:
0.5242078379544428
• Notice : new in version 0.8.1

### Goodman & Kruskal's lambda A¶

In probability theory and statistics, Goodman & Kruskal's lambda is a measure of proportional reduction in error in cross tabulation analysis.

$$\lambda_A=\frac{\sum_{j=1}^{|C|}Max\Big(Matrix(-,j)\Big)-Max(P)}{POP-Max(P)}$$

In :
cm.LambdaA

Out:
0.42857142857142855
• Notice : new in version 0.8.1

### Goodman & Kruskal's lambda B¶

In probability theory and statistics, Goodman & Kruskal's lambda is a measure of proportional reduction in error in cross tabulation analysis.

$$\lambda_B=\frac{\sum_{i=1}^{|C|}Max\Big(Matrix(i,-)\Big)-Max(TOP)}{POP-Max(TOP)}$$

In :
cm.LambdaB

Out:
0.16666666666666666
• Notice : new in version 0.8.1
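
Both lambdas can be sketched from column and row maxima of the L1/L2/L3 matrix printed in the Print section (rows = actual, columns = predicted); this is illustrative, not PyCM's code:

```python
# Lambda A uses column maxima against the actual totals P;
# lambda B uses row maxima against the predicted totals TOP.
matrix = [[3, 0, 2],
          [0, 1, 1],
          [0, 2, 3]]
pop = sum(map(sum, matrix))
P = [sum(row) for row in matrix]              # actual totals
TOP = [sum(col) for col in zip(*matrix)]      # predicted totals
lambda_a = (sum(max(col) for col in zip(*matrix)) - max(P)) / (pop - max(P))
lambda_b = (sum(max(row) for row in matrix) - max(TOP)) / (pop - max(TOP))
print(round(lambda_a, 5), round(lambda_b, 5))  # 0.42857 0.16667
```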

### SOA1 (Landis & Koch's benchmark)¶

| Kappa | Strength of Agreement |
| :---: | :---: |
| < 0 | Poor |
| 0 – 0.2 | Slight |
| 0.2 – 0.4 | Fair |
| 0.4 – 0.6 | Moderate |
| 0.6 – 0.8 | Substantial |
| 0.8 – 1.0 | Almost perfect |

In :
cm.SOA1

Out:
'Fair'
• Notice : new in version 0.3

### SOA2 (Fleiss' benchmark)¶

| Kappa | Strength of Agreement |
| :---: | :---: |
| < 0.40 | Poor |
| 0.40 - 0.75 | Intermediate to Good |
| > 0.75 | Excellent |

In :
cm.SOA2

Out:
'Poor'
• Notice : new in version 0.4

### SOA3 (Altman's benchmark)¶

| Kappa | Strength of Agreement |
| :---: | :---: |
| < 0.2 | Poor |
| 0.2 – 0.4 | Fair |
| 0.4 – 0.6 | Moderate |
| 0.6 – 0.8 | Good |
| 0.8 – 1.0 | Very Good |

In :
cm.SOA3

Out:
'Fair'
• Notice : new in version 0.4

### SOA4 (Cicchetti's benchmark)¶

| Kappa | Strength of Agreement |
| :---: | :---: |
| < 0.40 | Poor |
| 0.40 – 0.59 | Fair |
| 0.59 – 0.74 | Good |
| 0.74 – 1.00 | Excellent |

In :
cm.SOA4

Out:
'Poor'
• Notice : new in version 0.7

### SOA5 (Cramer's benchmark)¶

| Cramer's V | Strength of Association |
| :---: | :---: |
| < 0.1 | Negligible |
| 0.1 – 0.2 | Weak |
| 0.2 – 0.4 | Moderate |
| 0.4 – 0.6 | Relatively Strong |
| 0.6 – 0.8 | Strong |
| 0.8 – 1.0 | Very Strong |

In :
cm.SOA5

Out:
'Relatively Strong'
• Notice : new in version 2.2

### SOA6 (Matthews's benchmark)¶

MCC is a method of calculating the Pearson product-moment correlation coefficient from a confusion matrix (not to be confused with Pearson's C); it therefore has the same interpretation.

| Overall MCC | Strength of Association |
| :---: | :---: |
| < 0.3 | Negligible |
| 0.3 - 0.5 | Weak |
| 0.5 - 0.7 | Moderate |
| 0.7 - 0.9 | Strong |
| 0.9 - 1.0 | Very Strong |

In :
cm.SOA6

Out:
'Weak'
• Notice : new in version 2.2
• Notice : only positive values are considered

### Overall_ACC¶

$$ACC_{Overall}=\frac{\sum_{i=1}^{|C|}TP_i}{POP}$$

In :
cm.Overall_ACC

Out:
0.5833333333333334
• Notice : new in version 0.4

### Overall_RACC¶

$$RACC_{Overall}=\sum_{i=1}^{|C|}RACC_i$$

In :
cm.Overall_RACC

Out:
0.3541666666666667
• Notice : new in version 0.4

### Overall_RACCU¶

$$RACCU_{Overall}=\sum_{i=1}^{|C|}RACCU_i$$

In :
cm.Overall_RACCU

Out:
0.3645833333333333
• Notice : new in version 0.8.1

### PPV_Micro¶

$$PPV_{Micro}=\frac{\sum_{i=1}^{|C|}TP_i}{\sum_{i=1}^{|C|}(TP_i+FP_i)}$$

In :
cm.PPV_Micro

Out:
0.5833333333333334
• Notice : new in version 0.4

### TPR_Micro¶

$$TPR_{Micro}=\frac{\sum_{i=1}^{|C|}TP_i}{\sum_{i=1}^{|C|}(TP_i+FN_i)}$$

In :
cm.TPR_Micro

Out:
0.5833333333333334
• Notice : new in version 0.4

### F1_Micro¶

$$F_{1_{Micro}}=2\times\frac{TPR_{Micro}\times PPV_{Micro}}{TPR_{Micro}+PPV_{Micro}}$$

In :
cm.F1_Micro

Out:
0.5833333333333334
• Notice : new in version 2.2

### PPV_Macro¶

$$PPV_{Macro}=\frac{1}{|C|}\sum_{i=1}^{|C|}\frac{TP_i}{TP_i+FP_i}$$

In :
cm.PPV_Macro

Out:
0.611111111111111
• Notice : new in version 0.4
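
Micro averaging pools the raw counts across classes before computing the score, while macro averaging computes the per-class score first and then averages. A minimal sketch contrasting the two for precision, using the L1/L2/L3 matrix printed in the Print section:

```python
# Micro PPV pools TP and TP+FP over all classes; macro PPV averages
# the per-class precisions.
matrix = [[3, 0, 2],
          [0, 1, 1],
          [0, 2, 3]]
classes = range(len(matrix))
TP = [matrix[i][i] for i in classes]
TOP = [sum(row[i] for row in matrix) for i in classes]   # TP + FP per class
ppv_micro = sum(TP) / sum(TOP)
ppv_macro = sum(tp / top for tp, top in zip(TP, TOP)) / len(matrix)
print(round(ppv_micro, 5), round(ppv_macro, 5))  # 0.58333 0.61111
```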

### TPR_Macro¶

$$TPR_{Macro}=\frac{1}{|C|}\sum_{i=1}^{|C|}\frac{TP_i}{TP_i+FN_i}$$

In :
cm.TPR_Macro

Out:
0.5666666666666668
• Notice : new in version 0.4

### F1_Macro¶

$$F_{1_{Macro}}=\frac{2}{|C|}\sum_{i=1}^{|C|}\frac{TPR_i\times PPV_i}{TPR_i+PPV_i}$$

In :
cm.F1_Macro

Out:
0.5651515151515151
• Notice : new in version 2.2

### ACC_Macro¶

$$ACC_{Macro}=\frac{1}{|C|}\sum_{i=1}^{|C|}{ACC_i}$$

In :
cm.ACC_Macro

Out:
0.7222222222222223
• Notice : new in version 2.2

### Overall_J¶

$$J_{Mean}=\frac{1}{|C|}\sum_{i=1}^{|C|}J_i$$

$$J_{Sum}=\sum_{i=1}^{|C|}J_i$$

$$J_{Overall}=(J_{Sum},J_{Mean})$$

In :
cm.Overall_J

Out:
(1.225, 0.4083333333333334)
• Notice : new in version 0.9

### Hamming loss¶

The average Hamming loss or Hamming distance between two sets of samples.

$$L_{Hamming}=\frac{1}{POP}\sum_{i=1}^{POP}1(y_i \neq \widehat{y}_i)$$

In :
cm.HammingLoss

Out:
0.41666666666666663
• Notice : new in version 1.0
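
A minimal sketch of the Hamming loss as the fraction of mismatched labels, using the vectors from the first example in this document:

```python
# Hamming loss = (number of positions where the vectors disagree) / POP.
y_actu = [2, 0, 2, 2, 0, 1, 1, 2, 2, 0, 1, 2]
y_pred = [0, 0, 2, 1, 0, 2, 1, 0, 2, 0, 2, 2]
loss = sum(a != p for a, p in zip(y_actu, y_pred)) / len(y_actu)
print(round(loss, 5))  # 0.41667 (5 mismatches out of 12)
```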

### Zero-one loss¶

Zero-one loss is a common loss function used with classification learning. It assigns 0 to loss for a correct classification and 1 for an incorrect classification.

$$L_{0-1}=\sum_{i=1}^{POP}1(y_i \neq \widehat{y}_i)$$

In :
cm.ZeroOneLoss

Out:
5
• Notice : new in version 1.1

### NIR (No information rate)¶

Largest class percentage in the data.

$$NIR=\frac{1}{POP}Max(P)$$

In :
cm.NIR

Out:
0.4166666666666667
• Notice : new in version 1.2

### P-Value¶

In statistical hypothesis testing, the p-value or probability value is, for a given statistical model, the probability that, when the null hypothesis is true, the statistical summary (such as the absolute value of the sample mean difference between two compared groups) would be greater than or equal to the actual observed results.
Here, a one-sided binomial test is used to see if the accuracy is better than the no information rate.

$$x=\sum_{i=1}^{|C|}TP_{i}$$

$$p=NIR$$

$$n=POP$$

$$P-Value_{(ACC > NIR)}=1-\sum_{i=0}^{x-1}\left(\begin{array}{c}n\\ i\end{array}\right)p^{i}(1-p)^{n-i}$$

In :
cm.PValue

Out:
0.18926430237560654
• Notice : new in version 1.2
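
A minimal sketch of this one-sided binomial test, i.e. the probability of observing at least x = 7 correct predictions out of n = POP = 12 when guessing with success probability p = NIR = 5/12:

```python
from math import comb

# P(X >= x) = 1 - P(X <= x - 1) for X ~ Binomial(n, p).
n, x, p = 12, 7, 5 / 12
p_value = 1 - sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(x))
print(round(p_value, 5))
```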

### Overall_CEN¶

$$P_j=\frac{\sum_{k=1}^{|C|}\Big(Matrix(j,k)+Matrix(k,j)\Big)}{2\sum_{k,l=1}^{|C|}Matrix(k,l)}$$

$$CEN_{Overall}=\sum_{j=1}^{|C|}P_jCEN_j$$

In :
cm.Overall_CEN

Out:
0.4638112995385119
• Notice : new in version 1.3

### Overall_MCEN¶

$$\alpha=\begin{cases}1 & |C| > 2\\0 & |C| = 2\end{cases}$$

$$P_j=\frac{\sum_{k=1}^{|C|}\Big(Matrix(j,k)+Matrix(k,j)\Big)-Matrix(j,j)}{2\sum_{k,l=1}^{|C|}Matrix(k,l)-\alpha \sum_{k=1}^{|C|}Matrix(k,k)}$$

$$MCEN_{Overall}=\sum_{j=1}^{|C|}P_jMCEN_j$$

In :
cm.Overall_MCEN

Out:
0.5189369467580801
• Notice : new in version 1.3

### Overall_MCC¶

$$MCC_{Overall}=\frac{cov(X,Y)}{\sqrt{cov(X,X)\times cov(Y,Y)}}$$

$$cov(X,Y)=\sum_{i,j,k=1}^{|C|}\Big(Matrix(i,i)Matrix(k,j)-Matrix(j,i)Matrix(i,k)\Big)$$

$$cov(X,X) = \sum_{i=1}^{|C|}\Bigg[\Big(\sum_{j=1}^{|C|}Matrix(j,i)\Big)\Big(\sum_{k,l=1,k\neq i}^{|C|}Matrix(l,k)\Big)\Bigg]$$

$$cov(Y,Y) = \sum_{i=1}^{|C|}\Bigg[\Big(\sum_{j=1}^{|C|}Matrix(i,j)\Big)\Big(\sum_{k,l=1,k\neq i}^{|C|}Matrix(k,l)\Big)\Bigg]$$

In :
cm.Overall_MCC

Out:
0.36666666666666664
• Notice : new in version 1.4

### RR (Global performance index)¶

$$RR=\frac{1}{|C|}\sum_{i,j=1}^{|C|}Matrix(i,j)$$

In :
cm.RR

Out:
4.0
• Notice : new in version 1.4

### CBA (Class balance accuracy)¶

As an evaluation tool, CBA creates an overall assessment of model predictive power by scrutinizing measures simultaneously across each class in a conservative manner, guaranteeing that a model’s ability to recall observations from each class, and its ability to do so efficiently, won’t fall below the bound.

$$CBA=\frac{\sum_{i=1}^{|C|}\frac{Matrix(i,i)}{Max(TOP_i,P_i)}}{|C|}$$

In :
cm.CBA

Out:
0.4777777777777778
• Notice : new in version 1.4

### AUNU¶

When dealing with multiclass problems, a global measure of classification performance based on the ROC approach (AUNU) has been proposed as the average of the single-class measures.

$$AUNU=\frac{\sum_{i=1}^{|C|}AUC_i}{|C|}$$

In :
cm.AUNU

Out:
0.6785714285714285
• Notice : new in version 1.4

### AUNP¶

Another option (AUNP) averages the AUC_i values with weights proportional to the number of samples experimentally belonging to each class, that is, the a priori class distribution.

$$AUNP=\sum_{i=1}^{|C|}\frac{P_i}{POP}AUC_i$$

In :
cm.AUNP

Out:
0.6857142857142857
• Notice : new in version 1.4
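
A minimal sketch contrasting AUNU (plain average of the per-class AUCs) with AUNP (average weighted by the a priori class distribution P_i / POP), using the L1/L2/L3 matrix printed in the Print section:

```python
# Per-class AUC = (TPR + TNR) / 2, then unweighted (AUNU) and
# prior-weighted (AUNP) averages.
matrix = [[3, 0, 2],
          [0, 1, 1],
          [0, 2, 3]]
pop = sum(map(sum, matrix))
P = [sum(row) for row in matrix]              # actual totals per class
auc = []
for i in range(len(matrix)):
    TP = matrix[i][i]
    TOP = sum(row[i] for row in matrix)       # predicted total per class
    TN = pop - P[i] - TOP + TP
    auc.append((TP / P[i] + TN / (pop - P[i])) / 2)
aunu = sum(auc) / len(auc)
aunp = sum(p / pop * a for p, a in zip(P, auc))
print(round(aunu, 5), round(aunp, 5))  # 0.67857 0.68571
```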

### RCI (Relative classifier information)¶

Performance of different classifiers on the same domain can be measured by comparing relative classifier information, while classifier information (mutual information) can be used for comparison across different decision problems.

$$H_d=-\sum_{i=1}^{|C|}\Big(\frac{\sum_{l=1}^{|C|}Matrix(i,l)}{\sum_{h,k=1}^{|C|}Matrix(h,k)}log_2\frac{\sum_{l=1}^{|C|}Matrix(i,l)}{\sum_{h,k=1}^{|C|}Matrix(h,k)}\Big)=Entropy_{Reference}$$

$$H_o=\sum_{j=1}^{|C|}\Big(\frac{\sum_{k=1}^{|C|}Matrix(k,j)}{\sum_{h,l=0}^{|C|}Matrix(h,l)}H_{oj}\Big)=Entropy_{Conditional}$$

$$H_{oj}=-\sum_{i=1}^{|C|}\Big(\frac{Matrix(i,j)}{\sum_{k=1}^{|C|}Matrix(k,j)}log_2\frac{Matrix(i,j)}{\sum_{k=1}^{|C|}Matrix(k,j)}\Big)$$

$$RCI=\frac{H_d-H_o}{H_d}=\frac{MI}{Entropy_{Reference}}$$

In :
cm.RCI

Out:
0.3533932006492363
• Notice : new in version 1.5

### Pearson's C¶

The contingency coefficient is a coefficient of association that tells whether two variables or data sets are independent of or dependent on each other. It is also known as Pearson’s coefficient (not to be confused with Pearson’s coefficient of skewness).

$$C=\sqrt{\frac{\chi^2}{\chi^2+POP}}$$

In :
cm.C

Out:
0.5956833971812706
• Notice : new in version 2.0

## Print¶

### Full¶

In :
print(cm)

Predict  L1       L2       L3
Actual
L1       3        0        2

L2       0        1        1

L3       0        2        3

Overall Statistics :

95% CI                                                            (0.30439,0.86228)
ACC Macro                                                         0.72222
AUNP                                                              0.68571
AUNU                                                              0.67857
Bennett S                                                         0.375
CBA                                                               0.47778
Chi-Squared                                                       6.6
Chi-Squared DF                                                    4
Conditional Entropy                                               0.97579
Cramer V                                                          0.5244
Cross Entropy                                                     1.58333
F1 Macro                                                          0.56515
F1 Micro                                                          0.58333
Gwet AC1                                                          0.38931
Hamming Loss                                                      0.41667
Joint Entropy                                                     2.45915
KL Divergence                                                     0.09998
Kappa                                                             0.35484
Kappa 95% CI                                                      (-0.07708,0.78675)
Kappa No Prevalence                                               0.16667
Kappa Standard Error                                              0.22036
Kappa Unbiased                                                    0.34426
Lambda A                                                          0.42857
Lambda B                                                          0.16667
Mutual Information                                                0.52421
NIR                                                               0.41667
Overall ACC                                                       0.58333
Overall CEN                                                       0.46381
Overall J                                                         (1.225,0.40833)
Overall MCC                                                       0.36667
Overall MCEN                                                      0.51894
Overall RACC                                                      0.35417
Overall RACCU                                                     0.36458
P-Value                                                           0.18926
PPV Macro                                                         0.61111
PPV Micro                                                         0.58333
Pearson C                                                         0.59568
Phi-Squared                                                       0.55
RCI                                                               0.35339
RR                                                                4.0
Reference Entropy                                                 1.48336
Response Entropy                                                  1.5
SOA1(Landis & Koch)                                               Fair
SOA2(Fleiss)                                                      Poor
SOA3(Altman)                                                      Fair
SOA4(Cicchetti)                                                   Poor
SOA5(Cramer)                                                      Relatively Strong
SOA6(Matthews)                                                    Weak
Scott PI                                                          0.34426
Standard Error                                                    0.14232
TPR Macro                                                         0.56667
TPR Micro                                                         0.58333
Zero-one Loss                                                     5

Class Statistics :

Classes                                                           L1            L2            L3
ACC(Accuracy)                                                     0.83333       0.75          0.58333
AGF(Adjusted F-score)                                             0.72859       0.62869       0.61009
AGM(Adjusted geometric mean)                                      0.85764       0.70861       0.58034
AM(Difference between automatic and manual classification)        -2            1             1
AUC(Area under the ROC curve)                                     0.8           0.65          0.58571
AUCI(AUC value interpretation)                                    Very Good     Fair          Poor
AUPR(Area under the PR curve)                                     0.8           0.41667       0.55
BCD(Bray-Curtis dissimilarity)                                    0.08333       0.04167       0.04167
BM(Informedness or bookmaker informedness)                        0.6           0.3           0.17143
CEN(Confusion entropy)                                            0.25          0.49658       0.60442
DOR(Diagnostic odds ratio)                                        None          4.0           2.0
DP(Discriminant power)                                            None          0.33193       0.16597
DPI(Discriminant power interpretation)                            None          Poor          Poor
ERR(Error rate)                                                   0.16667       0.25          0.41667
F0.5(F0.5 score)                                                  0.88235       0.35714       0.51724
F1(F1 score - harmonic mean of precision and sensitivity)         0.75          0.4           0.54545
F2(F2 score)                                                      0.65217       0.45455       0.57692
FDR(False discovery rate)                                         0.0           0.66667       0.5
FN(False negative/miss/type 2 error)                              2             1             2
FNR(Miss rate or false negative rate)                             0.4           0.5           0.4
FOR(False omission rate)                                          0.22222       0.11111       0.33333
FP(False positive/type 1 error/false alarm)                       0             2             3
FPR(Fall-out or false positive rate)                              0.0           0.2           0.42857
G(G-measure geometric mean of precision and sensitivity)          0.7746        0.40825       0.54772
GI(Gini index)                                                    0.6           0.3           0.17143
GM(G-mean geometric mean of specificity and sensitivity)          0.7746        0.63246       0.58554
IBA(Index of balanced accuracy)                                   0.36          0.28          0.35265
IS(Information score)                                             1.26303       1.0           0.26303
J(Jaccard index)                                                  0.6           0.25          0.375
LS(Lift score)                                                    2.4           2.0           1.2
MCC(Matthews correlation coefficient)                             0.68313       0.2582        0.16903
MCCI(Matthews correlation coefficient interpretation)             Moderate      Negligible    Negligible
MCEN(Modified confusion entropy)                                  0.26439       0.5           0.6875
MK(Markedness)                                                    0.77778       0.22222       0.16667
N(Condition negative)                                             7             10            7
NLR(Negative likelihood ratio)                                    0.4           0.625         0.7
NLRI(Negative likelihood ratio interpretation)                    Poor          Negligible    Negligible
NPV(Negative predictive value)                                    0.77778       0.88889       0.66667
OC(Overlap coefficient)                                           1.0           0.5           0.6
OOC(Otsuka-Ochiai coefficient)                                    0.7746        0.40825       0.54772
OP(Optimized precision)                                           0.58333       0.51923       0.55894
P(Condition positive or support)                                  5             2             5
PLR(Positive likelihood ratio)                                    None          2.5           1.4
PLRI(Positive likelihood ratio interpretation)                    None          Poor          Poor
POP(Population)                                                   12            12            12
PPV(Precision or positive predictive value)                       1.0           0.33333       0.5
PRE(Prevalence)                                                   0.41667       0.16667       0.41667
Q(Yule Q - coefficient of colligation)                            None          0.6           0.33333
RACC(Random accuracy)                                             0.10417       0.04167       0.20833
RACCU(Random accuracy unbiased)                                   0.11111       0.0434        0.21007
TN(True negative/correct rejection)                               7             8             4
TNR(Specificity or true negative rate)                            1.0           0.8           0.57143
TON(Test outcome negative)                                        9             9             6
TOP(Test outcome positive)                                        3             3             6
TP(True positive/hit)                                             3             1             3
TPR(Sensitivity, recall, hit rate, or true positive rate)         0.6           0.5           0.6
Y(Youden index)                                                   0.6           0.3           0.17143
dInd(Distance index)                                              0.4           0.53852       0.58624
sInd(Similarity index)                                            0.71716       0.61921       0.58547



### Matrix¶

In :
cm.print_matrix()

Predict  L1       L2       L3
Actual
L1       3        0        2

L2       0        1        1

L3       0        2        3


In :
cm.matrix

Out:
{'L1': {'L1': 3, 'L2': 0, 'L3': 2},
'L2': {'L1': 0, 'L2': 1, 'L3': 1},
'L3': {'L1': 0, 'L2': 2, 'L3': 3}}
In :
cm.print_matrix(one_vs_all=True,class_name = "L1")

Predict  L1       ~
Actual
L1       3        2

~        0        7



#### Parameters¶

1. one_vs_all : One-Vs-All mode flag (type : bool)
2. class_name : target class name for One-Vs-All mode (type : any valid type)
• Notice : one_vs_all option, new in version 1.4
• Notice : matrix() was renamed to print_matrix(), and the matrix attribute now returns the confusion matrix as a dict, in version 1.5
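
For readers who want to see how the nested-dict layout of matrix comes about, here is a minimal pure-Python sketch (not PyCM's internal code) that tallies paired label vectors into the same rows-are-actual, columns-are-predicted structure. The vectors below are hypothetical, chosen only to reproduce the L1/L2/L3 table above:

```python
def confusion_dict(actual, predicted):
    # Rows are actual classes, columns are predicted classes.
    classes = sorted(set(actual) | set(predicted))
    table = {a: {p: 0 for p in classes} for a in classes}
    for a, p in zip(actual, predicted):
        table[a][p] += 1
    return table

# Illustrative vectors chosen to reproduce the matrix printed above.
actual    = ["L1"] * 5 + ["L2"] * 2 + ["L3"] * 5
predicted = ["L1", "L1", "L1", "L3", "L3",   # L1 row: 3, 0, 2
             "L2", "L3",                     # L2 row: 0, 1, 1
             "L2", "L2", "L3", "L3", "L3"]   # L3 row: 0, 2, 3

print(confusion_dict(actual, predicted))
```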

### Normalized matrix¶

In :
cm.print_normalized_matrix()

Predict   L1        L2        L3
Actual
L1        0.6       0.0       0.4

L2        0.0       0.5       0.5

L3        0.0       0.4       0.6


In :
cm.normalized_matrix

Out:
{'L1': {'L1': 0.6, 'L2': 0.0, 'L3': 0.4},
'L2': {'L1': 0.0, 'L2': 0.5, 'L3': 0.5},
'L3': {'L1': 0.0, 'L2': 0.4, 'L3': 0.6}}
In :
cm.print_normalized_matrix(one_vs_all=True,class_name = "L1")

Predict   L1        ~
Actual
L1        0.6       0.4

~         0.0       1.0



#### Parameters¶

1. one_vs_all : One-Vs-All mode flag (type : bool)
2. class_name : target class name for One-Vs-All mode (type : any valid type)
• Notice : one_vs_all option, new in version 1.4
• Notice : normalized_matrix() was renamed to print_normalized_matrix(), and the normalized_matrix attribute now returns the normalized confusion matrix as a dict, in version 1.5
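
The normalized matrix is each row of the raw matrix divided by its row total (the number of samples in that actual class). A pure-Python sketch of the same computation, not PyCM's implementation:

```python
def normalize_rows(matrix):
    # Divide every cell by its row total so each actual-class row sums to 1.
    result = {}
    for actual, row in matrix.items():
        total = sum(row.values())
        result[actual] = {pred: round(count / total, 5)
                          for pred, count in row.items()}
    return result

# The raw matrix from the Matrix section above.
matrix = {'L1': {'L1': 3, 'L2': 0, 'L3': 2},
          'L2': {'L1': 0, 'L2': 1, 'L3': 1},
          'L3': {'L1': 0, 'L2': 2, 'L3': 3}}

print(normalize_rows(matrix))
```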

### Stat¶

In :
cm.stat()

Overall Statistics :

95% CI                                                            (0.30439,0.86228)
ACC Macro                                                         0.72222
AUNP                                                              0.68571
AUNU                                                              0.67857
Bennett S                                                         0.375
CBA                                                               0.47778
Chi-Squared                                                       6.6
Chi-Squared DF                                                    4
Conditional Entropy                                               0.97579
Cramer V                                                          0.5244
Cross Entropy                                                     1.58333
F1 Macro                                                          0.56515
F1 Micro                                                          0.58333
Gwet AC1                                                          0.38931
Hamming Loss                                                      0.41667
Joint Entropy                                                     2.45915
KL Divergence                                                     0.09998
Kappa                                                             0.35484
Kappa 95% CI                                                      (-0.07708,0.78675)
Kappa No Prevalence                                               0.16667
Kappa Standard Error                                              0.22036
Kappa Unbiased                                                    0.34426
Lambda A                                                          0.42857
Lambda B                                                          0.16667
Mutual Information                                                0.52421
NIR                                                               0.41667
Overall ACC                                                       0.58333
Overall CEN                                                       0.46381
Overall J                                                         (1.225,0.40833)
Overall MCC                                                       0.36667
Overall MCEN                                                      0.51894
Overall RACC                                                      0.35417
Overall RACCU                                                     0.36458
P-Value                                                           0.18926
PPV Macro                                                         0.61111
PPV Micro                                                         0.58333
Pearson C                                                         0.59568
Phi-Squared                                                       0.55
RCI                                                               0.35339
RR                                                                4.0
Reference Entropy                                                 1.48336
Response Entropy                                                  1.5
SOA1(Landis & Koch)                                               Fair
SOA2(Fleiss)                                                      Poor
SOA3(Altman)                                                      Fair
SOA4(Cicchetti)                                                   Poor
SOA5(Cramer)                                                      Relatively Strong
SOA6(Matthews)                                                    Weak
Scott PI                                                          0.34426
Standard Error                                                    0.14232
TPR Macro                                                         0.56667
TPR Micro                                                         0.58333
Zero-one Loss                                                     5

Class Statistics :

Classes                                                           L1            L2            L3
ACC(Accuracy)                                                     0.83333       0.75          0.58333
AGF(Adjusted F-score)                                             0.72859       0.62869       0.61009
AGM(Adjusted geometric mean)                                      0.85764       0.70861       0.58034
AM(Difference between automatic and manual classification)        -2            1             1
AUC(Area under the ROC curve)                                     0.8           0.65          0.58571
AUCI(AUC value interpretation)                                    Very Good     Fair          Poor
AUPR(Area under the PR curve)                                     0.8           0.41667       0.55
BCD(Bray-Curtis dissimilarity)                                    0.08333       0.04167       0.04167
BM(Informedness or bookmaker informedness)                        0.6           0.3           0.17143
CEN(Confusion entropy)                                            0.25          0.49658       0.60442
DOR(Diagnostic odds ratio)                                        None          4.0           2.0
DP(Discriminant power)                                            None          0.33193       0.16597
DPI(Discriminant power interpretation)                            None          Poor          Poor
ERR(Error rate)                                                   0.16667       0.25          0.41667
F0.5(F0.5 score)                                                  0.88235       0.35714       0.51724
F1(F1 score - harmonic mean of precision and sensitivity)         0.75          0.4           0.54545
F2(F2 score)                                                      0.65217       0.45455       0.57692
FDR(False discovery rate)                                         0.0           0.66667       0.5
FN(False negative/miss/type 2 error)                              2             1             2
FNR(Miss rate or false negative rate)                             0.4           0.5           0.4
FOR(False omission rate)                                          0.22222       0.11111       0.33333
FP(False positive/type 1 error/false alarm)                       0             2             3
FPR(Fall-out or false positive rate)                              0.0           0.2           0.42857
G(G-measure geometric mean of precision and sensitivity)          0.7746        0.40825       0.54772
GI(Gini index)                                                    0.6           0.3           0.17143
GM(G-mean geometric mean of specificity and sensitivity)          0.7746        0.63246       0.58554
IBA(Index of balanced accuracy)                                   0.36          0.28          0.35265
IS(Information score)                                             1.26303       1.0           0.26303
J(Jaccard index)                                                  0.6           0.25          0.375
LS(Lift score)                                                    2.4           2.0           1.2
MCC(Matthews correlation coefficient)                             0.68313       0.2582        0.16903
MCCI(Matthews correlation coefficient interpretation)             Moderate      Negligible    Negligible
MCEN(Modified confusion entropy)                                  0.26439       0.5           0.6875
MK(Markedness)                                                    0.77778       0.22222       0.16667
N(Condition negative)                                             7             10            7
NLR(Negative likelihood ratio)                                    0.4           0.625         0.7
NLRI(Negative likelihood ratio interpretation)                    Poor          Negligible    Negligible
NPV(Negative predictive value)                                    0.77778       0.88889       0.66667
OC(Overlap coefficient)                                           1.0           0.5           0.6
OOC(Otsuka-Ochiai coefficient)                                    0.7746        0.40825       0.54772
OP(Optimized precision)                                           0.58333       0.51923       0.55894
P(Condition positive or support)                                  5             2             5
PLR(Positive likelihood ratio)                                    None          2.5           1.4
PLRI(Positive likelihood ratio interpretation)                    None          Poor          Poor
POP(Population)                                                   12            12            12
PPV(Precision or positive predictive value)                       1.0           0.33333       0.5
PRE(Prevalence)                                                   0.41667       0.16667       0.41667
Q(Yule Q - coefficient of colligation)                            None          0.6           0.33333
RACC(Random accuracy)                                             0.10417       0.04167       0.20833
RACCU(Random accuracy unbiased)                                   0.11111       0.0434        0.21007
TN(True negative/correct rejection)                               7             8             4
TNR(Specificity or true negative rate)                            1.0           0.8           0.57143
TON(Test outcome negative)                                        9             9             6
TOP(Test outcome positive)                                        3             3             6
TP(True positive/hit)                                             3             1             3
TPR(Sensitivity, recall, hit rate, or true positive rate)         0.6           0.5           0.6
Y(Youden index)                                                   0.6           0.3           0.17143
dInd(Distance index)                                              0.4           0.53852       0.58624
sInd(Similarity index)                                            0.71716       0.61921       0.58547


In :
cm.stat(overall_param=["Kappa"],class_param=["ACC","AUC","TPR"])

Overall Statistics :

Kappa                                                             0.35484

Class Statistics :

Classes                                                           L1            L2            L3
ACC(Accuracy)                                                     0.83333       0.75          0.58333
AUC(Area under the ROC curve)                                     0.8           0.65          0.58571
TPR(Sensitivity, recall, hit rate, or true positive rate)         0.6           0.5           0.6


In :
cm.stat(overall_param=["Kappa"],class_param=["ACC","AUC","TPR"],class_name=["L1","L3"])

Overall Statistics :

Kappa                                                             0.35484

Class Statistics :

Classes                                                           L1            L3
ACC(Accuracy)                                                     0.83333       0.58333
AUC(Area under the ROC curve)                                     0.8           0.58571
TPR(Sensitivity, recall, hit rate, or true positive rate)         0.6           0.6


In :
cm.stat(summary=True)

Overall Statistics :

ACC Macro                                                         0.72222
F1 Macro                                                          0.56515
Kappa                                                             0.35484
Overall ACC                                                       0.58333
PPV Macro                                                         0.61111
SOA1(Landis & Koch)                                               Fair
TPR Macro                                                         0.56667
Zero-one Loss                                                     5

Class Statistics :

Classes                                                           L1            L2            L3
ACC(Accuracy)                                                     0.83333       0.75          0.58333
AUC(Area under the ROC curve)                                     0.8           0.65          0.58571
AUCI(AUC value interpretation)                                    Very Good     Fair          Poor
F1(F1 score - harmonic mean of precision and sensitivity)         0.75          0.4           0.54545
FN(False negative/miss/type 2 error)                              2             1             2
FP(False positive/type 1 error/false alarm)                       0             2             3
N(Condition negative)                                             7             10            7
P(Condition positive or support)                                  5             2             5
POP(Population)                                                   12            12            12
PPV(Precision or positive predictive value)                       1.0           0.33333       0.5
TN(True negative/correct rejection)                               7             8             4
TON(Test outcome negative)                                        9             9             6
TOP(Test outcome positive)                                        3             3             6
TP(True positive/hit)                                             3             1             3
TPR(Sensitivity, recall, hit rate, or true positive rate)         0.6           0.5           0.6



#### Parameters¶

1. overall_param : overall statistics names for print (type : list)
2. class_param : class statistics names for print (type : list)
3. class_name : class names for print (subset of classes) (type : list)
4. summary : summary mode flag (type : bool)
• Notice : this method was named cm.params() in versions prior to 0.2
• Notice : overall_param & class_param , new in version 1.6
• Notice : class_name , new in version 1.7
• Notice : summary , new in version 2.4
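
Several of the overall statistics can be derived directly from the confusion matrix. As a worked check (a pure-Python sketch, not PyCM's implementation): Overall ACC is the diagonal sum over the population, Overall RACC is the chance agreement computed from row and column totals, and Kappa combines the two:

```python
matrix = {'L1': {'L1': 3, 'L2': 0, 'L3': 2},
          'L2': {'L1': 0, 'L2': 1, 'L3': 1},
          'L3': {'L1': 0, 'L2': 2, 'L3': 3}}
classes = list(matrix)
n = sum(sum(row.values()) for row in matrix.values())   # population = 12

# Overall ACC: observed agreement (diagonal over population).
p_o = sum(matrix[c][c] for c in classes) / n

# Overall RACC: expected chance agreement from row and column totals.
row_totals = {c: sum(matrix[c].values()) for c in classes}
col_totals = {c: sum(matrix[r][c] for r in classes) for c in classes}
p_e = sum(row_totals[c] * col_totals[c] for c in classes) / n ** 2

# Unweighted Cohen's Kappa.
kappa = (p_o - p_e) / (1 - p_e)

print(round(p_o, 5), round(p_e, 5), round(kappa, 5))   # 0.58333 0.35417 0.35484
```

These reproduce the Overall ACC, Overall RACC, and Kappa values in the table above.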

### Compare report¶

In :
cp.print_report()

Best : cm2

Rank  Name   Class-Score    Overall-Score
1     cm2    7.05           2.55
2     cm3    4.55           1.98333


In :
print(cp)

Best : cm2

Rank  Name   Class-Score    Overall-Score
1     cm2    7.05           2.55
2     cm3    4.55           1.98333



## Save¶

In :
import os
if "Document_Files" not in os.listdir():
os.mkdir("Document_Files")


### .pycm file¶

In :
cm.save_stat(os.path.join("Document_Files","cm1"))

Out:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\Document_Files\\cm1.pycm',
'Status': True}
In :
cm.save_stat(os.path.join("Document_Files","cm1_filtered"),overall_param=["Kappa"],class_param=["ACC","AUC","TPR"])

Out:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\Document_Files\\cm1_filtered.pycm',
'Status': True}
In :
cm.save_stat(os.path.join("Document_Files","cm1_filtered2"),overall_param=["Kappa"],class_param=["ACC","AUC","TPR"],class_name=["L1"])

Out:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\Document_Files\\cm1_filtered2.pycm',
'Status': True}
In :
cm.save_stat(os.path.join("Document_Files","cm1_summary"),summary=True)

Out:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\Document_Files\\cm1_summary.pycm',
'Status': True}
In :
cm.save_stat("cm1asdasd/")

Out:
{'Message': "[Errno 2] No such file or directory: 'cm1asdasd/.pycm'",
'Status': False}

#### Parameters¶

1. name : output file name (type : str)
2. address : flag for address return (type : bool)
3. overall_param : overall statistics names for save (type : list)
4. class_param : class statistics names for save (type : list)
5. class_name : class names for print (subset of classes) (type : list)
6. summary : summary mode flag (type : bool)
• Notice : new in version 0.4
• Notice : overall_param & class_param , new in version 1.6
• Notice : class_name , new in version 1.7
• Notice : summary , new in version 2.4
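
As the last example shows, the save methods report failure through the returned dict rather than raising. A small helper (the name check_save is hypothetical, not part of PyCM) that turns a failed save into an exception:

```python
def check_save(result):
    """Raise if a PyCM save_* call reported failure; otherwise return the path.

    PyCM save methods return {'Status': bool, 'Message': str}; on success
    Message holds the output path, on failure it holds the error text.
    """
    if not result["Status"]:
        raise IOError(result["Message"])
    return result["Message"]

# Dict shapes mirror the outputs shown above (values illustrative).
ok = {"Status": True, "Message": "Document_Files/cm1.pycm"}
print(check_save(ok))

bad = {"Status": False,
       "Message": "[Errno 2] No such file or directory: 'cm1asdasd/.pycm'"}
try:
    check_save(bad)
except IOError as e:
    print("save failed:", e)
```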

### HTML¶

In :
cm.save_html(os.path.join("Document_Files","cm1"))

Out:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\Document_Files\\cm1.html',
'Status': True}
In :
cm.save_html(os.path.join("Document_Files","cm1_filtered"),overall_param=["Kappa"],class_param=["ACC","AUC","TPR"])

Out:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\Document_Files\\cm1_filtered.html',
'Status': True}
In :
cm.save_html(os.path.join("Document_Files","cm1_filtered2"),overall_param=["Kappa"],class_param=["ACC","AUC","TPR"],class_name=["L1"])

Out:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\Document_Files\\cm1_filtered2.html',
'Status': True}
In :
cm.save_html(os.path.join("Document_Files","cm1_colored"),color=(255, 204, 255))

Out:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\Document_Files\\cm1_colored.html',
'Status': True}
In :
cm.save_html(os.path.join("Document_Files","cm1_colored2"),color="Crimson")

Out:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\Document_Files\\cm1_colored2.html',
'Status': True}
In :
cm.save_html(os.path.join("Document_Files","cm1_normalized"),color="Crimson",normalize=True)

Out:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\Document_Files\\cm1_normalized.html',
'Status': True}
In :
cm.save_html(os.path.join("Document_Files","cm1_summary"),summary=True,normalize=True)

Out:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\Document_Files\\cm1_summary.html',
'Status': True}
In :
cm.save_html("cm1asdasd/")

Out:
{'Message': "[Errno 2] No such file or directory: 'cm1asdasd/.html'",
'Status': False}

#### Parameters¶

1. name : output file name (type : str)
2. address : flag for address return (type : bool)
3. overall_param : overall statistics names for save (type : list)
4. class_param : class statistics names for save (type : list)
5. class_name : class names for print (subset of classes) (type : list)
6. color : matrix color (R,G,B) (type : tuple/str), supports X11 color names
7. normalize : flag for saving normalized matrix instead of matrix (type : bool)
8. summary : summary mode flag (type : bool)
9. alt_link : alternative link for document flag (type : bool)
• Notice : new in version 0.5
• Notice : overall_param & class_param , new in version 1.6
• Notice : class_name , new in version 1.7
• Notice : color, new in version 1.8
• Notice : normalize, new in version 2.0
• Notice : summary and alt_link , new in version 2.4
• Notice : If PyCM website is not available, set alt_link = True

### CSV¶

In :
cm.save_csv(os.path.join("Document_Files","cm1"))

Out:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\Document_Files\\cm1.csv',
'Status': True}
In :
cm.save_csv(os.path.join("Document_Files","cm1_filtered"),class_param=["ACC","AUC","TPR"])

Out:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\Document_Files\\cm1_filtered.csv',
'Status': True}
In :
cm.save_csv(os.path.join("Document_Files","cm1_filtered2"),class_param=["ACC","AUC","TPR"],normalize=True)

Out:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\Document_Files\\cm1_filtered2.csv',
'Status': True}
In :
cm.save_csv(os.path.join("Document_Files","cm1_filtered3"),class_param=["ACC","AUC","TPR"],class_name=["L1"])

Out:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\Document_Files\\cm1_filtered3.csv',
'Status': True}
In :
cm.save_csv(os.path.join("Document_Files","cm1_summary"),summary=True,matrix_save=False)

Out:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\Document_Files\\cm1_summary.csv',
'Status': True}
In :
cm.save_csv("cm1asdasd/")

Out:
{'Message': "[Errno 2] No such file or directory: 'cm1asdasd/.csv'",
'Status': False}

#### Parameters¶

1. name : output file name (type : str)
2. address : flag for address return (type : bool)
3. class_param : class statistics names for save (type : list)
4. class_name : class names for print (subset of classes) (type : list)
5. matrix_save : flag for saving matrix in separate CSV file (type : bool)
6. normalize : flag for saving normalized matrix instead of matrix (type : bool)
7. summary : summary mode flag (type : bool)
• Notice : new in version 0.6
• Notice : class_param , new in version 1.6
• Notice : class_name , new in version 1.7
• Notice : matrix_save and normalize, new in version 1.9
• Notice : summary , new in version 2.4

### OBJ¶

In :
cm.save_obj(os.path.join("Document_Files","cm1"))

Out:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\Document_Files\\cm1.obj',
'Status': True}
In :
cm.save_obj(os.path.join("Document_Files","cm1_stat"),save_stat=True)

Out:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\Document_Files\\cm1_stat.obj',
'Status': True}
In :
cm.save_obj(os.path.join("Document_Files","cm1_no_vectors"),save_vector=False)

Out:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\Document_Files\\cm1_no_vectors.obj',
'Status': True}
In :
cm.save_obj("cm1asdasd/")

Out:
{'Message': "[Errno 2] No such file or directory: 'cm1asdasd/.obj'",
'Status': False}

#### Parameters¶

1. name : output file name (type : str)
2. address : flag for address return (type : bool)
3. save_stat : save statistics flag (type : bool)
4. save_vector : save vectors flag (type : bool)
• Notice : new in version 0.9.5
• Notice : save_vector and save_stat, new in version 2.3
• Notice : For more information visit Example 4

### comp¶

In :
cp.save_report(os.path.join("Document_Files","cp"))

Out:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\Document_Files\\cp.comp',
'Status': True}
In :
cp.save_report("cm1asdasd/")

Out:
{'Message': "[Errno 2] No such file or directory: 'cm1asdasd/.comp'",
'Status': False}

#### Parameters¶

1. name : output file name (type : str)
2. address : flag for address return (type : bool)
• Notice : new in version 2.0

## Input errors¶

In :
try:
cm2=ConfusionMatrix(y_actu, 2)
except pycmVectorError as e:
print(str(e))

The type of input vectors is assumed to be a list or a NumPy array

In :
try:
cm3=ConfusionMatrix(y_actu, [1,2,3])
except pycmVectorError as e:
print(str(e))

Input vectors must have same length

In :
try:
cm_4 = ConfusionMatrix([], [])
except pycmVectorError as e:
print(str(e))

Input vectors are empty

In :
try:
cm_5 = ConfusionMatrix([1,1,1,], [1,1,1,1])
except pycmVectorError as e:
print(str(e))

Input vectors must have same length

In :
try:
cm3=ConfusionMatrix(matrix={})
except pycmMatrixError as e:
print(str(e))

Input confusion matrix format error

In :
try:
cm_4=ConfusionMatrix(matrix={1:{1:2,"1":2},"1":{1:2,"1":3}})
except pycmMatrixError as e:
print(str(e))

Type of the input matrix classes is assumed  be the same

In :
try:
cm_5=ConfusionMatrix(matrix={1:{1:2}})
except pycmVectorError as e:
print(str(e))

Number of the classes is lower than 2

In :
try:
cp=Compare([cm2,cm3])
except pycmCompareError as e:
print(str(e))

The input type is considered to be dictionary but it's not!

In :
try:
cp=Compare({"cm1":cm,"cm2":cm2})
except pycmCompareError as e:
print(str(e))

The domain of all ConfusionMatrix objects must be same! The sample size or the number of classes are different.

In :
try:
cp=Compare({"cm1":[],"cm2":cm2})
except pycmCompareError as e:
print(str(e))

The input is considered to consist of pycm.ConfusionMatrix object but it's not!

In :
try:
cp=Compare({"cm2":cm2})
except pycmCompareError as e:
print(str(e))

Lower than two confusion matrices is given for comparing. The minimum number of confusion matrix for comparing is 2.

In :
try:
cp=Compare({"cm1":cm2,"cm2":cm3},by_class=True,weight={1:2,2:0})
except pycmCompareError as e:
print(str(e))

The weight type must be dictionary and also must be set for all classes.

• Notice : updated in version 2.0
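
The vector pre-conditions behind these exceptions can also be checked before constructing a ConfusionMatrix. A pure-Python sketch (validate_vectors is a hypothetical helper, not a PyCM API; PyCM itself also accepts NumPy arrays, which this sketch omits):

```python
def validate_vectors(actual, predicted):
    """Mirror PyCM's vector checks; return an error string, or None if valid."""
    if not isinstance(actual, (list, tuple)) or not isinstance(predicted, (list, tuple)):
        return "input vectors must be list-like"
    if len(actual) == 0 or len(predicted) == 0:
        return "input vectors are empty"
    if len(actual) != len(predicted):
        return "input vectors must have the same length"
    if len(set(actual) | set(predicted)) < 2:
        return "number of classes is lower than 2"
    return None

print(validate_vectors([2, 0, 2], 2))             # wrong type
print(validate_vectors([], []))                   # empty
print(validate_vectors([1, 1, 1], [1, 1, 1, 1]))  # length mismatch
print(validate_vectors([1, 1], [1, 1]))           # fewer than 2 classes
print(validate_vectors([2, 0, 2], [0, 0, 2]))     # None (valid)
```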

## Examples¶

### Example-7 (How to plot via seaborn+pandas)¶
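The body of this example did not survive conversion; the following is a minimal sketch of the idea, not the original notebook cell. It assumes a confusion matrix in the nested-dict layout that `ConfusionMatrix.table` returns (outer keys = actual classes, inner keys = predicted classes), here filled with the counts implied by the `y_actu`/`y_pred` vectors from the Usage section:

```python
import pandas as pd
import seaborn as sns
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt

# Confusion matrix as a nested dict: table[actual][predicted] = count
table = {0: {0: 3, 1: 0, 2: 0},
         1: {0: 0, 1: 1, 2: 2},
         2: {0: 2, 1: 1, 2: 3}}

# pd.DataFrame reads the outer keys as columns, so transpose to put
# actual classes on the rows and predicted classes on the columns
df = pd.DataFrame(table).T
ax = sns.heatmap(df, annot=True, fmt="d", cmap="Blues")
ax.set_xlabel("Predicted")
ax.set_ylabel("Actual")
plt.savefig("confusion_matrix.png")
```

With a real `ConfusionMatrix` object, `table` would simply be replaced by `cm.table`.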
