
sklearn.metrics.coverage_error

sklearn.metrics.coverage_error(y_true, y_score, sample_weight=None)[source]

Coverage error measure

Compute how far we need to go through the ranked scores to cover all true labels. The best value is equal to the average number of labels in y_true per sample.

Ties in y_score are broken by giving the maximal rank that would have been assigned to all tied values.
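As a sketch of the quantity being averaged (following the multilabel ranking notation used in the User Guide, with \hat{f} standing for y_score):

\text{coverage}(y, \hat{f}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} \max_{j : y_{ij} = 1} \text{rank}_{ij},
\qquad
\text{rank}_{ij} = \left|\left\{ k : \hat{f}_{ik} \ge \hat{f}_{ij} \right\}\right|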

Read more in the User Guide.

Parameters:

y_true : array, shape = [n_samples, n_labels]

True binary labels in binary indicator format.

y_score : array, shape = [n_samples, n_labels]

Target scores, which can be probability estimates of the positive class, confidence values, or binary decisions.

sample_weight : array-like of shape = [n_samples], optional

Sample weights.

Returns:

coverage_error : float

References

[R169] Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In Data mining and knowledge discovery handbook (pp. 667-685). Springer US.
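
Examples

A minimal doctest-style sketch of how the metric behaves; the arrays below are illustrative inputs, not taken from this page. Each sample has one true label, and the coverage for a sample is how deep in the score ranking that label sits.

>>> import numpy as np
>>> from sklearn.metrics import coverage_error
>>> # Two samples, three labels, in binary indicator format.
>>> y_true = np.array([[1, 0, 0],
...                    [0, 0, 1]])
>>> # Higher scores rank a label earlier.
>>> y_score = np.array([[0.75, 0.5, 1.0],
...                     [1.0, 0.2, 0.1]])
>>> coverage_error(y_true, y_score)
2.5

For the first sample the true label is ranked second; for the second sample it is ranked third, so the average coverage is (2 + 3) / 2 = 2.5. The best achievable value here would be 1.0, the average number of true labels per sample.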