A tutorial for the ECML-PKDD 2020 conference in Ghent, Belgium, from the 14th to the 18th of September 2020. Full information will be provided at classifier-calibration.github.io
This tutorial introduces fundamental concepts in classifier calibration and gives an overview of recent progress in the development and evaluation of calibration methods. Participants will learn why some training algorithms produce calibrated probability estimates and others don't, and how to apply post-hoc calibration techniques to improve probability estimates, both in theory and in practice, the latter in a dedicated hands-on part. Participants will furthermore learn how to test whether a classifier's outputs are calibrated, and how to assess and evaluate probabilistic classifiers using a range of evaluation metrics and exploratory graphical tools. Additionally, participants will gain a basic appreciation of the more abstract perspective provided by proper scoring rules, and learn about related topics and some open problems in the field.
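As a small taste of the hands-on part, the sketch below illustrates post-hoc calibration and probabilistic evaluation in Python; the use of scikit-learn, the synthetic dataset, and the choice of isotonic regression as the calibration method are illustrative assumptions, not the tutorial's actual materials.

```python
# Minimal sketch (assumption: scikit-learn, not necessarily the tutorial's own code).
# Fit a naive Bayes classifier, apply post-hoc calibration via isotonic regression,
# and compare the probability estimates with two proper scoring rules.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import brier_score_loss, log_loss

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Uncalibrated model: naive Bayes often produces over-confident probabilities.
raw = GaussianNB().fit(X_train, y_train)

# Post-hoc calibration: isotonic regression fitted on cross-validated predictions.
calibrated = CalibratedClassifierCV(GaussianNB(), method="isotonic", cv=5)
calibrated.fit(X_train, y_train)

for name, model in [("uncalibrated", raw), ("isotonic", calibrated)]:
    p = model.predict_proba(X_test)[:, 1]
    print(f"{name:>12}: Brier = {brier_score_loss(y_test, p):.4f}, "
          f"log-loss = {log_loss(y_test, p):.4f}")
```

Lower Brier score and log-loss on held-out data typically indicate better-calibrated probability estimates; the tutorial covers these metrics, reliability diagrams, and further calibration methods in detail.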
Peter Flach, University of Bristol, UK, [email protected], www.cs.bris.ac.uk/~flach/
Miquel Perello-Nieto, University of Bristol, UK, [email protected], https://www.perellonieto.com/
Hao Song, University of Bristol, UK, [email protected]
Meelis Kull, University of Tartu, Estonia, [email protected]
Telmo Silva Filho, Federal University of Paraiba, Brazil, [email protected]