A decision-theoretic characterization of perfect calibration is that an agent seeking to minimize any proper loss in expectation cannot improve their outcome by post-processing a perfectly calibrated predictor. Hu and Wu (FOCS'24) use this to define an approximate calibration measure called calibration decision loss (CDL), which measures the maximal improvement achievable by any post-processing under any proper loss. Unfortunately, CDL turns out to be intractable to even weakly approximate in the offline setting, given black-box access to the predictions and labels. We propose circumventing this by restricting attention to structured families of post-processing functions K. We define the calibration decision loss relative to K, denoted CDL_K, where we consider all proper losses but restrict post-processings to a structured family K. We develop a comprehensive theory of when CDL_K is information-theoretically and computationally tractable, and use it to prove both upper and lower bounds for natural classes K. In addition to introducing new definitions and algorithmic techniques to the theory of calibration for decision making, our results give rigorous guarantees for some widely used recalibration procedures in machine learning.
- † University of Texas at Austin
- ‡ Harvard University
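To illustrate the notion of improvement by post-processing over a structured family, here is a toy Python sketch (not from the paper). It measures how much a restricted family of post-processings can reduce the average squared loss, a single proper loss standing in for the supremum over all proper losses in the actual CDL_K definition. The family of constant additive shifts is a hypothetical choice made only for illustration.

```python
import numpy as np

def squared_loss(pred, label):
    # Squared loss is one example of a proper loss.
    return (pred - label) ** 2

def improvement_over_family(preds, labels, family):
    """Best reduction in average squared loss achievable by remapping
    predictions with any post-processing function in `family`.
    (Illustrative stand-in for CDL_K with a single fixed proper loss.)"""
    base = np.mean(squared_loss(preds, labels))
    best = min(
        np.mean(squared_loss(np.clip(f(preds), 0.0, 1.0), labels))
        for f in family
    )
    return max(0.0, base - best)

# A tiny structured family K: constant additive shifts of the prediction
# (a hypothetical choice of K, used here only for illustration).
shifts = [lambda p, d=d: p + d for d in np.linspace(-0.2, 0.2, 41)]

# A miscalibrated predictor: labels are fair coin flips, but the
# predictions are biased upward around 0.6 instead of 0.5.
rng = np.random.default_rng(0)
y = rng.binomial(1, 0.5, size=10_000)
p = np.clip(0.6 + 0.05 * rng.standard_normal(10_000), 0.0, 1.0)

imp = improvement_over_family(p, y, shifts)
print(round(imp, 4))  # positive: shifting the biased predictions helps
```

For a perfectly calibrated predictor, no post-processing in any family containing the identity yields an improvement, so this quantity is zero; the biased predictor above gains roughly the square of its bias by a corrective shift.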
