
Compression at the service of learning: a case study for the Guaranteed Error Machine

Garatti S.;
2022-01-01

Abstract

The scenario approach is a technique for data-driven decision making that has found application in a variety of fields, including systems and control design. Although initially conceived in the context of worst-case optimization, the scenario approach has progressively evolved into a general methodology that allows one to control the risk of solutions designed from data according to complex decision processes. In a recent contribution, the theory of compression schemes (a paradigm that plays a fundamental role in statistical learning theory) has been deeply revisited in the wake of the scenario approach, leading to unprecedentedly sharp generalization and risk quantification results. In this paper, we build on these achievements to gain insight into a classification paradigm called the Guaranteed Error Machine (GEM). First, by leveraging the theory of reproducing kernel Hilbert spaces, we introduce a new, more flexible GEM algorithm, which allows for complex classification geometries. The proposed scheme is then shown to fit into the new compression theory, from which new sharp results for the probability of GEM misclassification are derived in a distribution-free context.
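The paper itself is not reproduced on this page, but the traits of GEM mentioned in the abstract (a classifier that may also answer "unknown", decision regions shaped through reproducing-kernel feature maps, and a small set of retained samples acting as a compression set) can be conveyed with a toy sketch. The Python snippet below is only an illustrative approximation under assumptions of our own (Gaussian kernel, a fixed similarity threshold, a crude greedy retention rule); it is not the algorithm proposed in the paper and carries no risk guarantees.

import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Gaussian (RBF) kernel; the bandwidth gamma is an assumption of this sketch.
    return np.exp(-gamma * np.sum((x - y) ** 2))

class ToyKernelGEM:
    """Toy ternary classifier: it predicts a label only inside kernel-induced
    regions around a few retained samples, otherwise it answers 'unknown'."""

    def __init__(self, gamma=1.0, threshold=0.5):
        self.gamma = gamma
        self.threshold = threshold
        self.prototypes = []  # retained (sample, label) pairs, playing the role of a compression set

    def fit(self, X, y):
        # Crude greedy rule (an assumption of this sketch): retain a sample
        # whenever the classifier built so far does not already label it correctly.
        for xi, yi in zip(X, y):
            if self.predict_one(xi) != yi:
                self.prototypes.append((xi, yi))
        return self

    def predict_one(self, x):
        # Label of the most similar prototype, or 'unknown' if none is similar enough.
        best_sim, best_label = 0.0, "unknown"
        for p, lbl in self.prototypes:
            sim = rbf_kernel(x, p, self.gamma)
            if sim > best_sim:
                best_sim, best_label = sim, lbl
        return best_label if best_sim >= self.threshold else "unknown"

# Minimal usage on synthetic data (labels +1/-1 split by a linear boundary).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
clf = ToyKernelGEM(gamma=2.0, threshold=0.6).fit(X, y)
print(clf.predict_one(np.array([0.5, 0.5])))    # typically a definite label
print(clf.predict_one(np.array([10.0, 10.0])))  # typically 'unknown' (far from all data)

In the GEM framework the number of retained samples is what the compression-theoretic results quoted in the abstract act upon: the misclassification probability of the returned classifier is bounded, in a distribution-free way, in terms of the size of the compression set.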
Year: 2022
Published in: Proceedings of the IEEE Conference on Decision and Control
ISBN: 978-1-6654-6761-2
Files in this record:
CDC2023-Garatti-Campi.pdf: Publisher's version, Adobe PDF, 1.12 MB (restricted access)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1233542
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: 0