Minimum Description Length and Generalization Guarantees for Representation Learning

Conference paper, 2024

Abstract

A major challenge in designing efficient statistical supervised learning algorithms is finding representations that perform well not only on available training samples but also on unseen data. While the study of representation learning has spurred much interest, most existing approaches are heuristic, and very little is known about theoretical generalization guarantees. In this paper, we establish a compressibility framework that allows us to derive upper bounds on the generalization error of a representation learning algorithm in terms of the "Minimum Description Length" (MDL) of the labels or of the latent variables (representations). Rather than the mutual information between the encoder's input and the representation, which the related literature often takes to reflect an algorithm's generalization capability but which in fact falls short of doing so, our new bounds involve the "multi-letter" relative entropy between the distribution of the representations (or labels) of the training and test sets and a fixed prior. In particular, these new bounds reflect the structure of the encoder and are not vacuous for deterministic algorithms. Our compressibility approach, which is information-theoretic in nature, builds on the Blum-Langford approach to PAC-MDL bounds and introduces two essential ingredients: block coding and lossy compression. The latter allows our approach to subsume so-called geometrical compressibility as a special case. To the best of our knowledge, the established generalization bounds are the first of their kind for Information Bottleneck (IB) type encoders and for representation learning. Finally, we put part of the theory to use by introducing a new data-dependent prior; numerical simulations illustrate the advantages of such well-chosen priors over the classical priors used in IB.
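
For context on the distinction drawn above, two standard forms of information-theoretic generalization bounds from the prior literature may help; these are well-known results quoted up to the usual variants, not the bounds established in this paper. The first, due to Xu and Raginsky, is controlled by the mutual information I(S; W) between the training sample S and the output hypothesis W; the second, a McAllester-type PAC-Bayes bound, is instead controlled by a relative entropy between a posterior Q and a fixed prior P:

    % Mutual-information bound (Xu & Raginsky, 2017), sigma-subgaussian loss:
    \left| \mathbb{E}\left[ \mathrm{gen}(S, W) \right] \right|
      \le \sqrt{ \frac{2\sigma^{2}}{n} \, I(S; W) }

    % McAllester-type PAC-Bayes bound, losses in [0, 1], w.p. at least 1 - \delta:
    \mathbb{E}_{W \sim Q}\left[ \mathrm{gen}(S, W) \right]
      \le \sqrt{ \frac{ \mathrm{KL}(Q \,\|\, P) + \ln\!\left( 2\sqrt{n} / \delta \right) }{ 2n } }

The bounds described in the abstract are of the second flavour, with the relative entropy taken between the (multi-letter) distribution of the representations or labels of the training and test sets and a fixed prior, rather than a mutual information involving the encoder's input.

The closing sentence mentions a new data-dependent prior evaluated against the classical priors used in IB. The sketch below is purely illustrative (hypothetical numbers, a hypothetical helper kl_diag_gaussians, and a simple moment-matched Gaussian prior; it is not the construction or the experiments of the paper): it compares the average KL of Gaussian representation posteriors to a fixed standard-normal prior, the classical variational-IB choice, against their KL to a Gaussian prior fitted to the batch's own latent statistics.

    import numpy as np

    def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
        """Closed-form KL( N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)) ), summed over dimensions."""
        return 0.5 * np.sum(
            np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0,
            axis=-1,
        )

    rng = np.random.default_rng(0)
    # Hypothetical per-sample posterior parameters of a stochastic encoder:
    # a batch of 128 inputs, each mapped to a 16-dimensional Gaussian representation.
    mu_q = rng.normal(loc=1.5, scale=0.5, size=(128, 16))
    var_q = np.full((128, 16), 0.25)

    # Fixed prior N(0, I): the classical choice in (variational) IB.
    kl_fixed = kl_diag_gaussians(mu_q, var_q, 0.0, 1.0).mean()

    # Data-dependent prior: a diagonal Gaussian moment-matched to the batch's
    # aggregate latent distribution (law of total variance for the variances).
    mu_p = mu_q.mean(axis=0)
    var_p = var_q.mean(axis=0) + mu_q.var(axis=0)
    kl_dep = kl_diag_gaussians(mu_q, var_q, mu_p, var_p).mean()

    print(f"average KL to fixed N(0, I) prior:   {kl_fixed:.3f}")
    print(f"average KL to data-dependent prior:  {kl_dep:.3f}")

Since the moment-matched prior sits closer to the batch's aggregate distribution over representations, the second KL is markedly smaller, which mirrors why a well-chosen data-dependent prior can tighten KL-based generalization bounds.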

Dates and versions

hal-04456954, version 1 (14-02-2024)

Identifiers

HAL Id: hal-04456954

Cite

Milad Sefidgaran, Abdellatif Zaidi, Piotr Krasnowski. Minimum Description Length and Generalization Guarantees for Representation Learning. The Thirty-Seventh Annual Conference on Neural Information Processing Systems (NeurIPS), 2023, New Orleans, United States. ⟨hal-04456954⟩