Preprint, working paper. Year: 2022

Self-supervised learning with rotation-invariant kernels

Abstract

A major paradigm for learning image representations in a self-supervised manner is to learn a model that is invariant to some predefined image transformations (cropping, blurring, color jittering, etc.), while regularizing the embedding distribution to avoid learning a degenerate solution. Our first contribution is a general kernel framework for designing a generic regularization loss that encourages the embedding distribution to be close to the uniform distribution on the hypersphere, with respect to the maximum mean discrepancy pseudometric. Our framework uses rotation-invariant kernels defined on the hypersphere, also known as dot-product kernels. Our second contribution is to show that this flexible kernel approach encompasses several existing self-supervised learning methods, including uniformity-based and information-maximization methods. Finally, by empirically exploring several kernel choices, our experiments demonstrate that a truncated rotation-invariant kernel yields results competitive with state-of-the-art methods, and we show practical situations where our method benefits from the kernel trick to reduce computational complexity.
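To make the regularization idea concrete, below is a minimal PyTorch sketch, not the paper's implementation; the function name mmd_uniformity_loss and the Gaussian kernel choice are illustrative assumptions. It relies on a standard property: for a rotation-invariant kernel K(x, y) = φ(⟨x, y⟩), the expectation of K(x, u) over u drawn uniformly on the sphere does not depend on x, so minimizing the squared MMD between the embedding distribution and the uniform distribution reduces, up to additive constants, to minimizing the mean pairwise kernel value within a batch.

```python
import torch
import torch.nn.functional as F

def mmd_uniformity_loss(z: torch.Tensor,
                        kernel=lambda c: torch.exp(2.0 * (c - 1.0))) -> torch.Tensor:
    """Sketch of an MMD-based uniformity regularizer on the hypersphere.

    z: (n, d) batch of embeddings. `kernel` maps cosine similarities
    <x, y> to kernel values K(x, y) = phi(<x, y>), i.e. a rotation-invariant
    (dot-product) kernel. The default is the Gaussian kernel on the sphere:
    exp(2(c - 1)) = exp(-||x - y||^2) for unit vectors, one illustrative
    choice among the kernels the framework allows.
    """
    z = F.normalize(z, dim=1)                      # project embeddings onto the unit sphere
    gram = kernel(z @ z.t())                       # n x n kernel matrix phi(<z_i, z_j>)
    n = z.shape[0]
    off_diag = gram.sum() - gram.diagonal().sum()  # drop self-similarity terms
    return off_diag / (n * (n - 1))                # mean pairwise kernel value

# Usage: penalize a batch of 256 embeddings in dimension 128.
loss = mmd_uniformity_loss(torch.randn(256, 128))
```

In a full training loop, this penalty would be added to an invariance term (e.g., matching the embeddings of two augmented views) so that representations stay informative while spreading out over the sphere.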
Main file: main.pdf (431.94 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03738466 , version 1 (26-07-2022)
hal-03738466 , version 2 (03-10-2022)
hal-03738466 , version 3 (11-10-2022)
hal-03738466 , version 4 (06-03-2023)

Identifiers

HAL Id: hal-03738466

Cite

Léon Zheng, Gilles Puy, Elisa Riccietti, Patrick Pérez, Rémi Gribonval. Self-supervised learning with rotation-invariant kernels. 2022. ⟨hal-03738466v1⟩
198 views
254 downloads

