Rapture of the deep: highs and lows of sparsity in a world of depths
Abstract
Promoting sparsity in deep networks is a natural way to control their complexity, and a timely endeavor, since practical neural model sizes have grown to unprecedented levels. The lessons learned from sparsity in linear inverse problems also hold the promise of many other benefits beyond such computational aspects, from statistical significance to explainability. Can these promises be fulfilled? Can we safely leverage the know-how of sparsity-promoting regularizers for inverse problems to harness sparsity in deeper contexts, linear or not? This article surveys the curses and blessings of deep sparsity. After recalling the main lessons from inverse problems, we tour a number of results that challenge their immediate deep extensions, from both a mathematical and a computational perspective. In particular, we highlight that ℓ1 regularization does not always lead to sparsity, and that optimization with a prescribed set of allowed nonzero coefficients can be NP-hard. We emphasize the role of rescaling invariances in these phenomena, and the need to favor structured sparsity in order to keep sparse network training problems under control, ensure their stability, and actually enable efficient network implementations on GPUs. We finally outline the promises and challenges of a flexible family of so-called Kronecker sparsity structures, which extend the classical butterfly structure, appear in many classical scientific computing applications, and have recently emerged in deep learning as well.
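As a minimal illustration of the rescaling invariance mentioned above (a sketch with assumed toy dimensions, not code from the article), the following NumPy snippet rescales a two-layer ReLU network neuron by neuron: the function computed by the network is unchanged, while the ℓ1 norm of its parameters changes freely, which is one way to see why a plain ℓ1 penalty need not produce sparse weights in this deep setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network f(x) = W2 @ relu(W1 @ x) (sizes are arbitrary).
W1 = rng.standard_normal((16, 8))
W2 = rng.standard_normal((4, 16))
x = rng.standard_normal(8)

def relu(z):
    return np.maximum(z, 0.0)

def f(A, B, v):
    return B @ relu(A @ v)

# Neuron-wise rescaling: multiply row i of W1 by c_i > 0 and divide
# column i of W2 by c_i. Positive homogeneity of ReLU implies the
# network function is unchanged...
c = rng.uniform(0.1, 10.0, size=16)
W1_resc = c[:, None] * W1
W2_resc = W2 / c[None, :]

print(np.allclose(f(W1, W2, x), f(W1_resc, W2_resc, x)))  # True: same function
# ...while the l1 norm of the parameters is not: the penalty is not
# determined by the function the network computes.
def l1(*mats):
    return sum(np.abs(M).sum() for M in mats)

print(l1(W1, W2), l1(W1_resc, W2_resc))  # generally different values
```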
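The butterfly structure can likewise be illustrated on a standard example (the Walsh-Hadamard transform, chosen here for concreteness rather than taken from the article): the dense 2^n x 2^n Hadamard matrix factors into n sparse "butterfly" factors, each with only two nonzeros per row, built as Kronecker products of identities with a fixed 2 x 2 block.

```python
import numpy as np
from functools import reduce

H2 = np.array([[1.0, 1.0], [1.0, -1.0]])  # 2x2 "butterfly" block

def butterfly_factors(n):
    """Factors B_k = I_{2^(k-1)} (x) H2 (x) I_{2^(n-k)}, k = 1..n.
    Each B_k has only 2 nonzeros per row, yet their product equals the
    dense 2^n x 2^n Walsh-Hadamard matrix H2 (x) ... (x) H2."""
    return [np.kron(np.kron(np.eye(2 ** (k - 1)), H2), np.eye(2 ** (n - k)))
            for k in range(1, n + 1)]

n = 4
factors = butterfly_factors(n)
product_of_sparse = reduce(np.matmul, factors)   # product of n sparse factors
dense_hadamard = reduce(np.kron, [H2] * n)       # H2 (x) H2 (x) ... (x) H2

print(np.allclose(product_of_sparse, dense_hadamard))  # True
print([int(np.count_nonzero(B)) for B in factors])     # 2 * 2^n nonzeros each
```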