Patrick Rebeschini (University of Oxford)

27 October 2023 @ 12:00 - 13:00

Event Category:
Academic Events

Algorithmic Stability, Generalization, and Privacy for Diffusion Models

Abstract. The acclaimed success of diffusion models sparks a need for theoretical guarantees that explain the generalization capability of such distribution learning methods, despite the high dimensionality of the training data and sampling procedures. Prior efforts accounting for the empirical nature of score matching mechanisms typically hinge on uniform learning strategies, which provide error bounds that are algorithm-agnostic and often depend exponentially on the problem dimension. In this work, we move away from uniform learning approaches and develop algorithm-dependent generalization error bounds based on the framework of algorithmic stability from supervised learning. We introduce the notion of ‘score stability’, which encapsulates the score learning process’s susceptibility to perturbations in the training data. We show that score-stable algorithms generalize well and generate differentially-private samples. To showcase the applicability of our framework, we consider score functions learned via canonical gradient-based optimization procedures, and establish generalization bounds that display a linear dependence on the dimension. Our bounds are stated with respect to the reverse KL divergence and the maximum mean discrepancy, notions of error commonly used in the analysis and training of diffusion models.
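For readers unfamiliar with the stability framework referenced above, the following is an illustrative sketch (not the paper's exact definition) of how a stability condition for score learning might be stated, modelled on classical uniform stability. Here the algorithm maps a training set S of n samples to a learned score function s_S, and S' denotes a neighbouring dataset differing from S in one sample:

```latex
% Illustrative sketch of a stability condition for score learning,
% in the spirit of Bousquet--Elisseeff uniform stability; the paper's
% actual definition of `score stability' may differ.
\[
  \sup_{S \simeq S'} \; \sup_{t \in [0,T],\, x \in \mathbb{R}^d}
  \bigl\| s_{S}(x, t) - s_{S'}(x, t) \bigr\| \;\le\; \varepsilon_n,
\]
% where S \simeq S' ranges over datasets of size n differing in a
% single training point, and \varepsilon_n is the stability parameter.
```

In the classical supervised-learning setting, a stability parameter ε_n that vanishes as n grows yields generalization bounds of the same order; the abstract indicates an analogous mechanism at work here, with the bounds expressed in reverse KL divergence and maximum mean discrepancy rather than excess risk.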

Joint with Tyler Farghly, George Deligiannidis, and Arnaud Doucet.