Consistency of augmentation graph and network approximability in contrastive learning

29.04.2025 11:30 - 13:00

Martina Neuman (University of Vienna)

Contrastive learning leverages data augmentation to learn feature representations without relying on large labeled datasets. However, despite its empirical success, the theoretical foundations of contrastive learning remain incomplete, with many essential guarantees left unaddressed, particularly the realizability assumption concerning the neural approximability of an optimal solution to the spectral contrastive loss. We overcome these limitations by analyzing the pointwise and spectral consistency of the augmentation graph Laplacian.
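
For context, the realizability assumption concerns the spectral contrastive loss of HaoChen et al. (2021), presumably the paradigm the abstract refers to. In that framework the loss is commonly written as follows, where x and x^+ are two augmentations of the same natural datum, x' is an independently drawn augmented sample, and f is the feature map (the notation here is illustrative, not taken from the talk):

% Illustrative form of the spectral contrastive loss (HaoChen et al., 2021);
% notation is an assumption of this announcement, not the speaker's.
\[
  \mathcal{L}(f) \;=\; -2\,\mathbb{E}_{(x,\,x^+)}\!\bigl[\, f(x)^\top f(x^+) \,\bigr]
  \;+\; \mathbb{E}_{x,\,x'}\!\bigl[\, \bigl( f(x)^\top f(x') \bigr)^{2} \,\bigr].
\]

Minimizers of this loss recover, up to rotation, the top eigenvectors of the normalized adjacency matrix of the augmentation graph, which is what ties the loss to the graph Laplacian analyzed in the talk.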

We establish that, under specific conditions on data generation and graph connectivity, as the augmented dataset size increases, the augmentation graph Laplacian converges to a weighted Laplace-Beltrami operator on the natural data manifold. These consistency results in turn give rise to a robust framework for establishing neural approximability, directly resolving the realizability assumption in the current paradigm.
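
This convergence has the flavor of standard pointwise-consistency results in manifold learning (e.g., Belkin and Niyogi; Hein, Audibert, and von Luxburg). As a sketch only: for a graph Laplacian L_{n,\varepsilon} built from n samples drawn with density p on the data manifold M, with kernel bandwidth \varepsilon, such statements typically take the form below; the scaling, the constant c, and the density-drift term are assumptions tied to one particular normalization (the random-walk Laplacian), not the paper's exact theorem:

% Generic pointwise consistency sketch (random-walk normalization assumed);
% constants and the joint scaling of n and \varepsilon are omitted assumptions.
\[
  \frac{1}{\varepsilon}\,\bigl(L_{n,\varepsilon} f\bigr)(x)
  \;\longrightarrow\;
  c\,\Bigl( \Delta_{\mathcal{M}} f(x)
  + \tfrac{2}{p(x)}\,\bigl\langle \nabla p(x), \nabla f(x) \bigr\rangle \Bigr)
  \qquad (n \to \infty,\ \varepsilon \to 0),
\]

where \Delta_{\mathcal{M}} is the Laplace-Beltrami operator on M and the drift term \tfrac{2}{p}\langle \nabla p, \nabla f \rangle is what makes the limit a weighted Laplace-Beltrami operator.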

The talk is based on the paper of the same title. I will begin with brief introductions to manifold learning and the current paradigm of contrastive learning. I will then present our manifold learning analysis used to address the realizability assumption in contrastive learning.  

Organiser:
J. L. Romero and J. T. van Velthoven
Location:
SR9 (Kolingasse)