Invertibility and Stability in Neural Networks: Tools from Frame Theory

26.08.2025 15:00 - 16:30

Daniel Haider (University of Vienna)

Abstract:
This thesis investigates the invertibility and stability of neural network layers in different
scenarios using tools from frame theory. Based on four research papers, it follows two main
threads. The first thread focuses on layers with ReLU activation. For a given bias vector and
a data domain, we introduce a family of frames that characterize when the corresponding
ReLU layer is injective on that domain. Alternative characterizations arise when fixing
either the bias or the data domain. Based on these insights, we derive practical sufficient
criteria, provide reconstruction formulas, and prove a novel lower Lipschitz stability bound
for ReLU layers that is optimal up to a constant. The second thread addresses the stability
of the linear component of convolutional layers for audio input. We present a natural way
of estimating frame bounds and characterizing tightness via the aliasing terms in the
associated Walnut representation. This leads to a systematic approach to improving
stability, in which tightness is approximated by suppressing aliasing. For randomly initialized
filter kernels, we formalize how signal characteristics can influence stability, and describe
the statistical behavior of the frame bounds and the aliasing terms.
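The stride-1 (undecimated) special case gives a concrete feel for these frame bounds: without decimation the aliasing terms in the Walnut representation vanish, the DFT diagonalizes the analysis operator, and the bounds reduce to the extrema of the filters' summed power spectra. A minimal NumPy sketch of this special case (the random Gaussian filters and all sizes are illustrative assumptions, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
N, J, L = 64, 8, 9  # signal length, number of filters, kernel length (illustrative)
kernels = rng.standard_normal((J, L))

# Frequency responses of the filters, zero-padded to the signal length.
H = np.fft.fft(kernels, n=N, axis=1)

# With stride 1 (no decimation) the analysis operator is diagonalized by the
# DFT, so the frame bounds A, B are the extrema of the summed power spectra.
power = (np.abs(H) ** 2).sum(axis=0)
A, B = power.min(), power.max()

# Cross-check: stack the circulant matrices of all filters and compare the
# extreme squared singular values of the explicit operator with A and B.
def circulant(h):
    h = np.pad(h, (0, N - len(h)))
    return np.stack([np.roll(h, s) for s in range(N)], axis=1)

T = np.vstack([circulant(k) for k in kernels])   # shape (J*N, N)
s = np.linalg.svd(T, compute_uv=False)
assert np.allclose(s[-1] ** 2, A) and np.allclose(s[0] ** 2, B)
print(f"frame bounds: A = {A:.3f}, B = {B:.3f}, ratio B/A = {B / A:.3f}")
```

With decimation (stride greater than 1) the DFT no longer diagonalizes the operator, and the off-diagonal aliasing terms of the Walnut representation enter the bounds; that is the regime the abstract refers to.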


Online:
univienna.zoom.us/j/67839459397
WD.1
Meeting ID: 678 3945 9397
Passcode: 973318

Organiser:

Faculty of Mathematics, Dean Radu Ioan Boţ

Location:
Online