A frame is a family of vectors in a Hilbert space such that each element of the space can be stably reconstructed from its frame coefficients. Dually, each signal can be represented as a linear combination of the frame elements. Unlike for an orthonormal basis, however, the coefficients in this linear combination need not be unique.
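For reference, the stability requirement has a standard quantitative form (the notation below is the usual one, not taken from the talk): a family (\varphi_i)_{i \in I} in a Hilbert space \mathcal{H} is a frame if

\[
A \, \|f\|^2 \;\le\; \sum_{i \in I} |\langle f, \varphi_i \rangle|^2 \;\le\; B \, \|f\|^2
\qquad \text{for all } f \in \mathcal{H},
\]

with constants 0 < A \le B < \infty (the frame bounds) independent of f; the inner products \langle f, \varphi_i \rangle are the frame coefficients.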
For a general frame and a general element of the Hilbert space, the frame coefficients can decay slowly; all one knows is that they are square summable. Yet, in applications ranging from signal compression to the discretization of differential equations, it is desirable that most of the information about a “sufficiently nice” function be concentrated in just a few coefficients, meaning that the coefficients should decay quickly; such signals are called compressible with respect to the frame under consideration. To be useful, compressibility should be preserved under common signal-processing operations such as “analyze → threshold → reconstruct”.
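As a toy illustration of this pipeline (a minimal sketch only; the orthonormal discrete Fourier basis used here is the simplest example of a tight frame, and the signal and threshold are chosen purely for illustration):

import numpy as np

# A smooth "nice" signal: a Gaussian bump on [0, 1).
t = np.linspace(0, 1, 256, endpoint=False)
signal = np.exp(-50 * (t - 0.5) ** 2)

# Analyze: compute coefficients in an orthonormal Fourier basis.
coeffs = np.fft.fft(signal, norm="ortho")

# Threshold: discard all coefficients below a relative cutoff.
cutoff = 1e-3 * np.max(np.abs(coeffs))
kept = np.where(np.abs(coeffs) >= cutoff, coeffs, 0)

# Reconstruct from the few surviving coefficients.
approx = np.fft.ifft(kept, norm="ortho").real

print("coefficients kept:", np.count_nonzero(kept), "of", coeffs.size)
print("relative L2 error:", np.linalg.norm(signal - approx) / np.linalg.norm(signal))

Because the signal is smooth, its Fourier coefficients decay rapidly, so only a handful survive the thresholding while the reconstruction error stays small.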
In the first part of the talk, I will present a large family of frames satisfying these properties. This family includes widely used systems such as wavelets, curvelets, shearlets, and Gabor frames. The theory is based on associating to each class of frames a certain family of smoothness spaces, just as Gabor frames are connected to modulation spaces and wavelet systems to Besov spaces. We will also discuss the embedding theory of these smoothness spaces, which allows one to describe how the compressibility of a given function is related to classical smoothness properties, and whether compressibility with respect to one system (e.g., wavelets) implies compressibility with respect to a different system (e.g., curvelets).
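A classical instance of such a connection, stated here only as background and not as the talk's construction: for a sufficiently regular orthonormal wavelet basis (\psi_{j,k}) of L^2(\mathbb{R}), and up to coarsest-scale terms and the usual parameter restrictions, membership in the Besov space B^s_{p,p} is equivalent to weighted \ell^p-summability of the wavelet coefficients,

\[
\|f\|_{B^s_{p,p}} \asymp \Big( \sum_{j,k} \big( 2^{j(s + 1/2 - 1/p)} \, |\langle f, \psi_{j,k} \rangle| \big)^p \Big)^{1/p},
\]

so fast coefficient decay and classical smoothness become two descriptions of the same property.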
In the second part of the talk, we consider approximation spaces associated to neural networks. Here, one classifies functions according to how well they can be approximated by neural networks, as a function of network size. It is of particular interest to understand which properties of a function are decisive for whether it can be approximated well by neural networks. We will see that classical smoothness properties are sufficient, but that several “classically non-smooth” functions can nevertheless be approximated well.
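One standard way to make this precise (a DeVore-style approximation-space definition; the notation \Sigma_n, E_n, A^\alpha is illustrative rather than taken from the talk): let \Sigma_n denote the functions realized by neural networks with at most n nonzero weights, and set

\[
E_n(f) := \inf_{\Phi \in \Sigma_n} \|f - \Phi\|,
\qquad
A^\alpha := \Big\{ f \;:\; \sup_{n \in \mathbb{N}} n^\alpha \, E_n(f) < \infty \Big\},
\]

so that f lies in A^\alpha exactly when its approximation error decays at least at the rate n^{-\alpha}.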
Smoothness spaces associated to localized frames and neural networks
22.06.2020 09:50 - 10:35
Organiser:
Fakultät für Mathematik
Location:
Zoom Meeting