An implicit function theorem for neural networks

10.04.2024 15:30 - 16:00

Fabian Kleon Zehetgruber (TU Wien)

Abstract:

The classical approach to understanding how well the solutions of parametric PDEs can be approximated by artificial neural networks relies on regularity theory. Since regularity results are scarce for nonlinear PDEs, we aim to bypass this bottleneck by studying neural networks in connection with the implicit function theorem. We investigate whether an implicitly given set of points, described by the realization of a neural network, can locally be well approximated by the graph of the realization of another neural network. Beyond the mere existence of such an approximation, we obtain bounds on the number of nodes and the depth of the approximating network. Mathematically, this requires a precise understanding of sums and compositions of neural networks.
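To illustrate the setting (this is a purely hypothetical toy sketch, not the speaker's construction): take a fixed one-hidden-layer tanh network F(x, y) whose zero set is the implicitly given set of points, recover the local graph y = g(x) guaranteed by the implicit function theorem by bisection, and approximate g by the graph of another one-hidden-layer network. All weights, names, and the random-feature least-squares fit below are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for "the realization of a neural network": a fixed
# one-hidden-layer tanh network F(x, y).  Its zero set {F = 0} is the
# implicitly given set of points.  All weights here are made up.
W1 = np.array([[1.0, 2.0], [-1.5, 1.0], [0.5, -2.0]])
b1 = np.array([0.1, -0.2, 0.3])
w2 = np.array([1.0, 0.8, -1.2])
b2 = 0.05

def F(x, y):
    return w2 @ np.tanh(W1 @ np.array([x, y]) + b1) + b2

# dF/dy > 0 for this choice of weights, so by the implicit function
# theorem the zero set is locally the graph of a function y = g(x).
# Recover g pointwise by bisection in y.
def g(x, lo=-3.0, hi=3.0):
    flo = F(x, lo)
    assert flo * F(x, hi) < 0           # sign change: a root lies in [lo, hi]
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if flo * F(x, mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, F(x, mid)
    return 0.5 * (lo + hi)

# Approximate g by the graph of *another* one-hidden-layer tanh network:
# random inner weights (a convenient choice for this sketch, not the
# talk's construction), outer weights fitted by linear least squares.
xs = np.linspace(-0.5, 0.5, 200)
ys = np.array([g(x) for x in xs])

n_hidden = 20
a = rng.normal(scale=2.0, size=n_hidden)    # inner weights (random)
b = rng.normal(scale=1.0, size=n_hidden)    # inner biases (random)
H = np.column_stack([np.tanh(np.outer(xs, a) + b), np.ones_like(xs)])
c, *_ = np.linalg.lstsq(H, ys, rcond=None)  # outer weights + output bias

def g_hat(x):
    return np.tanh(x * a + b) @ c[:-1] + c[-1]

max_err = max(abs(g_hat(x) - y) for x, y in zip(xs, ys))
print(f"max |g_hat - g| on [-0.5, 0.5]: {max_err:.2e}")
```

The talk's results additionally quantify how the size and depth of the approximating network can be bounded, which this sketch does not address.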

This event takes place in hybrid form (in person and online via Zoom). Slides and additional materials are available on the Moodle service of the University of Vienna. If you want to participate, please email matteo.tommasini@univie.ac.at. Further details are available at this link.

Organiser: SFB 65
Location: HS 2, EG, OMP 1