Learning operators with neural networks

10.06.2024, 09:50–10:35

Samuel Lanthaler (Caltech)

Neural networks have proven to be effective approximators of high-dimensional functions in a wide variety of applications. In scientific computing, the goal is often to approximate an underlying operator, which defines a mapping between infinite-dimensional spaces of input and output functions. Extensions of neural networks to this infinite-dimensional setting have been proposed in recent years, giving rise to the emerging field of operator learning. Despite their practical success, our theoretical understanding of these approaches remains incomplete: why are neural networks so effective in these applications? In this talk, I will discuss recent work on the approximation theory underpinning operator learning. This work aims to address what can and cannot be achieved in this context.

Organiser:

Fakultät für Mathematik, Dean Radu Ioan Boţ

Location:

SR 11, 2nd floor, OMP 1