Abstract:
We introduce (or assimilate) a number of key concepts that allow us to compare neural networks to classical representation systems (e.g., wavelets, shearlets, and Gabor systems, or, more generally, any system generated from a mother function through translation, dilation, and modulation). This enables us to establish that every function class is (asymptotically) at least as sparse with respect to ReLU neural networks as it is in any 'reasonable' classical representation system.
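For concreteness, two standard instances of such generated systems (notation chosen here for illustration, not taken from the paper): a wavelet system arises from a mother wavelet $\psi$ through dyadic dilation and integer translation,
\[
  \mathcal{W}(\psi) = \bigl\{\, 2^{j/2}\,\psi(2^{j}x - k) \;:\; j,k \in \mathbb{Z} \,\bigr\},
\]
while a Gabor system arises from a window function $g$ through integer modulation and translation,
\[
  \mathcal{G}(g) = \bigl\{\, e^{2\pi i m x}\, g(x - n) \;:\; m,n \in \mathbb{Z} \,\bigr\}.
\]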