The Bias Variance Tradeoff

05.03.2026 14:00 - 14:20

Johannes Hertrich (Université Paris Dauphine - PSL)

Abstract:

We decompose the mean squared error of a regression model into three parts. The first part, called the bias, describes
the error of the mean prediction. The second part, called the variance, describes the variation of the predictions.
Finally, the third part arises from noise in the test set and is often called the irreducible error. We illustrate the
bias-variance tradeoff using the regularization parameter of a linear regression model.
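The decomposition described above can be estimated numerically by Monte Carlo: train the same model on many independently drawn training sets and measure how far the averaged prediction lies from the true function (bias squared) and how much the individual predictions scatter around their average (variance). The following is a minimal sketch of such an experiment; the sinusoidal target, the polynomial basis, the noise level, and the specific regularization values are illustrative assumptions, not taken from the talk itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    # assumed ground-truth function for the illustration
    return np.sin(2 * np.pi * x)

def features(x, degree=9):
    # polynomial design matrix (assumption: polynomial basis functions)
    return np.vander(x, degree + 1, increasing=True)

def ridge_fit(X, y, lam):
    # closed-form ridge regression: (X^T X + lam I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

n_train, n_runs, noise = 25, 200, 0.3
x_test = np.linspace(0.0, 1.0, 100)
X_test = features(x_test)

results = {}
for lam in [1e-8, 1e-3, 1.0]:
    preds = np.empty((n_runs, x_test.size))
    for r in range(n_runs):
        # fresh training set for every run
        x = rng.uniform(0.0, 1.0, n_train)
        y = true_f(x) + noise * rng.normal(size=n_train)
        w = ridge_fit(features(x), y, lam)
        preds[r] = X_test @ w
    mean_pred = preds.mean(axis=0)
    # bias^2: squared distance of the mean prediction from the truth
    bias2 = np.mean((mean_pred - true_f(x_test)) ** 2)
    # variance: average scatter of predictions around their mean
    var = preds.var(axis=0).mean()
    results[lam] = (bias2, var)
    print(f"lambda={lam:.0e}  bias^2={bias2:.4f}  variance={var:.4f}")
```

Increasing the regularization parameter shrinks the weights, so the variance term drops while the bias term grows; the tradeoff is visible directly in the printed table.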
The teaching sample roughly follows Section 4.3 of the textbook “Deep Learning: Foundations and Concepts”
by C. M. Bishop.
Course Syllabus: The teaching sample is part of a course “Mathematical Foundations of Machine Learning”
intended for Bachelor students. The course itself introduces regression models and the corresponding error analysis,
and then turns to neural networks and universal approximation theorems. It is generally held as a
lecture accompanied by exercise classes covering both the theoretical part and some implementations.
At the point of the teaching sample, we assume that we have already introduced standard regression models,
particularly linear regression, and are now moving towards the error analysis.

Organiser:

Faculty of Mathematics, Dean Radu Ioan Boţ

Location:

BZ 2, 2. OG., OMP 1