Abstract:
Recent years have witnessed an emerging synergy between diffusion models and entropy-regularized optimal transport (OT), yielding remarkable breakthroughs across diverse domains such as image processing, medical engineering, and biology. The primary objective of this presentation is to examine the canonical training algorithm for these OT problems, the iterative proportional fitting (IPF) procedure, from the perspective of classical mirror descent on probability measures. More specifically, we will introduce a continuous-time counterpart of the IPF scheme that offers several significant advantages:
1. It provides a simple framework for understanding IPF through the lens of classical optimization theory, namely the mirror flow.
2. It enables the derivation of improved IPF variants that, unlike existing schemes, are robust to noise and bias.
3. It extends and unifies various recently discovered dynamics in machine learning and mathematics, all within the framework of Otto geometry.
4. It readily extends to the setting of monotone inclusions and variational inequalities.
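For concreteness, in the discrete setting the IPF procedure coincides with the well-known Sinkhorn scaling iterations for entropy-regularized OT: one alternately rescales a Gibbs kernel to fit each marginal. The following is a minimal sketch of that discrete iteration (function name, step counts, and the toy data are illustrative choices, not part of the presentation):

```python
import numpy as np

def ipf(C, mu, nu, eps=0.1, n_iters=200):
    """Discrete IPF (Sinkhorn) for entropy-regularized OT.

    C: cost matrix; mu, nu: source/target marginals; eps: entropic
    regularization strength. Returns the approximate transport plan.
    """
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iters):
        v = nu / (K.T @ u)               # proportional fit of the second marginal
        u = mu / (K @ v)                 # proportional fit of the first marginal
    return u[:, None] * K * v[None, :]   # diagonal scalings of the kernel

# Toy example: two-point marginals with a symmetric cost.
mu = np.array([0.5, 0.5])
nu = np.array([0.3, 0.7])
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])
P = ipf(C, mu, nu)
# After convergence, the plan's row and column sums match mu and nu.
```

Each half-step is an exact projection onto one marginal constraint in KL divergence, which is the structure the mirror-descent viewpoint above exploits.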