Many problems in machine learning involve nested inner and outer optimization loops. Computing update steps for the outer loop is usually difficult because it requires differentiating through the inner loop's optimization procedure over multiple steps. Such loop unrolling is memory-intensive and limited to very few inner steps, and prior work has only found workarounds for specific, individual problems. This paper proposes a unified framework for implicit differentiation of inner optimization procedures without unrolling, along with implementations that integrate seamlessly into JAX.
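As a minimal sketch of the core idea (a hypothetical toy example, not the paper's code): instead of unrolling an inner solver, we differentiate the solution via the implicit function theorem applied to the inner problem's optimality conditions. Here the inner problem is ridge regression with a regularization strength `theta` as the outer parameter, so we can check the implicit gradient against differentiating the solver directly.

```python
import jax
import jax.numpy as jnp

# Toy inner problem: x*(theta) = argmin_x ||A x - b||^2 + theta * ||x||^2.
# A, b, and the function names below are illustrative assumptions.
A = jnp.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = jnp.array([1.0, 2.0, 3.0])

def optimality(x, theta):
    # Gradient of the inner objective; it is zero at the solution x*(theta).
    return A.T @ (A @ x - b) + theta * x

def solve(theta):
    # The inner solver. Here a closed-form solve; the paper's point is that
    # any black-box solver works, since gradients come from the optimality
    # conditions rather than from unrolling the solver's iterations.
    n = A.shape[1]
    return jnp.linalg.solve(A.T @ A + theta * jnp.eye(n), A.T @ b)

def implicit_grad(theta):
    # Implicit function theorem: dx*/dtheta = -(dF/dx)^{-1} dF/dtheta,
    # where F(x, theta) = 0 is the optimality condition at x = x*(theta).
    x_star = solve(theta)
    dF_dx = jax.jacobian(optimality, argnums=0)(x_star, theta)
    dF_dtheta = jax.jacobian(optimality, argnums=1)(x_star, theta)
    return -jnp.linalg.solve(dF_dx, dF_dtheta)

theta = 0.5
print(implicit_grad(theta))          # gradient via implicit differentiation
print(jax.jacobian(solve)(theta))    # reference: differentiate the solver
```

The two printed Jacobians agree, which is exactly the guarantee the framework builds on: the outer gradient depends only on the optimality conditions at the solution, not on how the inner solution was computed.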
0:00 - Intro & Overview
2:05 - Automatic Differentiation of Inner Optimizations
4:30 - Example: Meta-Learning
7:45 - Unrolling Optimization
13:00 - Unified Framework Overview & Pseudocode
21:10 - Implicit Function Theorem
25:45 - More Technicalities
28:45 - Experiments
- Dataset Distillation is done with respect to the training set, not the validation or test set.