Paper Explained - Resolution-robust Large Mask Inpainting with Fourier Convolutions (Full Video Analysis w/ Author Interview)

At the end of the video is an interview with the paper authors!
LaMa is a system that excels at removing foreground objects from images, especially when those objects cover a large part of the image. It is trained specifically to reconstruct large masked areas, and it propagates global information throughout the forward pass by using Fourier convolutions in its layers. Compared to regular convolutions, this makes it remarkably effective at reconstructing periodic structures with long-range consistency.
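The core idea of the spectral branch in a fast Fourier convolution is: transform the feature map to the frequency domain, mix channels pointwise there, and transform back. Because every frequency bin is a sum over all spatial positions, a single pointwise operation in Fourier space already has an image-wide receptive field. Here is a minimal numpy sketch of that idea (the per-frequency weights here are hypothetical stand-ins for the learned 1x1 convolution the paper uses in frequency space):

```python
import numpy as np

def spectral_mix(x, w_real, w_imag):
    """Sketch of an FFC-style spectral branch: real FFT, pointwise
    complex scaling in the frequency domain, inverse FFT back."""
    f = np.fft.rfft2(x)              # each frequency bin sums over the whole image
    f = f * (w_real + 1j * w_imag)   # pointwise "weights" in frequency space
    return np.fft.irfft2(f, s=x.shape)

# Demonstration: changing ONE input pixel changes EVERY output pixel,
# i.e. the operation has a global receptive field.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
w_r = rng.standard_normal((8, 5))    # rfft2 of an 8x8 map has shape (8, 5)
w_i = rng.standard_normal((8, 5))
y0 = spectral_mix(x, w_r, w_i)
x2 = x.copy()
x2[0, 0] += 1.0                      # perturb a single pixel
y1 = spectral_mix(x2, w_r, w_i)
print(np.all(np.abs(y1 - y0) > 0))   # the perturbation reaches all outputs
```

A stack of ordinary 3x3 convolutions would need many layers before information from one corner of a large mask could influence the opposite corner; the spectral path delivers it in a single layer, which is what the video's "locality as a weakness of convolutions" segment builds up to.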

0:00 - Intro
0:45 - Sponsor: ClearML
3:30 - Inpainting Examples
5:05 - Live Demo
6:40 - Locality as a weakness of convolutions
10:30 - Using Fourier Transforms for global information
12:55 - Model architecture overview
14:35 - Fourier convolution layer
21:15 - Loss function
24:25 - Mask generation algorithm
25:40 - Experimental results
28:25 - Interview with the authors

Paper: [2109.07161] Resolution-robust Large Mask Inpainting with Fourier Convolutions
Online Demo:

Sponsor: ClearML

Abstract: Modern image inpainting systems, despite significant progress, often struggle with large missing areas, complex geometric structures, and high-resolution images. We find that one of the main reasons for this is the lack of an effective receptive field in both the inpainting network and the loss function. To alleviate this issue, we propose a new method called large mask inpainting (LaMa). LaMa is based on i) a new inpainting network architecture that uses fast Fourier convolutions (FFCs), which have an image-wide receptive field; ii) a high-receptive-field perceptual loss; and iii) large training masks, which unlock the potential of the first two components. Our inpainting network improves the state of the art across a range of datasets and achieves excellent performance even in challenging scenarios, e.g. completion of periodic structures. Our model generalizes surprisingly well to resolutions higher than those seen at train time, and achieves this at lower parameter and time costs than the competitive baselines. The code is available at \url{this https URL}.

Authors: Roman Suvorov, Elizaveta Logacheva, Anton Mashikhin, Anastasia Remizova, Arsenii Ashukha, Aleksei Silvestrov, Naejin Kong, Harshith Goka, Kiwoong Park, Victor Lempitsky