Tunable Convolutions with Parametric Multi-Loss Optimization

04/03/2023
by Matteo Maggioni, et al.

The behavior of neural networks is irremediably determined by the specific loss and data used during training. However, it is often desirable to tune the model at inference time based on external factors, such as the preferences of the user or the dynamic characteristics of the data. This is especially important to balance the perception-distortion trade-off of ill-posed image-to-image translation tasks. In this work, we propose to optimize a parametric tunable convolutional layer, which includes a number of different kernels, using a parametric multi-loss, which includes an equal number of objectives. Our key insight is to use a shared set of parameters to dynamically interpolate both the objectives and the kernels. During training, these parameters are sampled at random to explicitly optimize all possible combinations of objectives and, consequently, disentangle their effect into the corresponding kernels. During inference, these parameters become interactive inputs of the model, hence enabling reliable and consistent control over the model behavior. Extensive experimental results demonstrate that our tunable convolutions effectively work as a drop-in replacement for traditional convolutions in existing neural networks at virtually no extra computational cost, outperforming state-of-the-art control strategies in a wide range of applications, including image denoising, deblurring, super-resolution, and style transfer.
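To make the mechanism concrete, the snippet below is a minimal, illustrative PyTorch sketch of the idea described in the abstract, not the authors' implementation: a convolution holding p parallel kernel banks blended by shared interpolation weights w, with the same w blending p training objectives. All names (TunableConv2d, sample_simplex, the placeholder losses) and shapes are assumptions made for illustration.

```python
# Illustrative sketch only; names and details are assumed, not taken from the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TunableConv2d(nn.Module):
    """Convolution with p parallel kernel banks blended by external weights w."""
    def __init__(self, in_ch, out_ch, kernel_size, p, padding=0):
        super().__init__()
        # One weight/bias bank per objective; blended at runtime.
        self.weight = nn.Parameter(
            torch.randn(p, out_ch, in_ch, kernel_size, kernel_size) * 0.02)
        self.bias = nn.Parameter(torch.zeros(p, out_ch))
        self.padding = padding

    def forward(self, x, w):
        # w: (p,) interpolation weights, e.g. a point on the simplex.
        weight = torch.einsum('p,poikl->oikl', w, self.weight)
        bias = torch.einsum('p,po->o', w, self.bias)
        return F.conv2d(x, weight, bias, padding=self.padding)

def sample_simplex(p):
    # Random convex combination used during training.
    w = torch.rand(p)
    return w / w.sum()

# Training-step sketch: the same w blends both the kernels and the losses.
p = 2  # e.g. one fidelity objective and one perceptual objective
layer = TunableConv2d(3, 3, 3, p, padding=1)
losses = [nn.L1Loss(), nn.MSELoss()]  # placeholders for the actual objectives

x = torch.randn(1, 3, 32, 32)
target = torch.randn(1, 3, 32, 32)
w = sample_simplex(p)
y = layer(x, w)
loss = sum(w_i * l(y, target) for w_i, l in zip(w, losses))
loss.backward()
```

At inference time, w would no longer be sampled at random but exposed as an interactive input, so the user can steer the trained model along the learned trade-off.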
