Transforming the output of GANs by fine-tuning them with features from different datasets

10/06/2019
by Terence Broad, et al.

In this work we present a method for fine-tuning pre-trained GANs with features from different datasets, transforming the output distribution into a new distribution with novel characteristics. The generator's weights are updated using a weighted sum of two losses: one from a cross-dataset classifier and one from the pre-trained discriminator, whose weights are kept frozen. We discuss the details of the technical implementation and share some visual results from this training process.
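The update rule described in the abstract can be summarized in a short sketch. Below is a minimal PyTorch sketch of one generator fine-tuning step, assuming tiny stand-in networks and hypothetical loss weights `alpha` and `beta`; the paper's actual architectures (a pre-trained GAN and a cross-dataset classifier) and loss weighting are not reproduced here.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins for the pre-trained networks (assumptions, not the
# paper's models): tiny MLPs so the sketch runs end to end.
LATENT_DIM, IMG_DIM = 64, 128

generator = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                          nn.Linear(256, IMG_DIM))
discriminator = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.ReLU(),
                              nn.Linear(256, 1))   # pre-trained, kept frozen
classifier = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.ReLU(),
                           nn.Linear(256, 1))      # cross-dataset classifier

# Freeze everything except the generator: only its weights are updated.
for p in discriminator.parameters():
    p.requires_grad_(False)
for p in classifier.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
alpha, beta = 1.0, 0.5  # hypothetical loss weights

for step in range(100):
    z = torch.randn(32, LATENT_DIM)
    fake = generator(z)

    # Adversarial term from the frozen discriminator: keeps samples close
    # to the original GAN's output distribution ("real" target = 1).
    d_logits = discriminator(fake)
    loss_d = bce(d_logits, torch.ones_like(d_logits))

    # Classifier term: pushes samples toward the new dataset's features
    # (modeled here as class 1 of a binary cross-dataset classifier).
    c_logits = classifier(fake)
    loss_c = bce(c_logits, torch.ones_like(c_logits))

    # Weighted sum of the two losses updates the generator alone.
    loss = alpha * loss_d + beta * loss_c
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Varying the relative weights trades off fidelity to the original output distribution (the discriminator term) against the strength of the transformation toward the new dataset's features (the classifier term).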


research · 03/10/2021 · Fine-tuning of Pre-trained End-to-end Speech Recognition with Generative Adversarial Networks
Adversarial training of end-to-end (E2E) ASR systems using generative ad...

research · 04/04/2023 · Improved Visual Fine-tuning with Natural Language Supervision
Fine-tuning a pre-trained model can leverage the semantic information fr...

research · 04/14/2020 · Weight Poisoning Attacks on Pre-trained Models
Recently, NLP has seen a surge in the usage of large pre-trained models....

research · 05/08/2023 · Diffusion Theory as a Scalpel: Detecting and Purifying Poisonous Dimensions in Pre-trained Language Models Caused by Backdoor or Bias
Pre-trained Language Models (PLMs) may be poisonous with backdoors or bi...

research · 03/07/2019 · Discovering Visual Patterns in Art Collections with Spatially-consistent Feature Learning
Our goal in this paper is to discover near duplicate patterns in large c...

research · 05/15/2017 · Tuning Modular Networks with Weighted Losses for Hand-Eye Coordination
This paper introduces an end-to-end fine-tuning method to improve hand-e...

research · 03/08/2023 · RADAM: Texture Recognition through Randomized Aggregated Encoding of Deep Activation Maps
Texture analysis is a classical yet challenging task in computer vision ...
