Latent-Optimized Adversarial Neural Transfer for Sarcasm Detection

04/19/2021
by   Xu Guo, et al.

The existence of multiple datasets for sarcasm detection prompts us to apply transfer learning to exploit their commonality. The adversarial neural transfer (ANT) framework utilizes multiple loss terms that encourage the source-domain and the target-domain feature distributions to be similar while optimizing for domain-specific performance. However, these objectives may conflict, which can lead to optimization difficulties and sometimes diminished transfer. We propose a generalized latent optimization strategy that allows the different losses to accommodate one another and improves training dynamics. The proposed method outperforms transfer learning and meta-learning baselines. In particular, we achieve a 10.02% absolute improvement over the previous state of the art on the iSarcasm dataset.
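The latent optimization idea can be illustrated with a toy sketch: instead of summing the task loss and the adversarial (domain-alignment) loss at the same latent point, take one inner gradient step on the latent features with respect to the task loss, then evaluate the other loss at the updated latents, so the objectives accommodate each other. The functions, shapes, and step sizes below are illustrative assumptions, not the paper's actual model or hyperparameters.

```python
import numpy as np

def task_loss(z, w):
    # Toy domain-specific loss: squared error of a linear "classifier".
    return 0.5 * (z @ w - 1.0) ** 2

def task_grad_z(z, w):
    # Analytic gradient of task_loss with respect to the latent z.
    return (z @ w - 1.0) * w

def domain_loss(z, c):
    # Toy adversarial/alignment term pulling features toward a shared centroid c.
    return 0.5 * np.sum((z - c) ** 2)

def latent_optimized_loss(z, w, c, eta=0.1, lam=0.5):
    # One inner gradient step on the latent z w.r.t. the task loss, so the
    # alignment term is evaluated at the updated latent z_prime rather than
    # at the original z -- the losses "accommodate" each other.
    z_prime = z - eta * task_grad_z(z, w)
    return task_loss(z_prime, w) + lam * domain_loss(z_prime, c)

z = np.array([1.0, -0.5])   # illustrative latent features
w = np.array([0.3, 0.7])    # illustrative classifier weights
c = np.zeros(2)             # illustrative shared centroid

plain = task_loss(z, w) + 0.5 * domain_loss(z, c)
latent = latent_optimized_loss(z, w, c)
print(latent < plain)  # True for these particular toy values
```

For these specific toy numbers the latent-optimized objective happens to be lower than the naive sum; in general the claimed benefit is better training dynamics, not a pointwise smaller loss.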
