Learning Transductions and Alignments with RNN Seq2seq Models

03/13/2023
by Zhengxiang Wang, et al.

The paper studies the capabilities of Recurrent Neural Network sequence-to-sequence (RNN seq2seq) models in learning four string-to-string transduction tasks: identity, reversal, total reduplication, and input-specified reduplication. These transductions are traditionally well studied under finite-state transducers and are assigned varying degrees of complexity. We find that RNN seq2seq models are only able to approximate a mapping that fits the training or in-distribution data. Attention helps significantly, but does not overcome the limitation on out-of-distribution generalization. Task complexity and the choice of RNN variant also affect the results. Our results are best understood in terms of the complexity hierarchy of formal languages, as opposed to that of string transductions.
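To make the four tasks concrete, here is a minimal sketch of each transduction as a plain string function. The exact input encoding used in the paper (alphabet, and how the copy count is specified for input-specified reduplication) is an assumption here, not taken from the source.

```python
# Sketches of the four transduction tasks named in the abstract.
# The input encoding (e.g., how the copy count n is supplied) is assumed.

def identity(s: str) -> str:
    # Map every string to itself.
    return s

def reversal(s: str) -> str:
    # Map every string to its mirror image.
    return s[::-1]

def total_reduplication(s: str) -> str:
    # Copy the whole input exactly once (w -> ww).
    return s + s

def input_specified_reduplication(s: str, n: int) -> str:
    # Copy the input n times; in the paper's setup the count is presumably
    # encoded in the input itself, but here it is a separate argument.
    return s * n

if __name__ == "__main__":
    w = "abc"
    print(identity(w))                          # abc
    print(reversal(w))                          # cba
    print(total_reduplication(w))               # abcabc
    print(input_specified_reduplication(w, 3))  # abcabcabc
```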
