Applying a Pre-trained Language Model to Spanish Twitter Humor Prediction
Our entry into the HAHA 2019 Challenge placed 3rd in the classification task and 2nd in the regression task. We describe our system and innovations, and compare our results to a Naive Bayes baseline. A large Twitter-based corpus allowed us to train a Spanish language model from scratch and transfer that knowledge to our competition model. To overcome inherent errors in some labels, we reduce class confidence with label smoothing in the loss function. All code for our project is available in a GitHub repository for reference and to enable replication by others.
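A minimal sketch of the label smoothing idea mentioned above, written as a smoothed cross-entropy loss in PyTorch. The smoothing factor `eps`, the function name, and the two-class example are illustrative assumptions, not details taken from the paper's implementation.

```python
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits: torch.Tensor, targets: torch.Tensor,
                           eps: float = 0.1) -> torch.Tensor:
    """Cross-entropy where the one-hot target is mixed with a uniform
    distribution: the gold class keeps 1 - eps of the probability mass
    and the remaining eps is spread evenly over all classes."""
    n_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    # Build the soft target distribution instead of a hard one-hot vector.
    soft = torch.full_like(log_probs, eps / n_classes)
    soft.scatter_(-1, targets.unsqueeze(-1), 1 - eps + eps / n_classes)
    # Expected negative log-likelihood under the smoothed targets.
    return -(soft * log_probs).sum(dim=-1).mean()

# Hypothetical usage: a batch of 2 tweets, binary humor classification.
logits = torch.tensor([[2.0, -1.0], [0.5, 0.3]])
targets = torch.tensor([0, 1])
print(smoothed_cross_entropy(logits, targets))
```

Because the model is never pushed toward probability 1 on any single class, mislabeled examples incur a bounded penalty, which is the motivation for using smoothing on noisily labeled humor data.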