Predicting metrical patterns in Spanish poetry with language models

11/18/2020
by Javier de la Rosa, et al.

In this paper, we compare the automated metrical pattern identification systems available for Spanish against extensive experiments in fine-tuning language models on the same task. Although BERT was initially conceived as a model suited to semantic tasks, our results suggest that BERT-based models retain enough structural information to perform reasonably well at Spanish scansion.
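The rule-based systems that such fine-tuned models are compared against typically build on syllabification heuristics: counting vowel nuclei per word, splitting hiatus, and merging vowels across word boundaries (synalepha). As a rough illustration of that baseline task (not the paper's actual implementation, and with deliberately simplified rules), a minimal Spanish metrical syllable counter might look like:

```python
import re

# Illustrative sketch of rule-based Spanish syllable counting.
# Simplifying assumptions: weak vowels (i, u) form diphthongs with
# adjacent vowels; two strong vowels in a row are hiatus and split
# syllables; synalepha merges vowels across word boundaries, with a
# leading 'h' treated as silent. Stress-based end-of-line adjustments
# (oxytone +1, proparoxytone -1) are omitted.

VOWELS = "aeiouáéíóúü"
STRONG = "aeoáéóíú"  # accented i/u break diphthongs, so they count as strong


def count_syllables(word: str) -> int:
    """Count syllables in a single lowercase Spanish word."""
    count = 0
    for group in re.findall(f"[{VOWELS}]+", word):
        count += 1
        # split a vowel group wherever two strong vowels meet (hiatus)
        for a, b in zip(group, group[1:]):
            if a in STRONG and b in STRONG:
                count += 1
    return count


def count_metrical_syllables(line: str) -> int:
    """Count metrical syllables in a verse line, applying synalepha."""
    words = re.findall(f"[{VOWELS}bcdfghjklmnñpqrstvwxyz]+", line.lower())
    total = sum(count_syllables(w) for w in words)
    for w1, w2 in zip(words, words[1:]):
        w2_start = w2[1:] if w2.startswith("h") else w2  # silent 'h'
        if w1[-1] in VOWELS and w2_start and w2_start[0] in VOWELS:
            total -= 1  # synalepha: adjacent vowels merge into one syllable
    return total


print(count_syllables("poema"))                 # po-e-ma (hiatus) -> 3
print(count_metrical_syllables("la luna era"))  # la-lu-na+e-ra -> 4
```

Real scansion systems add many refinements (syneresis, dieresis, stress placement, metrical licenses), which is precisely the kind of hand-crafted knowledge the fine-tuned BERT models must instead recover from data.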


