Related research

- GLTR: Statistical Detection and Visualization of Generated Text
  The rapid improvement of language models has raised the specter of abuse...

- Human and Automatic Detection of Generated Text
  With the advent of generative models with a billion parameters or more, ...

- Real or Fake? Learning to Discriminate Machine from Human Generated Text
  Recent advances in generative modeling of text have demonstrated remarka...

- Machine Generation and Detection of Arabic Manipulated and Fake News
  Fake news and deceptive machine-generated text are serious problems thre...

- Automatic Detection of Machine Generated Text: A Critical Survey
  Text generative models (TGMs) excel in producing text that matches the s...

- Evaluating Creative Language Generation: The Case of Rap Lyric Ghostwriting
  Language generation tasks that seek to mimic human ability to use langua...

- Algorithmic Detection of Computer Generated Text
  Computer generated academic papers have been used to expose a lack of th...

RoFT: A Tool for Evaluating Human Detection of Machine-Generated Text
In recent years, large neural networks for natural language generation (NLG) have made leaps and bounds in their ability to generate fluent text. However, the tasks of evaluating quality differences between NLG systems and understanding how humans perceive the generated text remain both crucial and difficult. In this system demonstration, we present Real or Fake Text (RoFT), a website that tackles both of these challenges by inviting users to try their hand at detecting machine-generated text in a variety of domains. We introduce a novel evaluation task based on detecting the boundary at which a text passage that starts off human-written transitions to being machine-generated. We show preliminary results of using RoFT to evaluate detection of machine-generated news articles.
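
The boundary-detection task described in the abstract can be made concrete with a short sketch. The snippet below is illustrative only: the passage construction, the function names, and the proximity-based scoring rule are assumptions chosen for this example, not the actual scoring used by RoFT.

from typing import List, Tuple


def build_passage(human_sentences: List[str],
                  generated_sentences: List[str]) -> Tuple[List[str], int]:
    # Concatenate human-written sentences with machine-generated continuations.
    # Returns the full passage and the index of the first generated sentence
    # (the true boundary the annotator is asked to find).
    passage = human_sentences + generated_sentences
    boundary = len(human_sentences)
    return passage, boundary


def score_guess(guessed_boundary: int, true_boundary: int) -> int:
    # One plausible scoring rule (assumed, not taken from the paper):
    # full credit for an exact guess, credit that decays the further the
    # guess falls after the true boundary, and no credit for guessing
    # before it (i.e., labeling human text as machine-generated).
    if guessed_boundary < true_boundary:
        return 0
    return max(0, 5 - (guessed_boundary - true_boundary))


if __name__ == "__main__":
    human = ["The city council met on Tuesday.",
             "Members debated the new transit budget."]
    generated = ["The mayor then announced a fleet of solar-powered gondolas.",
                 "Residents reportedly cheered for eleven consecutive hours."]
    passage, true_boundary = build_passage(human, generated)
    for i, sentence in enumerate(passage):
        print(f"[{i}] {sentence}")
    # Suppose a reader guesses sentence 3 is where generation begins;
    # the true boundary is sentence 2, so the assumed rule awards 4 points.
    print("score:", score_guess(3, true_boundary))

A proximity-based score of this kind rewards near misses rather than treating the task as a binary hit-or-miss judgment, which matches the spirit of asking where a passage transitions rather than whether it is generated.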