Training Optimus Prime, M.D.: Generating Medical Certification Items by Fine-Tuning OpenAI's gpt2 Transformer Model

08/23/2019
by Matthias von Davier, et al.

This article describes new results from an application of transformer-based language models to automated item generation, an area of ongoing interest in certification testing as well as in educational measurement and psychological testing. OpenAI's pre-trained gpt2 language model (345M parameters) was retrained on the public-domain text-mining set of PubMed articles and subsequently used to generate item stems (case vignettes) as well as distractor proposals for multiple-choice items. This case study shows promise and produces draft text that human item writers can use as input for authoring. Future experiments with more recent transformer models (such as Grover and Transformer-XL) using existing item pools are expected to further improve results and facilitate the development of assessment materials.
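
As a rough illustration of the workflow described in the abstract, the sketch below fine-tunes the 345M-parameter gpt2 checkpoint on a plain-text corpus and then samples a draft case vignette from a seed prompt. It assumes the Hugging Face transformers library rather than the authors' original setup, and the corpus path, training settings, and prompt are hypothetical placeholders, not details taken from the paper.

```python
# Minimal sketch: fine-tune gpt2-medium (345M parameters) on a plain-text
# corpus, then sample a draft case vignette. Corpus path and prompt below
# are hypothetical placeholders.
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, TextDataset,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2-medium"  # the 345M-parameter checkpoint
tokenizer = GPT2TokenizerFast.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)

# Fine-tune on a plain-text dump of PubMed open-access articles (assumed file).
train_dataset = TextDataset(tokenizer=tokenizer,
                            file_path="pubmed_corpus.txt",  # hypothetical path
                            block_size=512)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-pubmed",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    data_collator=collator,
    train_dataset=train_dataset,
)
trainer.train()

# Generate a draft item stem (case vignette) from a short seed prompt.
prompt = "A 45-year-old man presents to the emergency department with"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=150, do_sample=True,
                         top_k=50, top_p=0.95,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Sampled text like this would serve only as raw draft material; per the abstract, human item writers remain responsible for turning the generated vignettes and distractor proposals into usable multiple-choice items.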
