Assistive Recipe Editing through Critiquing

05/05/2022
by Diego Antognini, et al.

There has recently been growing interest in the automatic generation of cooking recipes that satisfy some form of dietary restrictions, thanks in part to the availability of online recipe data. Prior studies have used pre-trained language models, or relied on small paired recipe data (e.g., a recipe paired with a similar one that satisfies a dietary constraint). However, pre-trained language models generate inconsistent or incoherent recipes, and paired datasets are not available at scale. We address these deficiencies with RecipeCrit, a hierarchical denoising auto-encoder that edits recipes given ingredient-level critiques. The model is trained for recipe completion to learn semantic relationships within recipes. Our work's main innovation is our unsupervised critiquing module that allows users to edit recipes by interacting with the predicted ingredients; the system iteratively rewrites recipes to satisfy users' feedback. Experiments on the Recipe1M recipe dataset show that our model can more effectively edit recipes compared to strong language-modeling baselines, creating recipes that satisfy user constraints and are more correct, serendipitous, coherent, and relevant as measured by human judges.
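The abstract describes an interaction loop: the system predicts a recipe's ingredients, the user critiques them (e.g., bans an ingredient), and the model iteratively rewrites the recipe until the critique is satisfied. The following is only an illustrative sketch of that loop, not the authors' implementation: the names (`predict_ingredients`, `edit_recipe`, `SUBSTITUTIONS`) and the toy string-substitution "model" are all hypothetical stand-ins for RecipeCrit's denoising auto-encoder.

```python
# Illustrative sketch of the critique-and-rewrite loop from the abstract.
# The real system uses a hierarchical denoising auto-encoder; here a toy
# substitution table stands in for the learned rewrite step. All names
# are hypothetical.

SUBSTITUTIONS = {"beef": "tofu", "butter": "olive oil"}  # toy rewrite model


def predict_ingredients(recipe: str) -> set[str]:
    """Stand-in for the model's ingredient predictor."""
    return set(recipe.split(", "))


def edit_recipe(recipe: str, ingredient: str) -> str:
    """Stand-in for one model rewrite that removes a critiqued ingredient."""
    return recipe.replace(ingredient, SUBSTITUTIONS.get(ingredient, ""))


def critique_loop(recipe: str, banned: set[str], max_iters: int = 5) -> str:
    """Iteratively rewrite the recipe until no banned ingredient is predicted."""
    for _ in range(max_iters):
        violations = predict_ingredients(recipe) & banned
        if not violations:
            return recipe  # all critiques satisfied
        for ingredient in violations:
            recipe = edit_recipe(recipe, ingredient)
    return recipe


# Example: the user critiques "beef"; the loop rewrites until it is gone.
print(critique_loop("beef, butter, onion", {"beef"}))  # -> "tofu, butter, onion"
```

The key design point the abstract emphasizes is that this loop is unsupervised: no paired (original, edited) recipes are needed, because the model is trained on recipe completion and the critique only constrains which ingredients may appear.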


