With the rapid increase in cooking videos uploaded to the web, multimedia food computing has become an important topic [29, 28, 24, 25, 34, 39]. Among various developing technologies, recipe generation from unsegmented cooking videos
is challenging. It requires artificial agents to (1) extract key events that are essential to dish completion and (2) generate sentences for the extracted events. This task is important for both scene understanding and real-world applications. In terms of scene understanding, (1) and (2) are recognized as temporal event localization [41, 8] and video captioning [17, 31, 22]
, respectively, and learning both simultaneously is an ambitious challenge in computer vision (CV) and natural language processing (NLP). Meanwhile, from the perspective of real-world applications, this technology can support people learning new skills by providing key events and their explanation sentences as a multimedia summarization of cooking videos. These academic and industrial motivations impel us to tackle recipe generation from unsegmented cooking videos.
A closely related task is dense video captioning (DVC), which aims at extracting events from videos and generating sentences for them. Although the input/output pairs of our task (video/(events, sentences)) and those of DVC are of the same format, our task differs in its awareness of the recipe's story. The DVC evaluation metrics allow agents to detect events densely, including false positives; our task, on the other hand, requires them to extract the accurate number of key events in the correct order. Fujita et al. reported that DVC models produced more than 200 redundant events per video on average on the ActivityNet Captions dataset, while the number of manually-annotated events is only 3-4. Such redundant events and sentences make it difficult for users to grasp an overview of the video content.
Although the events predicted by DVC models are redundant, we observed that (1) several events are adoptable as the story of a recipe, but (2) the generated sentences for such events are not grounded well in the visual contents (i.e., the ingredients and actions in the sentences are incorrect). We confirm this by analyzing the outputs of the state-of-the-art DVC model. We refer to the DVC output events as event candidates. Through an approach that we refer to as oracle selection, we select the events that have the maximum temporal IoU (tIoU) with the ground-truth events (i.e., manually-annotated events) and compute the DVC scores [9, 16]. The results support our observations: although oracle selection ensures that events adoptable as the story of a recipe are selected, the obtained sentences are not grounded in the visual content.
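As a concrete illustration, tIoU and the oracle-selection step can be sketched in a few lines of Python (a minimal sketch assuming events are (start, end) pairs in seconds; not the evaluation toolkit's implementation):

```python
def tiou(a, b):
    """Temporal IoU between two (start, end) segments."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def oracle_selection(candidates, ground_truth):
    """For each ground-truth event, pick the candidate with maximum tIoU."""
    return [max(candidates, key=lambda c: tiou(c, g)) for g in ground_truth]
```

The DVC scores are then computed on the sentences attached to the selected candidates.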
Our task requires models to predict the next step in the story of a recipe by memorizing previously predicted events and generated sentences. For example, in Fig. 1, to generate “egg mixture” in step 3, the models should memorize the visual and textual content of step 2. Existing DVC approaches do not model this history explicitly; thus they miss or hallucinate ingredients (e.g., mention ingredients that do not appear in the video), resulting in performance degradation. Based on this, our hypothesis is that we can obtain correct recipes by selecting oracle events from the event candidates and re-generating sentences for them (Fig. 1).
To achieve this, we propose a novel transformer-based approach that jointly learns an event selector and a sentence generator. The event selector selects oracle events from the event candidates in the correct order, and the sentence generator generates a recipe grounded in the selected events. Both modules have memory representations, which enable the model to predict the accurate number of events in the correct order by remembering previously selected events and generated sentences. Moreover, the model is trainable in an end-to-end manner because its modules are connected without breaking the differentiable chain. We refer to this model as the base model.
We also propose an extended model that can generate more accurate recipes in settings where the inputs are videos with ingredients, as in [22, 38]. The task requires agents to describe detailed manipulations of ingredients in cooking videos. To achieve this goal, our extended model has two additional modules: (1) a dot-product visual simulator and (2) a textual attention module. The dot-product visual simulator is introduced based on insights from our previous work, where we discovered that manipulations change the state of the ingredients, yielding state-aware visual representations. In Fig. 1, “eggs” are transformed into “cracked” and then “stirred” forms to cook scrambled eggs. To address this, we introduced the visual simulator, which reasons about the state transition of ingredients, into the model. The motivation for the extension is the difference in the inputs: whereas the inputs in our previous work are pre-segmented ground-truth events with ingredients, the inputs in this work are event candidates. The dot-product visual simulator accepts them and reasons about the state transition of ingredients. Furthermore, we introduce a textual attention module that verbalizes grounded ingredients and actions in the recipes. These modules are effective for grounded recipe generation from cooking videos.
In our experiments, we use the YouCook2 dataset, which contains 2,000 videos with event/sentence annotation pairs. The proposed method outperforms state-of-the-art DVC approaches on commonly used DVC metrics, and the extended model further boosts performance. We also show that the proposed models can select the correct number of events, effectively reflecting the ground-truth events. In addition, our qualitative evaluation reveals that the proposed approaches select events in the correct order and generate recipes grounded in the video contents. We also discuss the detailed experimental settings for optimal recipe generation.
II Related Work
II-A Recipe generation from visual observations
Recipes are a popular target in the NLP community because they require artificial agents to generate coherent sentences [14, 2]. The inputs of these works are the title and ingredients of recipes. Recently, many multimodal cooking datasets have appeared [29, 10] and recipe generation from images has also been proposed [28, 34]. Salvador et al.  tackled this problem using a transformer-based model  to generate high-quality recipes from a single image of a completed dish. Other researchers focus on generating a recipe from an image sequence depicting the intermediate food states [4, 21, 23].
The essential limitation of generating recipes from images is the lack of detailed, continuous scene information about human manipulations, which images do not contain. Hence, this task is essentially an ill-posed problem; that is, it depends on the learned language model to generate correct actions. Meanwhile, videos contain this information and have been attracting the attention of many researchers in recent years [22, 38, 30, 31]. As described in Section I, this task requires agents to (1) extract key events from videos and (2) generate grounded sentences for the events. Most researchers have focused on (2), generating grounded recipes from pre-segmented key events [22, 31, 38]; only a few researchers have attempted to learn both (1) and (2) to generate recipes from unsegmented videos. Shi et al. pioneered this problem by utilizing the narration of videos effectively. Although this approach is effective for narrated videos, transcriptions are not always available for cooking videos: speaking during cooking or adding narration afterwards requires significant effort. In addition, narrations may lead the model to attend to textual features while ignoring the visual features of the videos. Our task extends their work to generate recipes from videos alone. This enables the model to handle even non-narrated videos, making it more broadly applicable than the previous setting.
II-B Video captioning
Video captioning is an attractive field for both the CV and NLP communities. The task settings of video captioning vary according to the nature of the input video (e.g., a single short event, pre-defined key events, or a long unsegmented video). Our aim is to extract key events and generate sentences simultaneously from long unsegmented videos. This goal is similar to DVC because the input/output pairs are in the same format as in our task. We first describe traditional DVC approaches and then enumerate the differences between DVC and our task.
DVC aims at densely detecting events in a video and generating sentences for them. Recently, transformer-based approaches have been well investigated for this task. Zhou et al. proposed the Masked Transformer, which detects events and generates sentences via a differentiable mask in an end-to-end manner. Their approach, however, outputs more than 200 redundant events per video on average. To handle this issue, Wang et al. proposed PDVC, which detects events in parallel, re-ranks the top $K$ of them ($K$ is also a prediction target), and generates sentences for the re-ranked events. Deng et al. proposed a top-down DVC approach, whereby whole sentences generated for the video are aligned to timestamps, based on which the sentences are refined.
The key difference between our task and DVC lies in the story awareness of recipes. Our task requires agents to extract an appropriate number of events in the correct order. DVC allows a model to detect events densely; however, redundant event/sentence pairs impede readability, and users are unable to grasp an overview of the video contents. To achieve story awareness, we propose a transformer-based joint learning approach of an event selector and sentence generator, based on our observation that oracle events can be selected from the output events of a DVC model.
III Oracle-based Analysis of the Existing DVC Model
As Fujita et al. reported, the outputs of DVC models are redundant. However, we observed that although (1) several events are adoptable as the story of a recipe, (2) the generated sentences for such events are not grounded well in the visual contents. To verify this, we analyze the outputs of the state-of-the-art DVC model. Specifically, we select the events with the maximum tIoU scores against the ground-truth events from the event candidates and compute the DVC scores, an approach referred to as oracle selection hereinafter. The results demonstrate that oracle selection boosts the performance, especially on story-oriented DVC metrics.
We use the YouCook2 dataset, one of the largest cooking video-and-language datasets. For the DVC model, we employ PDVC, the state-of-the-art DVC model. It detects $N$ events densely and then re-ranks the top $K$ of them for its outputs. Note that $N$ is a hyper-parameter ($N$ is set to a sufficiently large number; for YouCook2, the authors of PDVC provide a default value) and $K$ is the prediction target. We adopt this model because it achieves the best performance on DVC tasks and $N$ is easy to control before training the model. We use the $N$ detected events for the analysis, not the re-ranked events.
dvc_eval first computes the tIoU of all combinations of predicted and ground-truth events, and then computes word-overlap metrics (e.g., METEOR or CIDEr-D) for pairs whose tIoU scores exceed a threshold $\theta$. $\theta$ ranges from 0.3 to 0.9 in steps of 0.2, and the average over thresholds is the output score of these metrics.
SODA stands for story-oriented dense video captioning evaluation, which evaluates the story awareness of the output events. Specifically, it uses dynamic programming to find an alignment of events between the prediction and the ground truth that obtains the maximum story score. The story scores are computed as the product of tIoU and word-overlap metrics. Because SODA evaluates the predicted events while penalizing redundant outputs, it is suitable for measuring the story awareness of recipes.
SODA evaluates whether the output event/sentence pairs are appropriate as a story; thus, we prioritize it over dvc_eval in this study. As word-overlap metrics, we use BLEU, METEOR, and CIDEr-D, which are commonly-used metrics for text generation tasks. We also introduce SODA tIoU, which computes story scores from the tIoU scores alone, rather than the product of tIoU and word-overlap metrics. These metrics can evaluate whether the selected events are appropriate as components of the generated recipes. In this analysis, we vary $N$, the number of event candidates.
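The order-preserving event alignment at the heart of SODA can be sketched as a dynamic program (an illustrative Python sketch, not the official implementation; `score[i][j]` stands for the per-pair story score, e.g., tIoU multiplied by a word-overlap metric):

```python
def soda_alignment(score):
    """Maximum total score over order-preserving one-to-one alignments
    between predicted events (rows) and ground-truth events (columns)."""
    n, m = len(score), len(score[0])
    f = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            f[i][j] = max(f[i - 1][j],                            # skip prediction i
                          f[i][j - 1],                            # skip ground truth j
                          f[i - 1][j - 1] + score[i - 1][j - 1])  # match i with j
    return f[n][m]
```

SODA then normalizes this total by the numbers of predicted and ground-truth events in an F-measure fashion, which is how redundant outputs are penalized.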
III-A Quantitative evaluation
Table I shows the results of oracle selection on the dvc_eval and SODA metrics, indicating that oracle selection outperforms PDVC. In particular, the SODA scores of oracle selection are substantially better than those of PDVC, demonstrating that the recipes obtained by oracle selection are more suitable as a story than those generated by PDVC.
Fig. 2 shows the distribution of tIoU scores of the oracle events on the training and validation sets. The average tIoU increased with the number of candidate events $N$: the larger $N$ was, the more suitable oracle events appeared among the candidates. Both the training and validation sets confirm this tendency.
III-B Qualitative evaluation
Fig. 3 shows a comparison of the recipes generated by oracle selection and the ground truth. The selected event timestamps are close to the ground-truth events, indicating that appropriate selection can construct correct recipes. However, the sentences differ from the ground truth; in particular, the ingredients are different (e.g., “baking soda”, “salt”, and “pepper” are missing in step (1)).
The main reason for this error is that the model does not consider previously predicted events and sentences. This causes the model to miss or hallucinate ingredients and motivates us to propose a model that re-generates sentences for the predicted events, rather than using the corresponding generated sentences without modification.
IV Proposed method
Based on the oracle-based analysis, we hypothesize that we can obtain correct recipes by selecting oracle events from the output events of the DVC model and re-generating sentences for them. Our task requires models to predict the next step in the story of a recipe by memorizing previously predicted events and generated sentences. To achieve this, we propose a joint learning approach with an event selection module and a sentence generation module (Fig. 4), both of which are memory-augmented recurrent transformers. We refer to these modules as the event selector and the sentence generator, respectively. The event selector repeatedly chooses oracle events from the event candidates, and the sentence generator outputs sentences for the selected events. The memories are updated to remember the history of the outputs (selected events/generated sentences) and are used in the subsequent steps.
We formulate this task with the following notation. Let $C = (c_1, \dots, c_N)$ be the event candidates, where each $c_n$ consists of start and end timestamps. Note that $C$ is sorted chronologically by the start time of the events. Given $C$, the model generates $T$ pairs $\{(e_t, s_t)\}_{t=1}^{T}$, where $e_t$, $s_t$, and $T$ represent an index into the oracle event candidates, the corresponding generated sentence, and the number of selected events, respectively. The memories $M^l_t$ in the event selector and $\bar{M}^l_t$ in the sentence generator are updated at each step, where $l$ represents the layer number of the transformers.
IV-A Event selector
The event selector can be divided into two main components: an event encoder and an event transformer. The event encoder converts the given candidate events into event-level representations $E$, based on which the event transformer outputs holistic representations of the events $H$. These are used to compute event probabilities, which represent the likelihood of each candidate being an oracle event. We apply Gumbel softmax resampling to select events and forward them to the sentence generator without breaking the differentiable chain during the training phase. In the inference phase, events are deterministically selected by applying argmax to the event probabilities.
Event encoder. The event encoder converts the events $C$ into representations $E$. First, we extract event clips from the video according to the start and end times of the events and input them into pre-trained visual encoders. One straightforward way to encode events is to average frame-level image representations extracted by ResNet. However, we find that this approach is unable to capture fine-grained event semantics for overlapped events. For example, assume that the ground-truth event is a scene of cutting potatoes, and the event candidates contain two scenes: one of cutting potatoes and tomatoes, and another of cutting only potatoes. We expect the model to select the latter scene because the former contains the extra information of cutting tomatoes. Representing events by averaged frame-level features cannot capture the semantic difference between these events effectively. Thus, it is necessary to employ an event encoder that extracts fine-grained event-level semantics.
To achieve this, we focus on the multiple instance learning-noise contrastive estimation (MIL-NCE) model pre-trained on HowTo100M. The model is trained on more than 100M pairs of events and narrations, and it can capture the event-level semantics of overlapped events. Using this encoder, we obtain the event-level representations $E$. Then, positional encoding (PE) and relative encoding of the events are added to $E$.
Event transformer. Given $E$, the event transformer outputs holistic representations of the events $H$. This module is based on the memory-augmented recurrent transformer, where each layer has memories $M^l_t$
to remember the history of the selected events. The output vectors $H$ and the memories are used to compute the $t$-th event probability as follows:
where MaxPool represents an element-wise max-pooling of vectors. During training, the event selector selects events through Gumbel softmax resampling, which enables us to train the model in an end-to-end manner without breaking the differentiable chain. This means that the loss computed on the sentence generator is backpropagated to the event selector. During inference, the model selects the event index based on the argmax of the event probabilities.
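The selection step can be illustrated with a small NumPy sketch of straight-through Gumbel softmax (a toy version with assumed details; in the actual model, an autodiff framework routes gradients through the soft sample):

```python
import numpy as np

def gumbel_softmax_select(logits, tau=1.0, rng=None, hard=True):
    """Sample a (near-)one-hot selection over event candidates."""
    rng = np.random.default_rng() if rng is None else rng
    # Gumbel(0, 1) noise makes the argmax a sample from softmax(logits)
    g = -np.log(-np.log(rng.uniform(1e-10, 1.0, size=logits.shape)))
    y = np.exp((logits + g) / tau)
    y /= y.sum()
    if hard:
        # straight-through trick: forward a one-hot vector; an autodiff
        # framework would propagate gradients through the soft sample y
        one_hot = np.zeros_like(y)
        one_hot[np.argmax(y)] = 1.0
        return one_hot
    return y
```

At inference time the sampling is skipped and a plain argmax over the event probabilities is used.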
IV-B Sentence generator
Based on the selected event representations, the sentence generator outputs sentences grounded in them. Let the selected event index be $e_t$; the selected event representation can then be written as $E_{e_t}$. This vector is added to the word vectors from the word embedding layer, which combines pre-trained global vectors for word representation (GloVe)222We employ pre-trained 300D word embeddings, which can be downloaded from http://nlp.stanford.edu/data/glove.6B.zip with additional layers, and the result is input into the sentence transformer, another memory-augmented recurrent transformer. The model generates words repeatedly by applying softmax and argmax operations to the output vectors of the sentence transformer.
IV-C Memory update
The memory modules in the event and sentence transformers can be updated separately, according to the memory update equation given in the original paper. Intuitively, however, mixing the memories is effective for coherent recipe generation because the history of the selected events contributes to sentence generation, and vice versa. To achieve this, we update the memory modules by mixing them as follows:
where $W$ represents a single linear layer, and $\odot$ and $\sigma$ represent the Hadamard (element-wise) product and the sigmoid function, respectively. The obtained $M^l_t$ and $\bar{M}^l_t$ are forwarded to the next step.
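One way to realize such gated mixing is sketched below (the gate parameterization here is an illustrative assumption, not the paper's exact equations):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mix_memories(mem_event, mem_sent, W_e, W_s):
    """Blend the event-selector and sentence-generator memories with
    element-wise sigmoid gates computed from linear projections."""
    gate_e = sigmoid(mem_event @ W_e)                           # (T, d) gate
    gate_s = sigmoid(mem_sent @ W_s)
    new_event = gate_e * mem_event + (1.0 - gate_e) * mem_sent  # Hadamard mix
    new_sent = gate_s * mem_sent + (1.0 - gate_s) * mem_event
    return new_event, new_sent
```

Each stream keeps a learned fraction of its own history and absorbs the remainder from the other stream, so event-selection history can inform sentence generation and vice versa.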
IV-D Loss functions
To train the model in an end-to-end manner, we sum two types of losses: an event selection loss and a sentence generation loss. These are formulated as the negative log-likelihoods of the event selection and sentence generation tasks as follows:
where $\mathcal{L}$, $\mathcal{L}_{\mathrm{es}}$, and $\mathcal{L}_{\mathrm{sg}}$ represent the total, event selection, and sentence generation losses, respectively.
V Extended model
In addition to the base model, we also propose an extended model that can generate more grounded recipes in the settings where the inputs are videos with ingredients, as in [22, 38]. This task requires agents to describe detailed manipulations of ingredients from cooking videos.
To achieve this goal, our extended model has two additional modules: (1) a dot-product visual simulator and (2) a textual attention module (Fig. 5). The dot-product visual simulator is introduced based on insights from our previous work, where we discovered that manipulations change the state of the ingredients, yielding state-aware visual representations. For example, “eggs” are transformed into “cracked” and then “stirred” forms to cook scrambled eggs. To address this, we introduced the visual simulator, which reasons about the state transition of ingredients, into the model. Whereas pre-segmented ground-truth events with ingredients are the inputs in our previous work, event candidates with ingredients are the inputs in this study. We therefore extend the simulator into the dot-product visual simulator described in Section V-A. In addition, we introduce a textual attention module that verbalizes grounded ingredients and actions in the recipes. These modules are effective for grounded recipe generation from cooking videos.
Another extension is the addition of an ingredient encoder that converts ingredients into representations. The ingredient encoders consist of pre-trained GloVe word embeddings followed by multi-layer perceptrons (MLPs) with ReLU activation, and are added to the event selector and sentence generator without sharing parameters. Multi-word ingredients (e.g., parmesan cheese) are represented by the average embedding vector of their words. The ingredient representations are concatenated with the event/word representations and input to the event/sentence transformers, as shown in Fig. 5.
V-A Dot-product visual simulator
We first revisit the original visual simulator and then describe how we extend it to the dot-product visual simulator.
Visual simulator revisited. In the original study, the inputs are pairs of ingredients and ground-truth events. Let $X$ and $V$ be the ingredient and ground-truth event vectors encoded by the ingredient encoder (e.g., GloVe) and the event encoder (e.g., MIL-NCE), respectively. Given $(X, V)$, the visual simulator reasons about the state transition of the ingredients by updating them at each step $t$. To this end, it consists of three components: (1) an action selector, (2) an ingredient selector, and (3) an updater. The action selector and ingredient selector predict the executed actions and used ingredients at the $t$-th step. Based on the selected actions/ingredients, the $t$-th ingredient vectors $X_t$ are updated into the $(t+1)$-th new proposal ingredient vectors $X_{t+1}$, which are forwarded to the next step. The visual simulator recurrently repeats the above process until it reaches the last of the ground-truth events.
Proposed extension. The proposed model is similar to the visual simulator in that it has the same three components to update the ingredient vectors. However, instead of the ground-truth event vectors $V$, the event candidates are the inputs in this study; that is, the model needs to predict not only the actions/ingredients but also the events that constitute the recipe story.
To achieve this, we extend the visual simulator into the dot-product visual simulator shown in Fig. 6. It computes the relationships between actions and events and between ingredients and events as attention matrices, and outputs four kinds of vectors: action-weighted and ingredient-weighted event vectors, and event-weighted action and ingredient vectors. The former two are used to calculate the event probabilities, and the latter two are forwarded to the textual attention module.
(1) Action selector outputs event-weighted action vectors and action-weighted event vectors by predicting the executed actions and the related events. Let the action vectors be $A = (a_1, \dots, a_{N_a})$, the pre-defined action embeddings, where $a_j$ represents the $j$-th action and $N_a$ is the number of actions. In Fig. 6, the actions “crack” and “stir” are executed at the oracle event; thus, the corresponding crack and stir indices should be selected in relation to that event. This computation can be formulated as dot-product attention as follows:
where $W$ represents a linear layer and $d$ represents the dimension size of the action and event vectors (the two dimension sizes are set to be equal). We also acquire the action-weighted event vectors as follows:
where $W$ represents a linear layer.
(2) Ingredient selector outputs event-weighted ingredient vectors and ingredient-weighted event vectors by predicting the used ingredients and the related events. For example, in Fig. 6, the raw “eggs” should be selected at the corresponding oracle event. This computation is achieved by replacing the action vectors $A$ with the ingredient vectors $X_t$ in Eq. (8) and Eq. (9) as follows:
where $W$ represents a linear layer.
(3) Updater represents the state transition of the ingredients by updating the ingredient vectors $X_t$. Based on the selected actions and ingredients, the updater computes dot-product attention as follows:
where the max-pooled action vector is expanded by repeating it $N_x$ times ($N_x$ is the number of ingredients).
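The scaled dot-product attention pattern shared by these components can be sketched as follows (shapes and the omission of the linear projections are illustrative assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dot_product_select(actions, events, d):
    """actions: (N_a, d) action embeddings; events: (N, d) event-candidate
    representations. Returns event-weighted action vectors and
    action-weighted event vectors."""
    attn = softmax(actions @ events.T / np.sqrt(d), axis=-1)    # (N_a, N)
    event_weighted_actions = attn @ events                      # (N_a, d)
    attn_t = softmax(events @ actions.T / np.sqrt(d), axis=-1)  # (N, N_a)
    action_weighted_events = attn_t @ actions                   # (N, d)
    return event_weighted_actions, action_weighted_events
```

The ingredient selector follows the same pattern with ingredient vectors in place of the action vectors, and the updater applies it between the ingredient vectors and the pooled action vector.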
Output representations. The action- and ingredient-weighted event representations are used to compute the event probability by substituting them for the event representations in Eq. (2). The event-weighted action vectors and the updated ingredient vectors $X_{t+1}$ are forwarded to the textual attention module. $X_{t+1}$ also serves as the ingredient vectors at the $(t+1)$-th prediction.
V-B Textual attention
The textual attention module encourages the sentence generator to output a recipe grounded in the events through the actions and ingredients. Let $w_k$ be the $k$-th output word vector from the sentence transformer. Given the event-weighted action vectors and the updated ingredient vectors, the textual attention module computes two attention matrices: (1) word-action attention and (2) word-ingredient attention. It then outputs the context vectors as follows:
where $W$ represents a linear layer and $[\cdot\,;\cdot]$ indicates the concatenation of vectors. The context vector is forwarded to a linear layer and softmax activation to obtain the word probability over the vocabulary.
V-C Loss functions
In addition to the losses described in Section IV-D, we introduce two further loss functions: (1) the visual simulator loss and (2) the textual attention loss. Fig. 7 shows an overview of the computation of these two losses.
(1) Visual simulator loss aims to train the visual simulator and consists of two losses: (1) an ingredient selection loss and (2) an action selection loss. Given the action-event and ingredient-event attention matrices in Eq. (8) and Eq. (10), they are computed as the summed negative log-likelihood over the ingredients/actions and the events that constitute the recipe story.
To avoid costly human annotations, we compute these losses through distant supervision, following our previous work. For the ingredient selection loss, labels are derived from whether the sentence of the corresponding ground-truth event contains the ingredients and whether the events are oracle at each step. For the action selection loss, labels are derived from whether the sentence contains actions among the 384 actions defined by , and whether the events are oracle at each step. For example, in Fig. 7, “eggs” at event index 3 is extracted as an ingredient label, and “crack” and “stir” are extracted at the same event index as action labels.
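This distant-supervision labeling can be approximated with simple string matching (a hypothetical sketch; the actual pipeline may additionally use lemmatization and the predefined action list):

```python
def extract_labels(step_sentence, ingredients, action_vocab):
    """Mark an ingredient/action as a positive label for a step if its
    surface form appears in the step's sentence."""
    words = step_sentence.lower().split()
    # an ingredient counts as present if any of its words appears
    ing_labels = [int(any(w in words for w in ing.lower().split()))
                  for ing in ingredients]
    act_labels = [int(act.lower() in words) for act in action_vocab]
    return ing_labels, act_labels
```

The resulting binary labels supervise which rows of the ingredient-event and action-event attention matrices should receive high mass at each step.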
(2) Textual attention loss aims to train the textual attention module and consists of two losses: (1) an ingredient attention loss and (2) an action attention loss. Given the word-ingredient/action attention matrices in Eq. (13) and Eq. (15), they are computed as the summed negative log-likelihood based on whether the $k$-th word matches the ingredients/actions.
Total loss. We simply add these losses to the loss defined in Section IV-D as follows:
VI Experiments
We use the YouCook2 dataset, which consists of 2,000 cooking videos from 89 recipe categories. Each video has 3–16 clips with human-annotated start/end timestamps, and each clip is also annotated with an English sentence. The average video length is 5.27 minutes. The original YouCook2 dataset does not contain ingredient annotations; thus, we use the YouCook2-ingredient dataset, which provides events, sentences, and additional ingredient annotations for 1,788 of the videos. We use the official split proposed by  for evaluation. Because the test set is not available online, we use the validation set for evaluation.
Event encoder. We employ the two different encoders described below:
TSN converts appearance and optical flow into frame-level representations and then outputs event-level representations by averaging them. For appearance, we use 2,048D feature vectors extracted from the “Flatten-673” layer of ResNet-200. For optical flow, we use 1,024D feature vectors extracted from the “global pool” layer of BN-Inception. Note that these models are pre-trained only on vision resources (e.g., ImageNet).
MIL-NCE converts the events into representations. The model is pre-trained on HowTo100M, which consists of 100M automatically constructed clip-narration pairs. We expect MIL-NCE to yield better event representations than TSN because it is pre-trained on instructional vision-and-language resources.
Data preprocessing. As in , we truncated sequences longer than 100 frames for each clip and 20 words for each sentence, and set the maximum length of the clip sequence to 12. Finally, we built the vocabulary from words that occur at least three times, resulting in 991 words.
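The truncation and vocabulary construction can be sketched as follows (a simplified version; the exact tokenization rules are assumptions):

```python
from collections import Counter

def build_vocab(sentences, min_count=3, max_len=20):
    """Truncate sentences to max_len tokens and keep words occurring at
    least min_count times; rarer words map to <unk>."""
    truncated = [s.split()[:max_len] for s in sentences]
    counts = Counter(w for s in truncated for w in s)
    vocab = {"<unk>": 0}
    for w, c in counts.items():
        if c >= min_count:
            vocab[w] = len(vocab)
    return vocab, truncated
```

Applied to the YouCook2 training sentences with `min_count=3`, this kind of procedure yields the reported vocabulary of roughly a thousand words.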
Hyper-parameter settings. For both the encoder and decoder transformers, we set the hidden size to 768, the number of layers to two, and the number of attention heads to 12. We train the model using the optimization method described in [7, 17]: we use the Adam optimizer, with the initial learning rate, momentum parameters, and L2 weight decay following those works, and warm up the learning rate over the first five epochs. We set the batch size to 16 and train for at most 50 epochs, using early stopping based on SODA:CIDEr-D.
Models. We test the proposed method by comparing it with four state-of-the-art dense video captioning models: the PDVC introduced in Section III and the three models described below:
Masked Transformer (MT)  is a transformer-based encoder-decoder DVC model that can be trained in an end-to-end manner by using a differentiable mask.
Event-centric Hierarchical Representation for DVC (ECHR)  is an event-oriented encoder-decoder architecture for DVC. The ECHR incorporates temporal and semantic relations into the output events for generating captions accurately.
SGR  is a top-down DVC model consisting of four processes. The model (1) generates an overall paragraph from the input video, (2) grounds the sentences with events in the video, (3) refines captions based on the grounded events, and (4) refines events, referring to the refined captions.
Ablations. We examine the impact of the components of the proposed method through ablation studies on the following variations:
Base model (B) is the model introduced in Section IV.
B + Ingredient (BI) incorporates the ingredient encoder into the base model.
BI + Visual simulator (BIV) incorporates the visual simulator into the BI model.
BIV + Textual attention (BIVT) additionally incorporates the textual attention module into the BIV model.
VI-A Word-overlap evaluation
| Model | Input modality | Video feature | dvc_eval BLEU4 | dvc_eval METEOR | dvc_eval CIDEr-D | SODA METEOR | SODA CIDEr-D | SODA tIoU |
|---|---|---|---|---|---|---|---|---|
| Base + Ingredients (BI) | Video + Ingredients (VI) | MIL-NCE | 1.39 | 7.18 | 31.07 | 6.44 | 31.69 | 35.10 |
| BI + Visual simulator (BIV) | VI | MIL-NCE | 1.40 | 7.27 | 32.67 | 6.46 | 32.95 | 34.13 |
| BIV + Textual attention (BIVT) | VI | MIL-NCE | 1.92 | 8.04 | 37.24 | 7.29 | 38.93 | 35.06 |
Table II demonstrates the results of the word-overlap evaluation on both dvc_eval and SODA with BLEU, METEOR, and CIDEr-D. We observe that the base model consistently outperforms state-of-the-art captioning models by a significant margin in both evaluations. In the ablation, the BIV model outperforms the BI model, and the BIVT model further improves the BIV model. This indicates that both the dot-product visual simulator and the textual attention module are effective for accurate recipe generation.
VI-B Discussion on the number of predicted events
In addition to the word-overlap metrics, we discuss the generated recipes from the perspective of the number of predicted events. Table III shows the percentage of recipes that satisfy $|n_{\mathrm{pred}} - n_{\mathrm{gt}}| \leq \theta$, where $n_{\mathrm{pred}}$, $n_{\mathrm{gt}}$, and $\theta$ represent the number of predicted events, the number of ground-truth events, and a threshold, respectively. In this experiment, we vary the threshold $\theta$. The results demonstrate that the proposed models consistently predict a more precise number of events than the PDVC. Fig. 8 shows the histogram of the number of predicted events. While the PDVC outputs specific numbers of events (i.e., 5, 7, 10), the histogram of the proposed method draws a curve similar to that of the ground truth.
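The event-count criterion above can be sketched in a few lines; the function name and inputs are ours, for illustration only.

```python
def count_agreement(pred_counts, gt_counts, threshold):
    """Percentage of videos whose |n_pred - n_gt| <= threshold."""
    pairs = list(zip(pred_counts, gt_counts))
    hits = sum(abs(p - g) <= threshold for p, g in pairs)
    return 100.0 * hits / len(pairs)
```

Raising the threshold relaxes the criterion, so the reported percentage is monotonically non-decreasing in $\theta$.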
VI-C Qualitative analysis
Fig. 9 shows the predicted events and generated recipes from the PDVC, B, and BIVT models, in comparison with the ground truth. In terms of the predicted events, the PDVC outputs highly overlapping events. The proposed method, on the other hand, predicts events in the correct order with minimal overlap. This story-oriented sequential event prediction is an advantage of the proposed method. In terms of the generated recipes, the PDVC repeatedly generates the same content, ignoring the events (see (c) to (g)). The proposed methods suppress this problem. A comparison of B and BIVT reveals that BIVT can generate sentences that are grounded in the events (e.g., "pork" is accurately verbalized in (a) by BIVT, whereas "fat" is generated in (a) by B).
VI-D Discussion on the detailed model settings
Here, we discuss the detailed model settings from four perspectives: (1) loss ablation studies, (2) memory update strategies, (3) event encoders, and (4) sensitivity to the number of event candidates. The results demonstrate that these settings are important for success in our task.
VI-D1 Loss ablation studies
Table IV shows ablation studies on the loss functions of the BIVT model. The experiment yields two insights. First, all of the losses are essential for training the model; this is confirmed by comparing the full model with the variant that removes all of them (compare (a) and (g)). Second, the ingredient-based losses contribute more than the action-based ones: removing the ingredient selection loss degrades performance more than removing the action selection loss (compare (b) with (d) and (c) with (d)). The same tendency holds for the ingredient attention and action attention losses.
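For intuition, these ablations can be viewed as dropping terms from a combined training objective. The sketch below uses placeholder term names and values of our own; the paper's exact loss formulation and weighting may differ.

```python
def total_loss(losses, weights=None):
    """Weighted sum of named loss terms; ablating a loss = dropping its key."""
    weights = weights or {}
    return sum(weights.get(name, 1.0) * value for name, value in losses.items())

# Hypothetical per-term loss values, for illustration only.
full = {
    "generation": 2.3,            # sentence-generation cross-entropy
    "ingredient_selection": 0.4,  # auxiliary selection losses
    "action_selection": 0.3,
    "ingredient_attention": 0.2,  # auxiliary attention losses
    "action_attention": 0.1,
}
# An ablated variant simply omits one term from the objective.
ablated = {k: v for k, v in full.items() if k != "ingredient_selection"}
```

Comparing `total_loss(full)` against ablated variants mirrors the table's row-by-row comparisons, where removing a term changes what the model is optimized for.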
VI-D2 Memory update strategies: separate or joint?
Table V shows a comparison of the memory update strategies. While the separate strategy keeps the memories of the event and sentence transformers independent, the joint approach proposed in Section IV fuses these memories. The results demonstrate that the joint approach outperforms the separate one, indicating the effectiveness of the joint memory update strategy.
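To make the contrast concrete, here is an illustrative numeric sketch (not the paper's exact mechanism): we model each memory as a plain feature vector and fuse by element-wise averaging, which is only one possible fusion choice.

```python
def update_separate(event_mem, sent_mem, event_h, sent_h, alpha=0.5):
    """Each memory is refreshed only from its own stream's hidden state."""
    new_event = [alpha * m + (1 - alpha) * h for m, h in zip(event_mem, event_h)]
    new_sent = [alpha * m + (1 - alpha) * h for m, h in zip(sent_mem, sent_h)]
    return new_event, new_sent

def update_joint(event_mem, sent_mem, event_h, sent_h, alpha=0.5):
    """Both memories are first fused, so each stream sees the other's state."""
    fused = [(e + s) / 2 for e, s in zip(event_mem, sent_mem)]
    new_event = [alpha * f + (1 - alpha) * h for f, h in zip(fused, event_h)]
    new_sent = [alpha * f + (1 - alpha) * h for f, h in zip(fused, sent_h)]
    return new_event, new_sent
```

Under the joint update, information about selected events can flow into the sentence memory and vice versa, which is the intuition behind its advantage in Table V.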
VI-D3 Event encoders
Table VI shows the performance difference when changing the event encoder, indicating that MIL-NCE is significantly superior to TSN in all of the settings. This is because MIL-NCE is pre-trained on the vision-and-language resource HowTo100M, which captures the fine-grained event-level semantics of cooking procedures. We conclude that pre-training on an appropriate resource is essential for effective performance on our task.
VI-D4 Parameter sensitivity to the number of event candidates
The PDVC allows users to select the number of event candidates when training the model; the original study on the PDVC fixes this number for the YouCook2 dataset. In this experiment, we vary this parameter to investigate the sensitivity of the model. Table VII shows the performance change of the proposed method in different model settings: B, BIV, and BIVT. The results demonstrate that the proposed method performs consistently well across these settings, and that increasing the number of candidates degrades performance. Although more candidates make the maximum tIoU larger (as shown in Section III), the ratio of events that are not oracle yet highly overlap with the ground truth also increases. This prevents the model from selecting oracle events precisely and causes it to overfit the training set.
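The oracle selection used throughout this analysis can be sketched as follows: for each ground-truth event, pick the candidate segment with maximum temporal IoU. The names are ours; this illustrates the analysis, not released code.

```python
def tiou(a, b):
    """Temporal IoU between two (start, end) segments."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def oracle_select(candidates, gt_events):
    """For each ground-truth event, return the max-tIoU candidate and its tIoU."""
    picks = []
    for gt in gt_events:
        best = max(candidates, key=lambda c: tiou(c, gt))
        picks.append((best, tiou(best, gt)))
    return picks
```

With a larger candidate pool, the maximum tIoU per ground-truth event can only grow, but so does the number of near-oracle distractors the selector must reject, which matches the degradation observed in Table VII.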
VII Conclusion
In this paper, we tackled recipe generation from unsegmented cooking videos, a task that requires agents to (1) extract key events that are essential to dish completion and (2) generate sentences for the extracted events. We analyzed the output of the DVC model and observed that, although (1) several events are adoptable as a story of a recipe, (2) the generated sentences for such events are not grounded in the visual content. We therefore hypothesized that correct recipes can be obtained by selecting oracle events from the output events of the DVC model and re-generating sentences for them. To achieve this, we proposed a transformer-based joint learning approach for the event selector and sentence generator. The event selector selects oracle events from the event candidates in the correct order, and the sentence generator generates a recipe grounded in the events. We also extended the model to take ingredient inputs; the additional modules encourage the model to generate more grounded recipes. In experiments, we confirmed that the base model outperforms the state-of-the-art DVC models and that the extended model further boosts performance. In addition, we showed that the proposed models can select the correct number of events, in line with the ground-truth events. The qualitative evaluation revealed that the proposed approaches can select events in the correct order and generate recipes grounded in the video content. Finally, we discussed the detailed experimental settings for optimal recipe generation.
References
- (2005) METEOR: an automatic metric for MT evaluation with improved correlation with human judgments. In Proc. ACL Workshop IEEMMTS, pp. 65–72.
- (2018) Discourse-aware neural rewards for coherent text generation. In Proc. NAACL-HLT, pp. 173–184.
- (2018) Simulating action dynamics with neural process networks. In Proc. ICLR.
- (2019) Storyboarding of recipes: grounded contextual generation. In Proc. ACL, pp. 6040–6046.
- (2021) Sketch, ground, and refine: top-down dense video captioning. In Proc. CVPR, pp. 234–243.
- (2009) ImageNet: a large-scale hierarchical image database. In Proc. CVPR, pp. 248–255.
- (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In Proc. NAACL, pp. 4171–4186.
- (2016) DAPs: deep action proposals for action understanding. In Proc. ECCV, pp. 768–784.
- (2020) SODA: story oriented dense video captioning evaluation framework. In Proc. ECCV, pp. 517–531.
- (2017) Cookpad image dataset: an image collection as infrastructure for food research. In Proc. ACM SIGIR, pp. 1229–1232.
- (2016) Deep residual learning for image recognition. In Proc. CVPR, pp. 770–778.
- (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In Proc. ICML, pp. 448–456.
- (2017) Categorical reparameterization with Gumbel-Softmax. In Proc. ICLR.
- (2016) Globally coherent text generation with neural checklist models. In Proc. EMNLP, pp. 329–339.
- (2015) Adam: a method for stochastic optimization. In Proc. ICLR.
- (2017) Dense-captioning events in videos. In Proc. ICCV, pp. 706–715.
- (2020) MART: memory-augmented recurrent transformer for coherent video paragraph captioning. In Proc. ACL, pp. 2603–2614.
- (2020) End-to-end learning of visual representations from uncurated instructional videos. In Proc. CVPR, pp. 9879–9889.
- (2019) HowTo100M: learning a text-video embedding by watching hundred million narrated video clips. In Proc. ICCV, pp. 2630–2640.
- (2009) Distant supervision for relation extraction without labeled data. In Proc. ACL-IJCNLP, pp. 1003–1011.
- (2019) Procedural text generation from a photo sequence. In Proc. INLG, pp. 409–414.
- (2021) State-aware video procedural captioning. In Proc. ACMMM, pp. 1766–1774.
- (2020) Structure-aware procedural text generation from an image sequence. IEEE Access 9, pp. 2125–2141.
- (2020) Multi-modal cooking workflow construction for food recipes. In Proc. ACMMM, pp. 1132–1141.
- (2022) Learning program representations for food images and cooking recipes. In Proc. CVPR, pp. 16559–16569.
- (2002) BLEU: a method for automatic evaluation of machine translation. In Proc. ACL, pp. 311–318.
- (2014) GloVe: global vectors for word representation. In Proc. EMNLP, pp. 1532–1543.
- (2019) Inverse Cooking: recipe generation from food images. In Proc. CVPR, pp. 10453–10462.
- (2017) Learning cross-modal embeddings for cooking recipes and food images. In Proc. CVPR, pp. 3020–3028.
- (2019) Dense procedure captioning in narrated instructional videos. In Proc. ACL, pp. 6382–6391.
- (2020) Learning semantic concepts and temporal alignment for narrated video procedural captioning. In Proc. ACMMM, pp. 4355–4363.
- (2017) Attention is all you need. In Proc. NeurIPS, pp. 5998–6008.
- (2015) CIDEr: consensus-based image description evaluation. In Proc. CVPR, pp. 4566–4575.
- (2020) Structure-aware generation network for recipe generation from images. In Proc. ECCV, pp. 359–374.
- (2019) Temporal segment networks for action recognition in videos. IEEE Trans. on Pattern Analysis and Machine Intelligence, pp. 2740–2755.
- (2021) End-to-end dense video captioning with parallel decoding. In Proc. ICCV, pp. 6847–6857.
- (2021) Event-centric hierarchical representation for dense video captioning. IEEE Trans. on Circuits and Systems for Video Technology, pp. 1890–1900.
- (2022) Ingredient-enriched recipe generation from cooking videos. In Proc. ICMR, pp. 249–257.
- (2021) Reinforcement learning for logic recipe generation: bridging gaps from images to plans. IEEE Transactions on Multimedia.
- (2019) Grounded video description. In Proc. CVPR, pp. 6578–6587.
- (2018) Towards automatic learning of procedures from web instructional videos. In Proc. AAAI, pp. 7590–7598.
- (2018) End-to-end dense video captioning with masked transformer. In Proc. CVPR, pp. 8739–8748.