Lip-reading is an appealing tool for intelligent human-computer interaction and has gained increasing attention in recent years. It aims to infer the speech content from visual information such as the lip movements, and is therefore robust to ubiquitous acoustic noise. This appealing property lets it play an important role as a complement to audio-based speech recognition systems, especially in noisy environments. At the same time, lip-reading is also crucial for several other potential applications, such as transcribing and re-dubbing archival silent films, sound-source localization, liveness verification and so on. Benefiting from the vigorous development of deep learning (DL) and the emergence of several large-scale lip-reading datasets, such as GRID, LRW, LRW-1000 and LRS, lip-reading has made great progress over the past two years.
Typically, lip-reading can be seen as a sequence-to-sequence (seq2seq) problem that translates the lip movement sequence into a character or word sequence, as shown in Fig. 1. Several previous works have successfully introduced seq2seq models for lip-reading. However, most seq2seq-based methods suffer from two drawbacks. The first is the exposure bias resulting from the "teacher-forcing" strategy. Most current seq2seq models are trained to make a correct prediction given the ground-truth word at the previous time step, a strategy named "teacher-forcing" and widely used in lip-reading. This strategy helps the model converge quickly, but the resulting model depends heavily on the ground-truth words at the previous time steps. It is therefore difficult to obtain consistent performance at test time, where no ground truth is available and the model has to predict based on its own previous predictions. This discrepancy inevitably yields inaccurate results, and the errors accumulate quickly along the sequence, leading to worse and worse predictions in the end. The second problem is the inconsistency between the optimized discriminative target and the final non-differentiable evaluation metric. For most lip-reading models, cross-entropy minimization (CE loss) is applied as the discriminative optimization target: the CE loss is computed at each time step, and the average over all time steps is used to measure the quality of the predictions. This leads to two problems. Firstly, the optimized model may not perform well at test time due to the inconsistency between the CE loss and the evaluation metrics WER/CER (word/character error rate). Secondly, the optimization computes the cost at each time step independently, giving little consideration to the predictions before and after each time step. This could lead to the case where only a single time step, or just a few time steps with larger losses than their neighbours, are predicted well.
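The error-accumulation behaviour described above can be illustrated with a toy, hand-built bigram "model" (a deliberate simplification, not our network; the transition table and the sentence are hypothetical):

```python
# A toy next-character table standing in for a trained decoder. "b" is
# ambiguous in the training text "bin blue"; the greedy model commits to "l".
next_char = {"b": "l", "i": "n", "n": " ", " ": "b", "l": "u", "u": "e"}

def decode(start, steps, ground_truth=None):
    out = [start]
    for t in range(steps):
        # Teacher forcing conditions on the true previous character;
        # free-running decoding conditions on the model's own output.
        prev = ground_truth[t] if ground_truth is not None else out[-1]
        out.append(next_char.get(prev, "?"))
    return "".join(out)

truth = "bin blue"
teacher_forced = decode("b", len(truth) - 1, ground_truth=truth)
free_running = decode("b", len(truth) - 1)
```

Under teacher forcing the single mistake ("l" for "i") stays local, since the next input is still the correct character; in free-running decoding the same mistake derails every subsequent step.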
Inspired by the popular convolutional operation, which has the appealing properties of local perception and weight sharing, we propose a novel pseudo-convolutional policy gradient (PCPG) based seq2seq model to solve the above two problems for robust lip-reading. On the one hand, we introduce reinforcement learning (RL) into the seq2seq model to connect the optimized discriminative target directly with the evaluation metrics WER/CER. On the other hand, we mimic the computation process of the traditional convolutional operation to consider more nearby time steps when computing the reward and loss at each time step. Through a thorough evaluation and comparison on several lip-reading benchmarks, we demonstrate both the effectiveness and the generalization ability of the proposed model. Finally, we report either new state-of-the-art performance or competitive accuracy on both word-level and sentence-level benchmarks.
II Related Work
In this section, we first give a brief review of previous work on the lip-reading task. Then we discuss the work related to the seq2seq models and reinforcement learning involved in our model.
II-A Lip-reading

Lip-reading has made substantial progress since the emergence of deep learning (DL) [10, 7, 9, 8, 20, 4, 3, 30, 29, 14]. These attempts can be divided into two strands according to their particular task.
The first strand mainly focuses on word-level lip-reading. This type of method aims to classify the whole image sequence into a single word class, no matter how many frames are in the sequence. For example, one line of work proposed to extract frame-level features with a VGG network and then combine the features of all frames at several different stages to obtain the final prediction for the whole sequence. Other authors proposed an end-to-end deep learning architecture combining a spatiotemporal convolutional module, a ResNet module and a final bidirectional LSTM for word-level lip-reading, which obtained state-of-the-art performance at the time of publication. In this paper, we keep a similar front-end module and present the details in the next section.
The second strand pays more attention to the sentence-level lip-reading task. Different from the word-level task, which treats the whole image sequence as a single category, a sentence-level method has to decode the character or word at each time step to output a sentence. The LipNet model is the first end-to-end sentence-level lip-reading model. It employs the CTC loss to predict different characters at different time steps, which then compose the final sentence-level prediction. Subsequent work evaluated not only the CTC loss but also a beam-search based decoding process for lip-reading. However, CTC-based methods all rest on the assumption that the predictions at different time steps are independent of each other, which we think is improper in a sequence-based task. In the meantime, the CTC loss also has the limitation that the input image sequence must be longer than the speech content sequence. Therefore, seq2seq models have gradually been introduced to sentence-level lip-reading.
II-B Seq2Seq Models
Sequence-to-sequence (seq2seq) models typically contain an RNN-based encoder that encodes the input image sequence into a vector and an RNN-based decoder that generates the prediction at each time step. In the decoding process, an attention mechanism is usually introduced so that the prediction at each output time step can "observe" all time steps of the input sequence. For example, [7, 9, 1] have successfully introduced seq2seq models for sentence-level lip-reading.
However, the usual seq2seq models suffer from two main limitations. Firstly, most current seq2seq models are trained with "teacher forcing", which provides the ground-truth word at the previous time step as a condition for predicting the result at the next time step. In fact, at test time, where no ground-truth words are available, the model has to predict each output based on its own previous predictions. This can easily lead to error accumulation along the sentence. Secondly, most seq2seq models are optimized by minimizing the sum of the cross-entropy losses at all time steps. But at evaluation time, the metrics take the whole sentence into account with discrete and non-differentiable measures, such as BLEU, ROUGE, METEOR, CIDEr, WER (word error rate), CER (character error rate), SER (sentence error rate), and so on. The inconsistency between the loss and the evaluation metric is a serious problem: even with a very small cross-entropy loss during training, we may still not obtain good performance at test time.
In this paper, we propose a novel pseudo-convolutional policy gradient (PCPG) based seq2seq model for the lip-reading task, as shown in Fig. 2. In our model, the evaluation metric is introduced directly, in the form of a reward, to optimize the model together with the original discriminative target. At the same time, a pseudo-convolutional operation is performed over the reward and loss dimension to take more context around each time step into account, generating a robust reward and loss for the whole optimization. The pseudo-convolutional operation mimics the local perception and weight sharing properties of the convolutional operation, enhancing the contextual relevance among different time steps for robust lip-reading.
III The Proposed Work
In this section, we present the proposed PCPG (pseudo-convolutional policy gradient) based seq2seq model in detail. We first describe the overall model architecture, then introduce the PCPG based learning process, and finally state the advantages of our method over traditional methods.
III-A The Overall Model Architecture
As shown in Fig. 2, our model can be divided into two main parts: the video encoder (shown in green) and the GRU based decoder (shown in yellow). The video encoder is responsible for encoding the spatiotemporal patterns in the image sequence to obtain a preliminary representation of the sequence. After encoding, the GRU decoder generates a prediction at each time step so as to maximize the reward at each time step.
The CNN and RNN based Encoder: As shown in Fig. 3, the encoder consists of two main modules: a CNN based front-end that encodes the short-term spatiotemporal patterns, and an RNN based back-end that models the long-term global spatiotemporal patterns. The input image sequence first goes through a 3D convolutional layer to capture the initial short-term patterns in the sequence. A ResNet-18 module then captures the specific movement patterns at each time step. Finally, a 2-layer Bi-GRU produces a global representation of the whole sequence. We use the GRU's output o and hidden-state vector h to record the patterns of the input video x, which is defined as

(o, h) = E(x_1, x_2, ..., x_T),

where E, x and T denote the encoder, the original input video and the temporal length of the input video respectively.
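As a rough sketch of how tensor sizes flow through this encoder, the following traces the shapes under assumed kernel/stride/padding values that are typical for lip-reading front-ends; the exact configuration (input resolution, 3D-conv kernel, GRU hidden size) is an assumption, not taken from the text above:

```python
def conv_out(size, kernel, stride, pad):
    """Standard output-size formula for a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

T, H, W = 75, 112, 112        # assumed input: 75 frames of 112x112 mouth crops
# 3D conv front-end: kernel (5, 7, 7), stride (1, 2, 2), padding (2, 3, 3)
T1 = conv_out(T, 5, 1, 2)     # temporal length preserved
H1 = conv_out(H, 7, 2, 3)     # spatial size halved
# Spatial pooling + ResNet-18 applied per frame -> one 512-d vector per frame.
feat_dim = 512
# 2-layer Bi-GRU over the T1 steps (assumed hidden size 256 per direction):
enc_out_shape = (T1, 2 * 256)
```

The key point is that the front-end never downsamples time, so the Bi-GRU still sees one feature vector per input frame.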
The PCPG based RNN Decoder: Our PCPG based RNN decoder is shown in Fig. 4. Given the representation of an input sequence, a 2-layer RNN decodes a character y_t at each output time step t, where t = 1, 2, ..., N and N denotes the maximum length of the output text sequence. The PCPG based loss then guides the learning process of the model. In this paper, we use the GRU as the basic RNN unit.
To learn the dependency of each output character on each input time step, an attention mechanism is introduced into the decoding process, which takes advantage of the context information around each input time step s to aid the decoding at each output time step t, where s and t denote the time steps in the input image sequence and the output text sequence respectively. With the decoder GRU's hidden state h_t^d (where d denotes the decoder), we can compute the attention weight α_{t,s} of the current time step t with respect to each input time step s, and the corresponding output y_t, as
where α_{t,s} ≥ 0 and Σ_s α_{t,s} = 1. Finally, y_t is used as the final output at time step t.
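A minimal, self-contained sketch of such an attention step is below; the dot-product score is an illustrative assumption, since the text does not pin down the exact score function:

```python
import math

def attend(dec_hidden, enc_outputs):
    """Softmax attention weights over encoder steps, plus the context vector."""
    # Dot-product score between the decoder state and each encoder output.
    scores = [sum(d * e for d, e in zip(dec_hidden, enc)) for enc in enc_outputs]
    m = max(scores)                                  # for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]                  # alpha_{t,s}: sums to 1
    dim = len(enc_outputs[0])
    context = [sum(w * enc[i] for w, enc in zip(weights, enc_outputs))
               for i in range(dim)]
    return weights, context

# The decoder state matches the first encoder step, so attention peaks there.
w, c = attend([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
```

The weights satisfy exactly the two constraints above: non-negativity and summing to one over the input time steps.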
III-B The PCPG based Learning Process
With the model given above, a popular way to learn it is to minimize the cross-entropy loss at each time step as follows:
where y_t is the predicted class label at time step t and C is the number of categories to be predicted at each time step. In this paper, the C categories at each time step include the alphabet letters, the numbers, the space, and the beginning, padding and ending symbols.
At each time step t, the model generates a prediction y_t according to the predictions (y_1, ..., y_{t-1}) at the previous time steps. In other words, the prediction at each time step is decided by the predictions at the previous time steps.
Besides the above optimization target, we also view the seq2seq model as an 'agent' that interacts with an external 'environment', which here corresponds to the video frames, words or sentences, as shown in Fig. 2. With the parameters of the model denoted as θ, the model can be viewed as a policy p_θ leading to an 'action' of choosing a character to output. At each time step t, the agent gets a new internal 'state', which is decided by the attention weight α_{t,s}, the previous hidden state and the predicted character y_{t-1}. With this state, there is an immediate reward r_t to evaluate the benefit and cost of predicting the character at time step t. The training goal is then to maximize the expected reward E[R], where R refers to the cumulative reward. In the following, we describe the reward function and the principle used to update the parameters in detail.
Reward function: In lip-reading tasks, the performance of the model is finally evaluated by CER or WER, both of which are usually obtained from the edit distance (Levenshtein distance) between the predicted word/sentence and the ground-truth word/sentence. Here, we choose the negative CER of the whole sentence as the immediate reward used to evaluate the effect of the prediction at each time step t, which is defined as follows:
where g refers to the ground-truth text sequence, L is the length of g, and CER(·, ·) refers to the CER between two character sequences, computed via the edit distance.
An example of the process is shown in Fig. 2, where the ground truth is the sequence 'bin blue at f two now' and the old state (i.e. the previously decoded sequence) is 'bin blue at f t'. The model observes the old state and then takes the action of choosing the character 'w', generating the new state 'bin blue at f tw'. We then compute the reward at this time step by Eq. (4) to evaluate the effect of predicting 'w'.
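The reward of Eq. (4) can be sketched directly from the description above (edit distance normalized by the ground-truth length, then negated); function and variable names are ours:

```python
def edit_distance(a, b):
    """Levenshtein distance between character sequences a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def reward(pred, truth):
    """Immediate reward: negative CER of the current prediction."""
    return -edit_distance(pred, truth) / len(truth)
```

For the Fig. 2 example, the correct continuation 'bin blue at f tw' earns a strictly higher reward than a wrong one such as 'bin blue at f tx', which is exactly the signal the policy gradient exploits.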
Optimization: We use r_t and R_t to denote the immediate reward at time step t and the future expected reward from time step t respectively. Given the reward r_t at each time step, the future expected reward at the current time step is computed as R_t = Σ_{i=t}^{N} γ^{i-t} r_i, where γ is the discount factor and N is the maximum length of the character sequence. The final reward for the whole sequence is R = R_1, which denotes the cumulative reward of the whole prediction from the beginning (t = 1) to the end (t = N). Inspired by the local perception and weight sharing properties of the convolutional operation, we compute the PCPG based loss as
where w, k, p_θ and θ denote the kernel weights, the kernel size, the distribution defined by the model and the parameters respectively. The computation process is shown in Fig. 5. To keep the value of the gradient in PCPG at the same quantitative level as in the traditional PG, we set w_i = 1/k, where k is the kernel size and w_i is the i-th kernel weight.
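Under the definitions above, the return computation and the pseudo-convolutional averaging can be sketched as follows, with a uniform kernel w_i = 1/k per the weight choice just stated (function names are ours):

```python
def discounted_returns(rewards, gamma):
    """R_t = sum_{i>=t} gamma^(i-t) * r_i, computed right-to-left."""
    R, out = 0.0, []
    for r in reversed(rewards):
        R = r + gamma * R
        out.append(R)
    return out[::-1]

def pcpg_windows(values, k, s):
    """Average per-step values over windows of size k with stride s
    (adjacent windows overlap by k - s steps)."""
    w = 1.0 / k                        # uniform kernel weights w_i = 1/k
    return [sum(w * v for v in values[i:i + k])
            for i in range(0, len(values) - k + 1, s)]
```

With k = 1 and s = 1 the windowing is the identity, recovering the traditional per-step policy gradient; with k > 1 each window blends a step's return with its neighbours, which is the "local perception" the loss exploits.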
Finally, when a pair of a prediction result and its ground truth is generated, we use a weighted combination of the cross-entropy loss and the PCPG based loss as the final loss of our PCPG based seq2seq model, which can be defined as:
where λ is a scalar weight balancing the two loss functions.
In practice, it is usually intractable to integrate over all possible transcriptions to compute the above gradient of the expected reward. We therefore introduce Monte Carlo sampling to sample transcription sequences and estimate the true gradient, as shown in Fig. 4. The gradient can then finally be computed as:
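The Monte Carlo estimate can be illustrated on a toy one-parameter policy with two actions; this shows only the sampling idea (sampled sequences standing in for sampled transcriptions), not our full model:

```python
import math
import random

random.seed(0)
theta = 0.0                                    # logit of choosing action 1
for step in range(200):
    p1 = 1.0 / (1.0 + math.exp(-theta))        # sigmoid policy pi_theta
    grads = []
    for _ in range(16):                        # Monte Carlo samples
        a = 1 if random.random() < p1 else 0   # sample an "action"
        r = 1.0 if a == 1 else 0.0             # action 1 earns the reward
        grads.append(r * (a - p1))             # r * d log pi(a) / d theta
    theta += 0.5 * sum(grads) / len(grads)     # ascend the expected reward
p1 = 1.0 / (1.0 + math.exp(-theta))
```

Averaging the sampled score-function terms replaces the intractable sum over all outcomes, and the policy concentrates on the higher-reward action.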
Therefore, the parameters θ can be updated as follows:
where η denotes the learning rate.
III-C Comparison with the Traditional Policy Gradient
Compared with the traditional policy gradient (PG), the PCPG has two additional important properties borrowed from the convolutional operation: local perception and weight sharing, as shown in Fig. 5. In the traditional PG, there is no concept of a receptive field, and the loss function cannot make use of context information. However, lip-reading is a sequence-to-sequence task, and context is very important for accurate decoding. With local perception and weight sharing, the proposed PCPG enables the model to establish stronger semantic relationships among different time steps in the optimization process.
On the other hand, it is well known that training models with RL is usually unstable [6, 18, 15]. With a traditional PG algorithm, large gradient changes arise from the randomness in the process of deciding an action. In our work, the immediate loss at each time step is averaged over multiple time steps in PCPG, as shown in Eq. (7). At the same time, the overlapping parts further keep the local gradient values from changing dramatically. The model therefore obtains more favorable contextual loss constraints, making convergence more stable and faster with the proposed PCPG.
IV Experiments

In this section, we evaluate our method on three large-scale benchmarks, including both word-level and sentence-level lip-reading benchmarks. At the same time, we discuss the effects of the pseudo-convolutional kernel's hyperparameters, the kernel size k and the stride s, on the lip-reading performance through a detailed ablation study. By comparison with several other related methods, the advantages of our proposed PCPG based seq2seq model are clearly shown.
IV-A Datasets

We evaluate our method on three datasets in total, including the sentence-level benchmark GRID and the large-scale word-level datasets LRW and LRW-1000.
GRID, released in 2006, is a widely used sentence-level benchmark for the lip-reading task. There are 34 speakers, each of whom speaks 1000 sentences, leading to about 34,000 sentence-level videos in total. All the videos are recorded with a fixed, clean, single-colored background, and the speakers are required to face the camera with a frontal view while speaking.
LRW, released in 2016, is the first large-scale word-level lip-reading dataset. The videos are all collected from BBC TV broadcasts, covering several different TV shows and hence various speaking conditions in the wild. Most current methods perform the word-level task with classification-based methods. In our experiments, we explore the potential of seq2seq models for the word-level task, but we also perform classification-based experiments to evaluate the representation learned by our PCPG based model.
LRW-1000, released in 2018, is a naturally-distributed large-scale benchmark for Mandarin word-level lip-reading. It contains 1000 Mandarin words and more than 700 thousand samples in total. Besides covering a diverse range of speaker poses, ages, make-up, genders and so on, this dataset imposes no length or frequency constraints on the words, forcing models to be robust and adaptive to the practical case where some words are indeed longer or more frequent than others. These properties make LRW-1000 very challenging for most lip-reading methods.
IV-B Implementation Details
In our experiments, all the images are normalized with the overall mean and variance of the whole dataset. When fed into the models, each frame is randomly cropped for training, with all frames in a sequence cropped at the same random position. All frames are centrally cropped for validation and test.
Our implementation is based on PyTorch and the model is trained on servers with four NVIDIA Titan X GPUs, each with 12 GB of memory. We use the Adam optimizer with an initial learning rate of 0.001. Dropout with probability 0.5 is applied to the RNN and the final FC layer of the model. For PCPG, we consider the following three settings: (1) k = 1, s = 1, which degenerates to the usual REINFORCE algorithm, i.e. the traditional PG algorithm; (2) k = 5, s = 5, a simple PCPG version without overlapping parts; (3) k = 5, s = 1, which has 4 overlapping time steps between adjacent computations. Here, we use CER and WER as our evaluation metrics; '↑' denotes that larger is better while '↓' denotes that lower is better. The kernel weights are set to [1/5, 1/5, 1/5, 1/5, 1/5] by default.
In this paper, we use the common cross-entropy loss (Eq. (3)) based seq2seq model as our baseline. To evaluate the representation generated by our PCPG based method, we also perform classification-based experiments on LRW and LRW-1000 with a fully connected (FC) classifier as the back-end. Specifically, we first train the PCPG based seq2seq model on LRW and LRW-1000 with the final loss above, obtaining a video encoder and a GRU based decoder. We then fix the encoder, so as to fix the representation learned by the PCPG based seq2seq model, and train the FC based classifier on this representation, where the loss used for the FC classifier is defined as:
IV-C Ablation Study
By operating like a convolutional kernel, the proposed PCPG gains the properties of a receptive field (RF, controlled by the kernel size k) and overlapping parts (OP, controlled by the stride s). When there are no overlapping parts and the receptive field equals 1, the proposed PCPG based seq2seq model is exactly the traditional PG based seq2seq model. We summarize the detailed comparison of different choices of k and s in TABLE I, where '✗' for RF and OP means k = 1 and s = k, and '✓' for RF and OP means k > 1 and s < k respectively. k and s are set to 5 by default when they are not 1.
From TABLE I, we can see that the model trained with traditional RL, i.e. with no extra RF or OP (k = 1, s = 1), already performs better than the traditional cross-entropy based seq2seq baseline. This shows that RL is an effective strategy to improve the seq2seq model for lip-reading. When both RF and OP are enabled (k = 5, s = 1), the best performance is obtained, which proves the effectiveness of the proposed PCPG for robust lip-reading. Besides the comparison with the baseline, we also present the loss curves of the different settings during training on the different benchmarks in Fig. 6 (a), (b), (c). We can see that PCPG makes the model learn more stably and converge faster than the other two baselines.
TABLE I: Comparison of different RF and OP settings (CER / WER, %, lower is better), one group of rows per benchmark:

| RF | OP | Setting  | CER  | WER  |
| ✗  | ✗  | k=1, s=1 | 7.6  | 16.6 |
| ✓  | ✗  | k=5, s=5 | 6.9  | 15.3 |
| ✓  | ✓  | k=5, s=1 | 5.9  | 12.3 |
| ✗  | ✗  | k=1, s=1 | 15.2 | 24.8 |
| ✓  | ✗  | k=5, s=5 | 15.0 | 26.5 |
| ✓  | ✓  | k=5, s=1 | 14.1 | 22.7 |
| ✗  | ✗  | k=1, s=1 | 51.4 | 67.7 |
| ✓  | ✗  | k=5, s=5 | 51.6 | 67.2 |
| ✓  | ✓  | k=5, s=1 | 51.3 | 66.9 |
TABLE III: Word accuracy (%) with a fixed encoder (FE) or trainable encoder (TE) plus a trained classifier (TC):

| Dataset  | Setting   | Accuracy |
| LRW      | FE and TC | 82.4     |
| LRW      | TE and TC | 83.5     |
| LRW-1000 | FE and TC | 38.5     |
| LRW-1000 | TE and TC | 38.7     |
IV-D Effect of the Kernel Size in PCPG
Different kernel sizes correspond to different receptive fields when computing the reward at each time step. In theory, different receptive field sizes should have different effects on the final lip-reading performance. To explore the impact of the kernel size k, we perform several experiments on the sentence-level benchmark GRID, because sentence-level samples are long enough to test the effects of different k. In this part, we keep s at 1 so that the model has more overlapping parts. To make the pseudo-convolutional kernel pay equal attention to the reward at each time step, the k-dimensional kernel weights are set to [1/k, 1/k, ..., 1/k]. The results are shown in TABLE II(a). As shown, we obtain the best result when k = 3. When k is too small (such as k = 1), the context considered when computing the reward at each time step is not large enough, so there is an improvement, but not a large one. When k is too big, the context considered at each time step is so large that it may cover up, and thus weaken, the contribution of the current time step. But no matter which value k takes, the performance is always better than the baseline whenever k > 1.
IV-E Effect of the Kernel Weight in PCPG
Different choices of the kernel weights mean that different weight values are put on the contextual time steps when computing the reward at each target time step. In these experiments, we fix k to the above-optimized value of 3 and keep s at 1 to evaluate and compare the effect of different kernel weights on the sentence-level benchmark GRID, as shown in TABLE II(b). From this table, we can easily see that the performance with different kernel weights stays at almost the same level, with only a marginal gap between the best and the worst, which shows the robustness of the PCPG based seq2seq model with respect to the weight values.
IV-F Evaluation of the Learned Representation
To evaluate the representation learned by the PCPG based seq2seq model, we fix the video encoder with the same parameters as the learned PCPG based seq2seq model, and then train an FC based classifier as a sequence-level classification back-end on the word-level lip-reading datasets LRW and LRW-1000. The results are shown in TABLE III. From this table, we can see that there is a clear improvement when we use the representation learned by the PCPG based seq2seq model, and the improvement becomes more obvious when the representation and the FC based classifier are trained together. We also compare with other sequence-level classification based methods in TABLE IV, which again clearly shows the effectiveness of our method.
IV-G Comparison with State-of-the-Art Methods
Besides the above thorough comparison and analysis of the proposed PCPG based seq2seq model in different settings, we also compare with other related state-of-the-art methods, including both sentence-level and word-level methods. Please note that, for a fair comparison, we do not include methods that use large-scale extra data beyond the published dataset itself. As shown in TABLE V, our proposed method achieves state-of-the-art performance on the decoding tasks, whether with or without beam search (BM). As shown in TABLE IV, our method also achieves a significant improvement on the classification tasks, especially on LRW-1000, where the improvement is about 0.5 percentage points, a margin that is always hard to obtain given the difficulty of this dataset. These results clearly prove the effectiveness of the proposed PCPG module and the PCPG based seq2seq model for lip-reading.
V Conclusion

In this work, we proposed a pseudo-convolutional policy gradient (PCPG) based seq2seq model for the lip-reading task. Inspired by the principle of the convolutional operation, we extend the policy gradient with a receptive field and overlapping parts in the training process. We perform a thorough evaluation on both word-level and sentence-level datasets. The PCPG matches or outperforms the state-of-the-art results, which verifies its advantages. Moreover, the PCPG can also be applied to other seq2seq tasks, such as machine translation, automatic speech recognition, image captioning, video captioning and so on.
This work is partially supported by National Key R&D Program of China (No. 2017YFA0700804) and National Natural Science Foundation of China (No. 61702486, 61876171).
References

- (2018) Deep audio-visual speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence.
- (2017) Deep learning for lip reading. Poster presentation, University of Oxford.
- (2018) Deep lip reading: a comparison of models and an online application. In Proceedings of Interspeech, abs/1806.06053.
- (2017) LipNet: end-to-end sentence-level lipreading.
- (2005) METEOR: an automatic metric for MT evaluation with improved correlation with human judgments. In IEEvaluation@ACL.
- (2017) Safe model-based reinforcement learning with stability guarantees. In NIPS.
- (2016) Lip reading sentences in the wild. pp. 3444–3453.
- (2016) Lip reading in the wild. In ACCV.
- (2017) Lip reading in profile. In BMVC.
- (2018) Learning to lip read words by watching videos. Computer Vision and Image Understanding 173, pp. 76–85.
- (2006) An audio-visual corpus for speech perception and automatic speech recognition. The Journal of the Acoustical Society of America, pp. 2421–2424.
- (2016) Dynamic stream weighting for turbo-decoding-based audiovisual ASR. In INTERSPEECH.
- (2015) Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778.
- (2016) Temporal multimodal learning in audiovisual speech recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3574–3582.
- (2018) Control-theoretic analysis of smoothness for stability-certified reinforcement learning. In IEEE Conference on Decision and Control (CDC), pp. 6840–6847.
- (2009) Comparing visual features for lipreading. In AVSP.
- (2004) ROUGE: a package for automatic evaluation of summaries. In ACL.
- (2018) Improving stability in deep reinforcement learning with weight averaging.
- (2001) Bleu: a method for automatic evaluation of machine translation. In ACL.
- (2018) End-to-end audiovisual speech recognition. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6548–6552.
- Sequence level training with recurrent neural networks. CoRR abs/1511.06732.
- Self-critical sequence training for image captioning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1179–1195.
- (2018) Pushing the boundaries of audiovisual word recognition using residual networks and LSTMs. Computer Vision and Image Understanding 176–177, pp. 22–32.
- (2017) Combining residual networks with LSTMs for lipreading. arXiv abs/1703.04105.
- (2014) CIDEr: consensus-based image description evaluation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4566–4575.
- Lipreading with long short-term memory. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6115–6119.
- (2019) Multi-grained spatio-temporal modeling for lip-reading. arXiv abs/1908.11618.
- (2016) Sequence-to-sequence learning as beam-search optimization. In EMNLP.
- (2018) LRW-1000: a naturally-distributed large-scale benchmark for lip reading in the wild. In IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019), pp. 1–8.
- (2014) A review of recent advances in visual speech decoding. Image and Vision Computing 32, pp. 590–605.