A Multimodal Machine Learning Framework for Teacher Vocal Delivery Evaluation

07/15/2021 ∙ by Hang Li, et al.

The quality of vocal delivery is a key indicator of teacher enthusiasm, which is widely accepted to be connected to overall course quality. However, existing evaluation of vocal delivery is mainly conducted through manual ratings, which face two core challenges: subjectivity and time consumption. In this paper, we present a novel machine learning approach that utilizes pairwise comparisons and a multimodal orthogonal fusing algorithm to generate large-scale objective evaluation results of teacher vocal delivery in terms of fluency and passion. We collect two datasets from real-world education scenarios, and the experimental results demonstrate the effectiveness of our algorithm. To encourage reproducible results, we make our code publicly available at <https://github.com/tal-ai/ML4VocalDelivery.git>.


1 Introduction

It has been widely accepted in recent research that teacher enthusiasm is highly correlated with high-quality instruction, which provides students with learning opportunities and fosters their learning and achievement [6, 13, 17, 11]. To evaluate teacher enthusiasm, multiple statistical algorithms that count and score different aspects of instruction behavior, e.g., vocal delivery and facial expressions, have been employed as basic indicators of enthusiastic teaching in their respective systems [1, 3, 4, 14, 9]. Among these studies, vocal delivery is one of the most commonly accepted indicators due to its irreplaceability in student-teacher communication. Therefore, we focus on improving existing vocal delivery evaluation (VDE) via advanced machine learning algorithms.

Traditionally, VDE is conducted by human observers, and the evaluation results face two challenges: (1) subjectivity: human annotators may have different understandings of the evaluation rules; and (2) time consumption: manual vocal delivery evaluation requires annotators to examine the vocal samples multiple times. To address these two challenges, we propose a multimodal machine learning framework that conducts objective VDE in terms of both fluency and passion. The fluency indicator is designed to detect poor articulation between words and topics, and the passion indicator is utilized to evaluate variations in pitch, volume, and speed.

In summary, the contributions of this work are: (1) we alleviate the subjectivity problem in current VDE by utilizing pairwise comparisons; (2) we propose a multimodal orthogonal fusing algorithm, which fuses embeddings from different unimodal pre-trained models in an informative way; and (3) we demonstrate that our proposed method provides accurate and objective evaluation results.

2 Label Generation via Pairwise Comparison

In our framework, in order to obtain reliable training labels for VDE, we design a two-step label generation algorithm based on pairwise comparison, which eliminates the discrepancies caused by ambiguous anchor descriptions of subjective perceptions such as passion [2, 12, 15].

Anchor Selection We collect a moderate-size unlabeled dataset via uniform sampling. After that, for each pair of samples $(x_i, x_j)$, we ask human annotators to judge which sample better fits the requirement (e.g., "Is sample A more passionate than sample B?"). After collecting a sufficient number of these comparison results, we model the probability of choosing $x_i$ over $x_j$ with the Bradley-Terry model [2], i.e., $P(x_i \succ x_j) = \frac{e^{s_i/\sigma}}{e^{s_i/\sigma} + e^{s_j/\sigma}}$, where $s_i$ is the estimated ranking score of $x_i$ and $\sigma$ is the standard deviation of the score distribution. Following the prior work of Tsukida and Gupta [16], the ranking scores are obtained through maximum a posteriori estimation. After that, we carefully choose $K$ anchor samples at percentiles that represent the ranking score distribution.
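As a concrete illustration, the following is a minimal sketch of the MAP estimation step, assuming a logistic Bradley-Terry parameterization with a Gaussian prior on the scores; all function and variable names here are ours, not from the paper's released code.

```python
# Minimal sketch: MAP estimation of Bradley-Terry ranking scores
# from pairwise judgments (illustrative, assumes a logistic model).
import numpy as np

def fit_bradley_terry(n_items, comparisons, n_iters=500, lr=0.1, prior=0.01):
    """comparisons: list of (winner_idx, loser_idx) pairs."""
    s = np.zeros(n_items)
    for _ in range(n_iters):
        grad = -prior * s  # gradient of the Gaussian log-prior (MAP term)
        for w, l in comparisons:
            p = 1.0 / (1.0 + np.exp(-(s[w] - s[l])))  # P(w beats l)
            grad[w] += 1.0 - p   # push the winner's score up
            grad[l] -= 1.0 - p   # and the loser's score down
        s += lr * grad
    return s

# Toy usage: sample 0 wins every comparison, sample 2 loses every one.
scores = fit_bradley_terry(4, [(0, 1), (0, 2), (1, 2), (3, 2), (0, 3)])
anchors = np.percentile(scores, [25, 75])  # K = 2 anchors by percentile
```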

Comparison Labeling Once we obtain the anchor samples, we label the remaining samples based on their comparison results with the selected anchors. More specifically, for a new sample $x$, we first conduct pairwise comparisons between $x$ and each anchor in the anchor set $\mathcal{A}$. Then, similar to the ranking score generation in the anchor selection step, we learn the Bradley-Terry model from these comparison results and obtain the corresponding ranking score $s_x$. Finally, we compare $s_x$ with the ranking scores $\{s_a\}_{a \in \mathcal{A}}$ of the selected anchors, and the final label is obtained by counting the number of anchors ordered after $x$, i.e., $y = \sum_{a \in \mathcal{A}} \mathbb{1}(s_x > s_a)$, where $\mathbb{1}(\cdot)$ is an indicator function.
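Under the same notational assumptions as the sketch above, the labeling step then reduces to counting the anchors ranked below the new sample:

```python
# Sketch of the comparison-labeling step: the final label counts
# how many anchors rank below the new sample's score.
import numpy as np

def label_against_anchors(sample_score, anchor_scores):
    # y = sum_a 1(s_x > s_a): 0 = below all anchors, len(anchors) = above all
    return int(np.sum(sample_score > np.asarray(anchor_scores)))

print(label_against_anchors(0.9, [-0.5, 0.4]))  # -> 2 (above both anchors)
```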

3 Multimodal Learning

Figure 1: The proposed multimodal learning framework.

Traditional evaluation of vocal delivery usually involves complicated considerations of multiple facets of speech [1, 3, 4]. To make full use of this information in each speech sample, we propose a multimodal learning framework with three modules: a language encoder, an audio encoder, and a multimodal fusion block. The overall framework architecture is shown in Fig. 1.

Language Encoder

Pre-trained language models such as BERT [5], RoBERTa [10], and BART [8] have demonstrated strong capabilities in capturing semantic information. In our framework, we choose RoBERTa as our backbone model, which accepts text token embeddings combined with their corresponding position embeddings as inputs. Following prior research [10], we use the first token's output representation $h_t$ as the extracted semantic sentence embedding.
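For illustration, the following sketch extracts this first-token sentence embedding with the HuggingFace transformers library; the `roberta-base` checkpoint is an assumption, since the exact pre-trained weights are not specified here.

```python
# Sketch: first-token sentence embedding from a RoBERTa backbone.
import torch
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
encoder = RobertaModel.from_pretrained("roberta-base")

inputs = tokenizer("an example teacher utterance", return_tensors="pt")
with torch.no_grad():
    h_t = encoder(**inputs).last_hidden_state[:, 0]  # first-token (<s>) output
print(h_t.shape)  # torch.Size([1, 768])
```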

Audio Encoder

Similar to the language encoder, we use pretrained audio neural networks (PANNs) [7] as our backbone module to extract acoustic features. The inputs of the audio encoder are frame-level low-level descriptors, and the output is a single vector $h_a$ that summarizes the acoustic features of the entire utterance.
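A hedged sketch of this step, assuming the companion `panns_inference` package with its default CNN14 checkpoint and 32 kHz mono input; the paper's exact feature pipeline may differ.

```python
# Sketch: pooling an utterance into a single acoustic embedding with PANNs.
import numpy as np
from panns_inference import AudioTagging

at = AudioTagging(checkpoint_path=None, device="cpu")  # default CNN14 checkpoint
audio = np.random.randn(1, 32000 * 5).astype(np.float32)  # 5 s dummy clip @ 32 kHz
_, h_a = at.inference(audio)  # h_a: (1, 2048) utterance-level embedding
```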

Orthogonal Fusion Multimodal learning aims to exhibit and capture complementary information from different modalities; therefore, we propose an orthogonal fusion method that enforces representations from different modalities to be dissimilar. Specifically, we design an additional orthogonal regularization penalty $\mathcal{L}_{\perp} = \cos^2\!\left(W_t h_t,\, W_a h_a\right)$, where $W_t$ and $W_a$ are trainable parameters that project $h_t$ and $h_a$ into the same hidden space, respectively. In the final objective function, we use the fused representation $h = [W_t h_t; W_a h_a]$ to optimize the VDE classification loss together with the regularization term $\mathcal{L}_{\perp}$.
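To make the fusion block concrete, here is an illustrative PyTorch sketch assuming the squared-cosine form of the penalty and concatenation as the fusion operator described above; module names, dimensions, and the weighting coefficient are our own choices, not taken from the released code.

```python
# Sketch of the orthogonal fusion block (illustrative, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class OrthogonalFusion(nn.Module):
    def __init__(self, text_dim, audio_dim, hidden_dim, n_classes):
        super().__init__()
        self.proj_t = nn.Linear(text_dim, hidden_dim)   # W_t
        self.proj_a = nn.Linear(audio_dim, hidden_dim)  # W_a
        self.classifier = nn.Linear(2 * hidden_dim, n_classes)

    def forward(self, h_t, h_a):
        z_t, z_a = self.proj_t(h_t), self.proj_a(h_a)
        # Squared-cosine penalty pushes the two projections toward orthogonality
        l_orth = F.cosine_similarity(z_t, z_a, dim=-1).pow(2).mean()
        h = torch.cat([z_t, z_a], dim=-1)  # fused representation [W_t h_t; W_a h_a]
        return self.classifier(h), l_orth

# One training step: VDE classification loss plus the regularization term.
model = OrthogonalFusion(text_dim=768, audio_dim=2048, hidden_dim=256, n_classes=2)
logits, l_orth = model(torch.randn(8, 768), torch.randn(8, 2048))
loss = F.cross_entropy(logits, torch.randint(0, 2, (8,))) + 0.1 * l_orth
loss.backward()
```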

4 Experiments

We evaluate teacher vocal delivery in two aspects: fluency and passion. We collect two datasets from real-world K-12 education scenarios: (1) the Passion dataset contains 18,000 teacher speech samples extracted from a third-party online class platform; and (2) the Fluency dataset includes 15,000 utterances, each labeled based on its fluency level. The sample labels for these two datasets are obtained through the pairwise comparisons discussed in Section 2. We choose two anchors, i.e., set $K = 2$, which represent the 25th and 75th percentiles. Hence, samples are split into three groups: high, medium, and low. In terms of model training, we exclude samples from the medium group to reduce ambiguity. 1,000 utterances are randomly sampled from each dataset and used as test data. Additionally, we perform a 20%/80% split over the remaining data to generate the validation and train sets. We use accuracy and macro F1-score as our evaluation metrics.
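As a small illustration of this grouping, the sketch below assumes ranking scores produced as in Section 2; the dataset size and random scores are placeholders.

```python
# Sketch: percentile-based grouping into high / medium / low training labels.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(size=15000)           # ranking scores for one dataset
lo, hi = np.percentile(scores, [25, 75])  # two anchors, i.e., K = 2
groups = np.digitize(scores, [lo, hi])    # 0 = low, 1 = medium, 2 = high
keep = groups != 1                        # exclude the ambiguous medium group
labels = (groups[keep] == 2).astype(int)  # binary high-vs-low training labels
```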

To validate the pairwise-comparison algorithm, we ask three experts to verify the generated labels. From the results, we find that more than 95% of the positively and negatively labeled samples are accepted by at least two experts. To assess the effectiveness of our approach, we carefully choose the following baselines: (1) RoBERTa: a strong large-scale pre-trained language model that only uses text as input; (2) PANNs: a unimodal pre-trained model that only uses audio signals as input; and (3) Concat: a multimodal model that uses both pre-trained RoBERTa and PANNs to extract features and simply concatenates the representations of the different modalities for classification. The detailed results on both the Fluency and Passion datasets are shown in Table 1.

From Table 1, we have several observations: (1) comparing RoBERTa and PANNs on the Fluency dataset, we find that language information is more important than audio for fluency evaluation; (2) PANNs outperforms RoBERTa on the Passion dataset, which is consistent with our expectation that acoustic features are better suited to evaluating the passion of an utterance; (3) Concat outperforms the two unimodal baselines by a large margin, which indicates the effectiveness of multimodal learning; and (4) comparing Ours to Concat, we find that the orthogonal fusion further improves performance.

Task       PANNs          RoBERTa        Concat         Ours
           Acc    F1      Acc    F1      Acc    F1      Acc    F1
Passion    0.775  0.723   0.763  0.714   0.808  0.758   0.846  0.805
Fluency    0.654  0.628   0.788  0.777   0.838  0.828   0.872  0.862
Table 1: Model performance on the two datasets. Acc and F1 denote accuracy and macro F1-score, respectively.

5 Conclusion

In this work, we present an efficient machine learning approach to evaluating teacher vocal delivery in online classes. Experiments demonstrate that our framework achieves accurate evaluations in terms of both fluency and passion. In the future, we would like to conduct further research on other facets of teacher enthusiasm.

Acknowledgments

This work was supported in part by the National Key R&D Program of China under Grant No. 2020AAA0104500, and in part by the Beijing Nova Program (Z201100006820068) from the Beijing Municipal Science & Technology Commission.

References

  • [1] E. M. Bettencourt, M. H. Gillett, M. D. Gall, and R. E. Hull (1983) Effects of teacher enthusiasm training on student on-task behavior and achievement. American Educational Research Journal 20 (3), pp. 435–450. Cited by: §1, §3.
  • [2] R. A. Bradley and M. E. Terry (1952) Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika 39 (3/4), pp. 324–345. Cited by: §2, §2.
  • [3] F. J. Brigham, T. E. Scruggs, and M. A. Mastropieri (1992) Teacher enthusiasm in learning disabilities classrooms: effects on learning and behavior. Learning Disabilities Research & Practice. Cited by: §1, §3.
  • [4] M. L. Collins (1978) Effects of enthusiasm training on preservice elementary teachers. Journal of Teacher Education 29 (1), pp. 53–57. Cited by: §1, §3.
  • [5] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 4171–4186. Cited by: §3.
  • [6] K. A. Feldman (2007) Identifying exemplary teachers and teaching: evidence from student ratings. In The scholarship of teaching and learning in higher education: An evidence-based perspective, pp. 93–143. Cited by: §1.
  • [7] Q. Kong, Y. Cao, T. Iqbal, Y. Wang, W. Wang, and M. D. Plumbley (2020) PANNs: large-scale pretrained audio neural networks for audio pattern recognition. IEEE/ACM Transactions on Audio, Speech, and Language Processing 28, pp. 2880–2894. Cited by: §3.
  • [8] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer (2020) BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7871–7880. Cited by: §3.
  • [9] H. Li, Z. Wang, J. Tang, W. Ding, and Z. Liu (2020) Siamese neural networks for class activity detection. In International Conference on Artificial Intelligence in Education, pp. 162–167. Cited by: §1.
  • [10] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov (2019) RoBERTa: a robustly optimized BERT pretraining approach. CoRR abs/1907.11692. Cited by: §3.
  • [11] Z. Liu, G. Xu, T. Liu, W. Fu, Y. Qi, W. Ding, Y. Song, C. Guo, C. Kong, S. Yang, et al. (2020) Dolphin: a spoken language proficiency assessment system for elementary education. In Proceedings of The Web Conference 2020, pp. 2641–2647. Cited by: §1.
  • [12] L. Maystre and M. Grossglauser (2015) Fast and accurate inference of plackett-luce models. Technical report Cited by: §2.
  • [13] N. T. Moulding (2010) Intelligent design: student perceptions of teaching and learning in large social work classes. Higher Education Research & Development 29 (2), pp. 151–165. Cited by: §1.
  • [14] H. G. Murray (1983) Low-inference classroom teaching behaviors and student ratings of college teaching effectiveness.. Journal of educational psychology 75 (1), pp. 138. Cited by: §1.
  • [15] R. L. Plackett (1975) The analysis of permutations. Journal of the Royal Statistical Society: Series C (Applied Statistics) 24 (2), pp. 193–202. Cited by: §2.
  • [16] K. Tsukida and M. R. Gupta (2011) How to analyze paired comparison data. Technical report, University of Washington, Department of Electrical Engineering. Cited by: §2.
  • [17] M. Zeidner (2007) Test anxiety in educational contexts: concepts, findings, and future directions. In Emotion in education, pp. 165–184. Cited by: §1.