Multimodal learning for classroom activity detection

H Li, Y Kang, W Ding, S Yang, S Yang, GY Huang, Z Liu
ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020 - ieeexplore.ieee.org
Classroom activity detection (CAD) focuses on accurately classifying whether the teacher or a student is speaking and on recording the length of individual utterances during a class. A CAD solution helps teachers get instant feedback on their pedagogical instruction. This greatly improves educators' teaching skills and hence leads to better student achievement. However, CAD is very challenging because (1) the CAD model needs to generalize well across different teachers and students; (2) data from the vocal and language modalities has to be fused wisely so that they complement each other; and (3) the solution should not rely heavily on additional recording devices. In this paper, we address the above challenges with a novel attention-based neural framework. Our framework not only extracts both speech and language information, but also utilizes an attention mechanism to capture long-term semantic dependencies. Our framework is device-free and is able to take any classroom recording as input. The proposed CAD learning framework is evaluated in two real-world education applications. The experimental results demonstrate the benefits of our approach to learning an attention-based neural network from classroom data with different modalities, and show that our approach outperforms state-of-the-art baselines in terms of various evaluation metrics.
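The abstract does not give the exact architecture, so the following is only a minimal sketch of the general idea it describes: per-utterance speech and language features are projected into a shared space, fused, and passed through self-attention to capture long-term semantic dependencies before classifying the speaker. All layer sizes, feature dimensions, and the additive fusion strategy here are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class MultimodalCAD(nn.Module):
    """Illustrative attention-based fusion of speech and language features."""
    def __init__(self, audio_dim=40, text_dim=300, hidden=128, n_classes=2):
        super().__init__()
        # Project each modality into a shared hidden space before fusing.
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.text_proj = nn.Linear(text_dim, hidden)
        # Self-attention over the utterance sequence models long-term
        # semantic dependencies across the class session.
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)  # teacher vs. student

    def forward(self, audio_feats, text_feats):
        # audio_feats: (batch, seq, audio_dim), e.g. per-utterance acoustic features
        # text_feats:  (batch, seq, text_dim), e.g. ASR transcript embeddings
        fused = torch.tanh(self.audio_proj(audio_feats) + self.text_proj(text_feats))
        attended, _ = self.attn(fused, fused, fused)
        return self.classifier(attended)  # per-utterance speaker logits

# Example: classify 20 utterances from one recording (random stand-in features).
logits = MultimodalCAD()(torch.randn(1, 20, 40), torch.randn(1, 20, 300))

Because the fusion and attention operate only on features extracted from the recording itself, such a design needs no extra recording hardware, which matches the device-free claim in the abstract.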