TY - GEN
T1 - Selective Mutual Learning
T2 - 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022
AU - Tan, Ha Minh
AU - Vu, Duc Quang
AU - Lee, Chung Ting
AU - Li, Yung-Hui
AU - Wang, Jia Ching
N1 - Publisher Copyright:
© 2022 IEEE
PY - 2022
Y1 - 2022
N2 - Mutual learning, an idea related to knowledge distillation, trains a group of lightweight networks from scratch that simultaneously learn and share knowledge to perform a task together during training. In this paper, we propose a novel mutual learning approach, namely selective mutual learning, a simple yet effective way to boost network performance for speech separation. The selective mutual learning method uses two networks that, like a pair of friends, learn from and share knowledge with each other. In particular, high-confidence predictions are used to guide the other network, while low-confidence predictions are ignored. This removes poor predictions from the knowledge-sharing process. Experimental results show that, with the same network architecture, the proposed selective mutual learning method significantly improves separation performance compared with existing training strategies, including independent training, knowledge distillation, and mutual learning.
AB - Mutual learning, an idea related to knowledge distillation, trains a group of lightweight networks from scratch that simultaneously learn and share knowledge to perform a task together during training. In this paper, we propose a novel mutual learning approach, namely selective mutual learning, a simple yet effective way to boost network performance for speech separation. The selective mutual learning method uses two networks that, like a pair of friends, learn from and share knowledge with each other. In particular, high-confidence predictions are used to guide the other network, while low-confidence predictions are ignored. This removes poor predictions from the knowledge-sharing process. Experimental results show that, with the same network architecture, the proposed selective mutual learning method significantly improves separation performance compared with existing training strategies, including independent training, knowledge distillation, and mutual learning.
KW - Monophonic source separation
KW - Supervised speech separation
KW - Time domain audio separation
UR - http://www.scopus.com/inward/record.url?scp=85131249186&partnerID=8YFLogxK
U2 - 10.1109/ICASSP43922.2022.9746022
DO - 10.1109/ICASSP43922.2022.9746022
M3 - Conference contribution
AN - SCOPUS:85131249186
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 3678
EP - 3682
BT - 2022 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 23 May 2022 through 27 May 2022
ER -