Compressive Sensing-Based Speech Enhancement

Jia-Ching Wang, Yuan-Shan Lee, Chang-Hong Lin, Shu-Fan Wang, Chih-Hao Shih, Chung-Hsien Wu

Research output: Contribution to journal › Article › peer-review


Abstract

This study proposes a speech enhancement method based on compressive sensing. The main procedures of the proposed method are performed in the frequency domain. First, an overcomplete dictionary is constructed from trained speech frames. The atoms of this redundant dictionary are spectrum vectors trained by the K-SVD algorithm to ensure the sparsity of the dictionary. For a noisy speech spectrum, formant detection and a quasi-SNR criterion are used to determine whether a frequency bin in the spectrogram is reliable, and a corresponding mask is designed. The reliable components extracted by the mask are regarded as partial observations of the speech spectrum, and a measurement matrix is constructed accordingly. The problem can therefore be treated as a compressive sensing problem. The K atoms of a K-sparse speech spectrum are identified using the orthogonal matching pursuit (OMP) algorithm. Because these K atoms span the speech signal subspace, the noise projected onto them is removed by multiplying the noisy spectrum by the optimized gain corresponding to each selected atom. Experimental comparisons with baseline methods demonstrate the superiority of the proposed method.
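To make the recovery step concrete, below is a minimal NumPy sketch of compressive-sensing recovery from mask-selected spectral bins via orthogonal matching pursuit. It is not the authors' implementation: the dictionary is random instead of K-SVD-trained, the reliability mask is simulated rather than derived from formant detection and a quasi-SNR criterion, and the per-atom optimized gains are replaced by a plain least-squares reconstruction on the selected atoms. All names and sizes (omp_masked, n_bins, n_atoms, K, the noise level) are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of masked-observation OMP.
import numpy as np


def omp_masked(D, y_masked, mask, K):
    """Recover a K-sparse code for a spectrum frame from its reliable bins.

    D        : (n_bins, n_atoms) overcomplete dictionary of spectrum atoms
    y_masked : (n_reliable,) reliable spectral observations (mask-selected bins)
    mask     : boolean (n_bins,) reliability mask; True marks a reliable bin
    K        : sparsity level (number of atoms to select)
    """
    Phi = D[mask, :]                      # measurement matrix: masked dictionary rows
    residual = y_masked.copy()
    support = []
    for _ in range(K):
        # Pick the atom most correlated with the current residual.
        corr = np.abs(Phi.T @ residual)
        corr[support] = 0.0               # do not reselect atoms
        support.append(int(np.argmax(corr)))
        # Least-squares fit of the observations on the selected atoms.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y_masked, rcond=None)
        residual = y_masked - Phi[:, support] @ coef
    return support, coef


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_bins, n_atoms, K = 257, 512, 8      # illustrative sizes, not from the paper

    # Stand-in dictionary; in the paper this would come from K-SVD training.
    D = rng.standard_normal((n_bins, n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)

    # Synthetic "clean" frame that is exactly K-sparse in D, plus noise.
    true_support = rng.choice(n_atoms, K, replace=False)
    clean = D[:, true_support] @ rng.uniform(0.5, 1.5, K)
    noisy = clean + 0.1 * rng.standard_normal(n_bins)

    # Reliability mask; here it is simulated by keeping roughly 60% of the bins.
    mask = rng.random(n_bins) < 0.6

    support, coef = omp_masked(D, noisy[mask], mask, K)
    enhanced = D[:, support] @ coef       # projection onto the selected signal subspace
    print("recovered atoms overlap:", len(set(support) & set(true_support)), "/", K)
```

In this simplified setting, the masked dictionary rows play the role of the measurement matrix, so the reliable bins act as compressive measurements from which the K-sparse spectral code, and hence the enhanced spectrum, is reconstructed.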

Original language: English
Pages (from-to): 2122-2131
Number of pages: 10
Journal: IEEE/ACM Transactions on Audio Speech and Language Processing
Volume: 24
Issue number: 11
DOIs
State: Published - Nov 2016

Keywords

  • Compressive sensing (CS)
  • denoising
  • sparse representation
  • speech enhancement
