Implementation of Sound Direction Detection and Mixed Source Separation in Embedded Systems

Jian Hong Wang, Phuong Thi Le, Weng Sheng Bee, Wenny Ramadha Putri, Ming Hsiang Su, Kuo Chen Li, Shih Lun Chen, Ji Long He, Tuan Pham, Yung-Hui Li, Jia Ching Wang

Research output: Contribution to journal › Article › peer-review

Abstract

In recent years, embedded system technologies and products for sensor networks and wearable devices that monitor people's activities and health have become a focus of the global IT industry. To enhance the speech recognition capabilities of wearable devices, this article presents the implementation of audio positioning and enhancement on embedded systems, using embedded algorithms for sound direction detection and mixed source separation. The two algorithms run on different platforms: direction detection on a TI TMS320C6713 DSK and mixed source separation on a Raspberry Pi 2. For mixed source separation, in the first experiment, the average signal-to-interference ratio (SIR) at distances of 1 m and 2 m was 16.72 and 15.76, respectively. In the second experiment, when evaluated with a speech recognizer, the algorithm improved speech recognition accuracy to 95%.
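As background on the evaluation metric: SIR compares the energy of the recovered target signal against the residual interference left after separation. The sketch below shows the standard SIR definition in Python; it is an illustration of the metric only, not the paper's exact evaluation pipeline, and the synthetic signals are assumptions for the example.

```python
import numpy as np

def sir_db(target, interference):
    """Signal-to-interference ratio in dB: target power over interference power."""
    p_t = np.sum(np.asarray(target, dtype=float) ** 2)
    p_i = np.sum(np.asarray(interference, dtype=float) ** 2)
    return 10.0 * np.log10(p_t / p_i)

# Synthetic example: a clean tone standing in for separated speech,
# plus low-level noise standing in for residual interference leakage.
rng = np.random.default_rng(0)
n = 8000
target = np.sin(2 * np.pi * 440 * np.arange(n) / n)   # stand-in for recovered speech
interference = 0.1 * rng.standard_normal(n)           # stand-in for leakage
print(f"SIR = {sir_db(target, interference):.2f} dB")
```

Higher SIR means less interference leaked into the separated channel, which is why the reported values dropping slightly from 1 m to 2 m indicates mildly harder separation at greater distance.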

Original language: English
Article number: 4351
Journal: Sensors (Switzerland)
Volume: 24
Issue number: 13
State: Published - Jul 2024

Keywords

  • embedded systems
  • hybrid sound source separation
  • position detection
  • signal-to-interference ratio (SIR)
  • speech recognition
