Music emotion classification using double-layer support vector machines

Yu Hao Chin, Chang Hong Lin, Ernestasia Siahaan, I. Ching Wang, Jia Ching Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

15 Scopus citations

Abstract

This paper presents a two-layer system for detecting emotion in music. The selected target emotion classes are angry, happy, sad, and peaceful. We present an audio feature set comprising the following types of audio features: dynamics, rhythm, timbre, pitch, and tonality. Using this feature set, a support vector machine (SVM) is trained for each target emotion class against calm emotion as the background class to obtain a hyperplane. With the four hyperplanes trained for angry, happy, sad, and peaceful, each test clip yields four decision values. These decision values are then used as new features to train a second-layer SVM that classifies the four target emotion classes. Experimental results show that our double-layer system performs well on music emotion classification.
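
The two-layer pipeline described in the abstract can be sketched as follows, assuming the acoustic features (dynamics, rhythm, timbre, pitch, tonality) have already been extracted for each clip. This is a minimal illustration using scikit-learn's SVC; the RBF kernel, default parameters, and helper names are assumptions rather than the authors' exact configuration.

```python
# Minimal sketch of a double-layer SVM emotion classifier (illustrative only).
import numpy as np
from sklearn.svm import SVC

TARGET_CLASSES = ["angry", "happy", "sad", "peaceful"]

def train_double_layer(X, y, X_calm):
    """X, y: feature vectors and labels for the four target emotions.
    X_calm: feature vectors for the background (calm) class."""
    # Layer 1: one binary SVM per target emotion, trained against calm clips.
    layer1 = {}
    for cls in TARGET_CLASSES:
        X_cls = X[y == cls]
        X_bin = np.vstack([X_cls, X_calm])
        y_bin = np.array([1] * len(X_cls) + [0] * len(X_calm))
        layer1[cls] = SVC(kernel="rbf").fit(X_bin, y_bin)

    # Each clip yields four decision values (one per hyperplane),
    # which form the new feature vector for the second layer.
    D = np.column_stack([layer1[c].decision_function(X) for c in TARGET_CLASSES])

    # Layer 2: a multiclass SVM over the 4-D decision-value features.
    layer2 = SVC(kernel="rbf").fit(D, y)
    return layer1, layer2

def predict_double_layer(layer1, layer2, X_test):
    D = np.column_stack([layer1[c].decision_function(X_test) for c in TARGET_CLASSES])
    return layer2.predict(D)
```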

Original language: English
Title of host publication: ICOT 2013 - 1st International Conference on Orange Technologies
Pages: 193-196
Number of pages: 4
DOIs
State: Published - 2013
Event: 1st International Conference on Orange Technologies, ICOT 2013 - Tainan, Taiwan
Duration: 12 Mar 2013 – 16 Mar 2013

Publication series

Name: ICOT 2013 - 1st International Conference on Orange Technologies

Conference

Conference: 1st International Conference on Orange Technologies, ICOT 2013
Country/Territory: Taiwan
City: Tainan
Period: 12/03/13 – 16/03/13

Keywords

  • Music emotion
  • Support vector machine
