Recognition of emotional content in music is an issue that has recently attracted attention. Music received by live applications is often corrupted by noise, which tends to reduce an application's recognition rate. This study proposes a robust music emotion recognition system for live applications. The proposed system consists of two major parts: subspace-based noise suppression and a hierarchical sparse representation classifier, which is built on sparse coding and a sparse representation classifier (SRC). The music signal is first enhanced by fast subspace-based noise suppression. A dictionary is then constructed from nine classes of emotion, and a vector of coefficients is obtained by sparse coding. This vector can be divided into nine parts, each of which models a specific emotional class of the signal. Because the proposed descriptor can provide emotional content analysis at different resolutions for music emotion recognition, this work regards the vectors of coefficients as feature representations. Finally, a sparse representation based classification method is employed to classify music into four emotional classes. The experimental results confirm the highly robust performance of the proposed system in recognizing emotion in live music.
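The core classification step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the dictionary here is random rather than learned from emotional music, the sparse coding is done with a generic ISTA solver for the lasso, and all sizes and parameters are assumptions chosen for illustration. It shows the general SRC idea the abstract names: sparse-code a signal over a dictionary whose atoms are grouped by class, then assign the signal to the class whose atoms best reconstruct it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 9 emotion classes, 20 atoms per class, 50-dim signals.
N_CLASSES, ATOMS_PER_CLASS, DIM = 9, 20, 50

# Random unit-norm dictionary standing in for learned emotion atoms.
D = rng.standard_normal((DIM, N_CLASSES * ATOMS_PER_CLASS))
D /= np.linalg.norm(D, axis=0)
labels = np.repeat(np.arange(N_CLASSES), ATOMS_PER_CLASS)

def sparse_code(y, D, lam=0.05, n_iter=500):
    """ISTA: approximately minimize 0.5*||y - D x||^2 + lam*||x||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ x - y)              # gradient of the quadratic term
        x = x - g / L                      # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

def src_predict(y, D, labels):
    """Assign y to the class with the smallest class-wise reconstruction residual."""
    x = sparse_code(y, D)
    residuals = []
    for c in range(N_CLASSES):
        xc = np.where(labels == c, x, 0.0)  # keep only class-c coefficients
        residuals.append(np.linalg.norm(y - D @ xc))
    return int(np.argmin(residuals))

# Synthetic test signal: a sparse combination of class-3 atoms plus mild noise.
true_class = 3
idx = np.flatnonzero(labels == true_class)[:3]
y = D[:, idx] @ np.array([1.0, 0.8, -0.6]) + 0.01 * rng.standard_normal(DIM)

pred = src_predict(y, D, labels)
```

The partition of the coefficient vector by class label mirrors the abstract's "nine parts, each of which models a specific emotional class"; in the actual system those class-wise coefficient blocks also serve as the feature representation fed to the final four-class SRC stage.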