This study proposes a novel multi-label music emotion recognition (MER) system. Emotions are difficult to define precisely in the real world because emotion classes typically overlap. Accordingly, this study proposes an MER system based on the hierarchical Dirichlet process mixture model (HDPMM), whose components can be shared among the models of individual emotions. The HDPMM is further improved by adding a discriminant factor, inspired by linear discriminant analysis, to the proposed system. The proposed system represents each emotion by weighting coefficients over a global set of shared components. In addition, three methods are proposed to compute the weighting coefficients of test data, and these coefficients are used to determine whether the test data contain particular emotional content. In the tasks of music emotion annotation and retrieval, experimental results show that the proposed MER system outperforms state-of-the-art systems in terms of F-score and mean average precision.
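As a rough illustration only (not the authors' implementation), the idea of a global component pool shared across emotions, with each emotion represented by weighting coefficients over that pool, can be sketched with a truncated Dirichlet-process mixture. Here scikit-learn's `BayesianGaussianMixture` stands in for the HDPMM, and all data, labels, and variable names are synthetic assumptions for demonstration.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
features = rng.normal(size=(300, 8))             # synthetic pooled audio features
emotion_of_track = rng.integers(0, 3, size=300)  # synthetic multi-class emotion labels

# Fit one truncated DP mixture on the pooled data: its components form a
# global set that all emotion models share (a stand-in for the HDPMM).
dpmm = BayesianGaussianMixture(
    n_components=10,  # truncation level of the shared component pool
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(features)

# Per-emotion weighting coefficients: the average responsibility each
# shared component takes for that emotion's training data.
resp = dpmm.predict_proba(features)
weights = {e: resp[emotion_of_track == e].mean(axis=0) for e in range(3)}
```

Under this sketch, a test track could be scored against each emotion by comparing its own responsibility vector to the stored per-emotion weights; the paper's three coefficient-computation methods and the discriminant factor are not reproduced here.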