Speech emotion recognition has recently emerged as an intriguing yet challenging area of research in human behavior analysis. Its goal is to classify people's emotional states from their speech. Current work in this area focuses on the effectiveness of automatic speech emotion classifiers and on improving classification performance in practical applications such as telecommunication services, where identifying positive emotions (e.g., happiness and surprise) and negative emotions (e.g., sadness, anger, disgust, and fear) can supply valuable information to platform users and customers. In this paper, the complex task of identifying positive and negative emotions in human voice data is investigated using deep learning techniques. Five open emotional speech datasets are used to train multi-level models for positive and negative emotion recognition. The experimental results show that our model achieves good performance on both positive and negative emotional speech data.