Speech separation underlies many important applications, such as automatic speech recognition, mobile telephony, hearing aids, and human-machine interaction. In recent years, deep neural networks have shown great potential for speech and music separation. In this paper, we propose a discriminative learning model for single-channel speech separation. First, deep clustering (DC) is trained to produce embedding features; these features are then fed to a deep neural network that directly isolates the component sources. On the TSP dataset, the proposed model achieves 10.06 dB SDR, 16.50 dB SIR, 11.48 dB SAR, 9.06 dB SI-SNRi, 88% STOI, and a PESQ score of 2.03.
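To make the two-stage pipeline concrete, the sketch below follows the classic deep-clustering recipe: embed each time-frequency bin of the mixture spectrogram, cluster the embeddings into speaker groups, and apply the resulting binary masks. This is an illustrative NumPy sketch, not the paper's implementation: `embed` is a random placeholder standing in for a trained embedding network, and the spectrogram is synthetic.

```python
import numpy as np

def kmeans(X, k=2, iters=20, seed=0):
    """Plain k-means, used to cluster time-frequency embeddings into sources."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

def embed(mag, D=20, seed=0):
    """Placeholder for a trained DC embedding network. A real model maps each
    |STFT| bin to a D-dim unit-norm embedding; here we emit random unit
    vectors purely so the pipeline runs end to end."""
    rng = np.random.default_rng(seed)
    E = rng.standard_normal((mag.size, D))
    return E / np.linalg.norm(E, axis=1, keepdims=True)

T, F = 50, 129                              # frames x frequency bins (toy sizes)
rng = np.random.default_rng(1)
mag = np.abs(rng.standard_normal((T, F)))   # toy mixture magnitude spectrogram

E = embed(mag)                              # stage 1: DC-style embeddings
labels = kmeans(E, k=2)                     # cluster bins into 2 speakers
masks = [(labels == j).reshape(T, F) for j in range(2)]
sources = [mag * m for m in masks]          # binary-masked source estimates
```

In the proposed model, the second stage replaces the k-means masking step with a deep neural network that consumes the embeddings and estimates the component sources directly.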