Code-switching is a common mode of language expression in which two or more languages are used within a single conversation. At present, research on code-switching speech recognition is still limited by the scarcity of training text, which degrades system performance. This paper uses a neural network to train a generator that produces code-switching text, expanding the corpus in order to improve Mandarin-English mixed-speech recognition accuracy. Our method trains a BERT-BiLSTM-CRF model on the Chinese and English text in the SEAME corpus and uses the model to predict code-switching positions, generating sentences that conform to the characteristics of this corpus. The experimental results show that the proposed method outperforms other methods.
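To illustrate the position-based generation idea, the sketch below substitutes English words at predicted switch positions. This is a hypothetical toy example, not the paper's implementation: in the actual system the switch tags would come from the trained BERT-BiLSTM-CRF tagger, whereas here the tags and the translation lexicon are hard-coded assumptions.

```python
# Toy illustration of generating a code-switched sentence from
# predicted switch positions. A real system would obtain the tags
# from a trained BERT-BiLSTM-CRF sequence labeler; here they are
# hard-coded for demonstration.

def generate_code_switched(tokens, tags, translations):
    """Replace tokens tagged 'SWITCH' with their English translations."""
    out = []
    for tok, tag in zip(tokens, tags):
        if tag == "SWITCH" and tok in translations:
            out.append(translations[tok])  # switch to English here
        else:
            out.append(tok)                # keep the original Chinese token
    return out

tokens = ["我", "今天", "有", "一个", "会议"]
tags   = ["O",  "O",   "O",  "O",    "SWITCH"]   # assumed tagger output
translations = {"会议": "meeting"}               # hypothetical lexicon

print(" ".join(generate_code_switched(tokens, tags, translations)))
# → 我 今天 有 一个 meeting
```

Sentences produced this way can then be filtered or scored so that only those matching the distributional characteristics of the target corpus are added to the training data.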