In this work, we propose a lightweight end-to-end deep learning model for music source separation. Time-domain deep learning models have been proposed for end-to-end audio source separation. However, these models are complex and difficult to deploy on devices with limited computing resources. They also require long input contexts to achieve adequate separation quality, which introduces long delays and makes them unsuitable for applications that require low latency. In the proposed model, Atrous Spatial Pyramid Pooling is used to reduce the number of parameters, and a receptive-field-preserving decoder is used to improve separation quality when the input context length is limited. Experimental results show that the proposed method outperforms previous methods while using 10% or fewer of their parameters.
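As background on the parameter-saving mechanism named above: Atrous Spatial Pyramid Pooling runs dilated (atrous) convolutions at several rates in parallel, so a small shared kernel size covers a large receptive field without adding weights. The following is a minimal 1-D NumPy sketch of the idea only; the dilation rates, kernel, and function names are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

def dilated_conv1d(x, kernel, rate):
    """1-D correlation with dilation `rate`, zero-padded to keep length."""
    k = len(kernel)
    span = (k - 1) * rate                      # receptive field minus one
    xp = np.pad(x, (span // 2, span - span // 2))
    return np.array([
        sum(kernel[j] * xp[i + j * rate] for j in range(k))
        for i in range(len(x))
    ])

def aspp_1d(x, kernel, rates=(1, 2, 4, 8)):
    """Parallel dilated branches stacked channel-wise (illustrative rates).

    Each branch reuses the same small kernel, so the parameter count per
    branch is len(kernel) regardless of rate, while the largest branch
    covers a receptive field of (len(kernel) - 1) * max(rates) + 1 samples.
    """
    return np.stack([dilated_conv1d(x, kernel, r) for r in rates])

x = np.arange(16, dtype=float)                 # toy waveform segment
out = aspp_1d(x, kernel=np.array([1.0, 0.0, -1.0]))
print(out.shape)                               # (4, 16): one channel per rate
```

The key trade-off this illustrates is that widening the dilation rate grows the temporal context each branch sees without growing the weight count, which is how ASPP can shrink a model relative to stacking ever-larger dense convolutions.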