Random forest is effective and accurate for both classification and regression problems, which constitute the majority of machine learning applications today. However, as data are generated explosively in this big-data era, many machine learning algorithms, including random forest, may struggle to keep and process all the required data in main memory. Instead, intensive data movement (i.e., data swapping) between the faster-but-smaller main memory and the slower-but-larger secondary storage may occur excessively and severely degrade performance. To address this challenge, emerging non-volatile memory (NVM) technologies are expected to substitute for traditional random access memory (RAM) and to build a larger-than-ever main memory space, owing to their higher cell density, lower power consumption, and read performance comparable to that of traditional RAM. Nevertheless, the limited write endurance of NVM cells and the read-write asymmetry of NVMs may still limit the feasibility of running machine learning algorithms directly on NVMs. This dilemma inspires this study to develop an NVM-friendly bagging strategy for the random forest algorithm, which trades the "randomness" of the sampled data for reduced data movement in the memory hierarchy without hurting prediction accuracy. The evaluation results show that the proposed design saves up to 72% of the write accesses on representative traces with nearly no degradation in prediction accuracy.
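The abstract does not specify how the bagging strategy is modified, but the underlying trade-off (sacrificing some sampling randomness to touch fewer memory blocks) can be illustrated with a minimal sketch. All names, parameters, and the block-restricted sampling policy below are assumptions for illustration, not the paper's actual design.

```python
import random

BLOCK_SIZE = 64          # records per memory block (assumed granularity)
N_RECORDS = 4096         # total records in the dataset
SAMPLE_SIZE = N_RECORDS  # bootstrap samples are conventionally dataset-sized

def standard_bagging(rng):
    """Classic bootstrap: sample record indices uniformly with replacement.
    A sample of this size typically touches nearly every block."""
    return [rng.randrange(N_RECORDS) for _ in range(SAMPLE_SIZE)]

def block_aware_bagging(rng, n_blocks_used=16):
    """Hypothetical NVM-friendly variant: first pick a few blocks,
    then sample records with replacement only inside them.
    Less randomness, but far fewer blocks moved between
    secondary storage and (NVM) main memory."""
    total_blocks = N_RECORDS // BLOCK_SIZE
    blocks = rng.sample(range(total_blocks), n_blocks_used)
    return [rng.choice(blocks) * BLOCK_SIZE + rng.randrange(BLOCK_SIZE)
            for _ in range(SAMPLE_SIZE)]

def blocks_touched(sample):
    """Count the distinct memory blocks a bootstrap sample forces
    into main memory (a proxy for data movement / NVM writes)."""
    return len({i // BLOCK_SIZE for i in sample})

rng = random.Random(42)
print(blocks_touched(standard_bagging(rng)))     # typically close to all 64 blocks
print(blocks_touched(block_aware_bagging(rng)))  # at most 16 blocks
```

The sketch only demonstrates the memory-footprint side of the trade-off; how much "randomness" can be given up before prediction accuracy suffers is exactly the question the proposed design addresses.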