A cache hierarchy aware thread mapping methodology for GPGPUs

Bo Cheng Charles Lai, Hsien Kai Kuo, Jing Yang Jou

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

The recently proposed GPGPU architecture has added a multi-level hierarchy of shared caches to better exploit the data locality of general-purpose applications. The GPGPU design philosophy allocates most of the chip area to processing cores, resulting in a relatively small cache shared by a large number of cores compared with conventional multi-core CPUs. Applying a proper thread mapping scheme is therefore crucial for benefiting from constructive cache sharing and avoiding resource contention among thousands of threads. However, due to significant differences in architectures and programming models, existing thread mapping approaches for multi-core CPUs do not perform as effectively on GPGPUs. This paper proposes a formal model that captures both the characteristics of threads and the cache sharing behavior of multi-level shared caches. With appropriate proofs, the model forms a solid theoretical foundation for the proposed cache hierarchy aware thread mapping methodology for multi-level shared cache GPGPUs. The experiments reveal that the three-staged thread mapping methodology successfully improves data reuse at each cache level of GPGPUs and achieves an average of 2.3× to 4.3× runtime improvement over existing approaches.

Original language: English
Article number: 6747979
Pages (from-to): 884-898
Number of pages: 15
Journal: IEEE Transactions on Computers
Volume: 64
Issue number: 4
DOIs
Publication status: Published - 1 Apr 2015
