This paper presents the pipeline performance model, a generic GPU performance model that helps to understand the performance of GPU code by using a code representation that stays very close to the source code. The code is represented by a graph in which the nodes correspond to the source-code instructions and the edges to the data dependences between them. Each node is further annotated with two latencies that characterize the instruction's timing behavior on the GPU. This graph, together with a simple characterization of the GPU and the execution configuration, is used by a simulator to mimic the execution of the code. We validate the model on the micro-benchmarks used to determine the latencies and on a matrix multiplication kernel, on both an NVIDIA Fermi and an NVIDIA Pascal GPU. Initial results show that the simulated times follow the measured times, with acceptable errors, over a wide occupancy range. We argue that achieving better accuracy requires refining the model further to account for the complexity of memory accesses and warp scheduling, especially on more recent GPUs.
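For readers who want a concrete picture of the model described above, the sketch below (in Python) builds such a latency-annotated dependence graph and runs a toy simulation over it. It assumes that the two per-node latencies are an issue latency and a completion latency, and that warps are interleaved in a loose round-robin over a single issue slot; the instruction names, latency values, and scheduling rule are illustrative assumptions, not the paper's actual definitions or measurements.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class Node:
    # One source-level instruction in the dependence graph.
    name: str
    issue_latency: int        # cycles the issue slot stays busy (assumed meaning)
    completion_latency: int   # cycles until the result is usable by dependents (assumed meaning)
    deps: List["Node"] = field(default_factory=list)  # incoming data-dependence edges


def simulate(nodes: List[Node], warps: int) -> int:
    # Toy in-order simulation with a single issue slot and a loose
    # round-robin schedule over warps (an assumption, not the paper's rule).
    # Returns the cycle at which the last result becomes available.
    issue_free = 0                              # next cycle the issue slot is free
    ready: Dict[Tuple[str, int], int] = {}      # (instruction, warp) -> result-ready cycle
    for n in nodes:                             # program order
        for w in range(warps):                  # interleave warps to hide latency
            operands_ready = max((ready[(d.name, w)] for d in n.deps), default=0)
            start = max(issue_free, operands_ready)
            issue_free = start + n.issue_latency
            ready[(n.name, w)] = start + n.completion_latency
    return max(ready.values(), default=0)


if __name__ == "__main__":
    # Hypothetical three-instruction body: two global loads feeding a multiply-add.
    # The latency values are placeholders, not measured numbers from the paper.
    ld_a = Node("ld_a", issue_latency=1, completion_latency=400)
    ld_b = Node("ld_b", issue_latency=1, completion_latency=400)
    mad = Node("mad", issue_latency=1, completion_latency=18, deps=[ld_a, ld_b])
    for warps in (1, 4, 16):
        print(f"{warps:2d} warps -> {simulate([ld_a, ld_b, mad], warps)} cycles")

Even this crude sketch shows the qualitative trend the abstract reports: adding warps hides the load latencies, so the simulated time grows only slowly with occupancy.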
Original language: English
Title of host publication: Proceedings - 27th Euromicro International Conference on Parallel, Distributed and Network-Based Processing, PDP 2019
Publisher: IEEE
Pages: 260-265
Number of pages: 6
ISBN (Electronic): 9781728116440
ISBN (Print): 9781728116440
DOIs
Publication status: Published - 19 Mar 2019
Event: PDP 2019: Euromicro International Conference on Parallel, Distributed and Network-Based Processing - Università degli studi di Pavia - Department of Electrical, Computer and Biomedical Engineering, Pavia, Italy
Duration: 13 Feb 2019 - 15 Feb 2019
https://www.pdp2019.eu/

Publication series

Name: Proceedings - 27th Euromicro International Conference on Parallel, Distributed and Network-Based Processing, PDP 2019

Conference

Conference: PDP 2019
Country: Italy
City: Pavia
Period: 13/02/19 - 15/02/19
Internet address: https://www.pdp2019.eu/

Research areas

• GPU, latencies, model, performance, pipeline
