Verified Instruction-Level Energy Consumption Measurement for NVIDIA GPUs

February 18, 2020 · Yehia Arafa et al. · Los Alamos National Laboratory and New Mexico State University

GPUs are prevalent in modern computing systems at all scales. They consume a significant fraction of the energy in these systems. However, vendors do not publish the actual cost of the power/energy overhead of their internal microarchitecture. In this paper, we accurately measure the energy consumption of various PTX instructions found in modern NVIDIA GPUs. We provide an exhaustive comparison of more than 40 instructions for four high-end NVIDIA GPUs from four different generations (Maxwell, Pascal, Volta, and Turing). Furthermore, we show the effect of the CUDA compiler optimizations on the energy consumption of each instruction. We use three different software techniques, based on NVIDIA's NVML API, to read the GPU on-chip power sensors, and we provide an in-depth comparison between these techniques. Additionally, we verified the software measurement techniques against a custom-designed hardware power measurement setup. The results show that Volta GPUs have the best energy efficiency among all the tested generations across the different instruction categories. This work should aid in understanding NVIDIA GPUs' microarchitecture and should make energy measurements of any GPU kernel both efficient and accurate.


I Introduction

Applications that rely on graphics processing units (GPUs) have increased exponentially over the last decade, mainly due to the need for higher compute resources to perform general-purpose calculations. Thus, general-purpose graphics processing units (GPGPUs) commonly appear in different computing systems. The recent TOP500 list [1] shows that more than a third of the 153 systems debuting on the list are GPU-accelerated. Adding powerful devices such as GPUs has led to an increase in these systems' overall performance, but this increase in performance has also led to a dramatic increase in their electrical power consumption. Furthermore, the top supercomputer on the Green500 list [2] is the NVIDIA-built DGX SaturnV. SaturnV has a power efficiency of 15 GFlops/watt and consumes 97 kW, compared to 10,096 kW for Summit, the most powerful supercomputer on the TOP500 list.

Power consumption is now a primary metric for evaluating system performance, especially with the development of embedded/integrated GPUs and their application in edge/mobile computation. Researchers have shown that high power consumption has a significant effect on the reliability of GPUs [3]. Also, with the ongoing increase in the number of GPU cores, GPU power management is crucial [4, 5]. Hence, analyzing and predicting the power usage of the GPGPU's hardware components remains an active area of research. Several monitoring systems (hardware & software) have been proposed in the literature to estimate the total power usage of GPUs [6, 7, 8, 9]. However, estimating the energy consumption of the GPGPU's internal hardware components is particularly challenging, as the microarchitecture can change significantly from one generation to another. Moreover, the dominant vendor in the market, NVIDIA, has never published data on the actual energy cost of their GPUs' microarchitecture.

In this paper, we accurately measure the energy consumption of almost all the instructions that can execute on modern NVIDIA GPGPUs. We run specially designed micro-benchmarks on the GPU, monitor the change in the GPU's power usage, and compute the energy for each instruction. Since the optimizations provided by the CUDA (NVCC) compiler [10] can affect the latency of each instruction [11], we also show the effect of the CUDA compiler's high-level optimizations on the energy consumption of each instruction. We used NVIDIA's assembly-like language, Parallel Thread Execution (PTX) [12], to write these micro-benchmarks. With the machine-independent PTX, we have control over the exact sequence of instructions executing in the code. Thus, the measurement technique introduced has minimal overhead and is portable across different architectures/generations.

The results show that Volta GPUs have the best energy efficiency among all the tested generations for the different categories of instructions. On the other hand, Maxwell and Turing GPUs are power-hungry devices.

To compute the energy consumption, we use three different software techniques based on the NVIDIA Management Library (NVML) [6], which queries the onboard sensors and reads the power usage of the device. We implement two methods using the native NVML API, which we call the Sampling Monitoring Approach (SMA) and Multi-Threaded Synchronized Monitoring (MTSM). The third technique uses the newly released CUDA component in the PAPI v5.7.1 [13] API. Furthermore, we designed a hardware system to measure the power usage of the GPUs in real time. The hardware measurements are considered the ground truth for verifying the different software measurement techniques. We used the hardware setup on the Volta TITAN V GPU.

We compare the results of the MTSM and PAPI software techniques to the hardware measurement for each instruction. The comparison shows that the MTSM technique gives the best results since it integrates the power readings and correctly captures the start and end of a kernel.

To the best of our knowledge, we are the first to provide a comprehensive comparison of the energy consumption of each instruction (more than 40 instructions) in modern high-end NVIDIA GPGPUs. Furthermore, the effect of compiler optimizations on the energy consumption of each instruction has not been explored before in the literature. We are also the first to provide an in-depth comparison between different NVML power monitoring software techniques.

In summary, the following are the contributions of this paper:

  1. Accurate measurement of the energy consumption of almost all PTX instructions for four high-end NVIDIA GPGPUs from four different generations (Maxwell, Pascal, Volta, and Turing).

  2. A demonstration of the effect of CUDA compiler optimization levels on the energy consumption of each instruction.

  3. Three different software techniques for measuring GPU kernels' energy consumption.

  4. Verification of the software techniques against a custom-designed, in-house hardware power measurement on the Volta TITAN V GPU.

The rest of this paper is organized as follows: Section II provides a brief background on NVIDIA GPUs' internal architecture. Section III describes the micro-benchmarks used in the analysis. Section IV explains the differences between the three software techniques, while Section V presents the in-house direct hardware power measurement design. In Section VI we present the results. Section VII discusses related work, and finally, Section VIII concludes the paper.

II GPGPU Architecture

Fig. 1: General NVIDIA GPU architecture.

GPUs consist of a large number of processors called Streaming Multiprocessors (SMX in CUDA [14] terminology), as shown in Figure 1. These processors are mainly responsible for the computation. Each SMX has several scalar cores and a set of computational resources, including fully pipelined integer Arithmetic Logic Units (ALUs) for performing 32-bit integer instructions, Floating-Point Units (FPU32) for performing floating-point operations, and Double-Precision Units (DPU) for 64-bit computations. Each SMX also includes Special Function Units (SFU) that execute intrinsic instructions, and Load and Store units (LD/ST) for calculating source and destination memory addresses. In addition to the computational resources, each SMX is coupled with a certain number of warp schedulers, instruction dispatch units, and instruction buffer(s), along with texture and shared memory units. Each SMX has a private L1 cache, and all the SMXs share an L2 cache for global memory addresses. The exact number of SMXs on each GPU varies with the generation and the computational capabilities of the GPU.

GPU applications typically consist of one or more kernels that run on the device. All threads from the same kernel are grouped into a grid. The grid is made up of many blocks, and each block is composed of groups of 32 threads called warps. Grids and blocks represent a logical view of the thread hierarchy of a CUDA kernel. Warps execute instructions in a SIMD manner, meaning that all threads of the same warp execute the same instruction at any given time.
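As a minimal illustration of this hierarchy (the kernel, grid size, and block size below are arbitrary examples, not taken from the paper), a CUDA launch specifies the grid and block dimensions explicitly, and the hardware groups each block's threads into 32-thread warps:

// Compile with: nvcc hierarchy_sketch.cu
#include <cstdio>

// Each thread computes its global index from its block and thread coordinates.
__global__ void index_kernel(int *out) {
    int gid = blockIdx.x * blockDim.x + threadIdx.x;    // global thread id
    int warp_id = threadIdx.x / 32;                     // warp index within the block
    out[gid] = warp_id;
}

int main() {
    const int blocks = 4, threads_per_block = 128;      // grid of 4 blocks, 4 warps per block
    int *d_out;
    cudaMalloc(&d_out, blocks * threads_per_block * sizeof(int));
    index_kernel<<<blocks, threads_per_block>>>(d_out); // <<<grid, block>>> launch configuration
    cudaDeviceSynchronize();
    cudaFree(d_out);
    return 0;
}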

III PTX Microbenchmarks

We designed special micro-benchmarks that stress the GPU and expose its hidden characteristics so that the power usage of each instruction can be captured correctly.

We used Parallel Thread Execution (PTX) [12] to write the micro-benchmarks. PTX is a virtual assembly language used in NVIDIA's CUDA programming environment and provides an open, machine-independent ISA. The PTX ISA itself does not run on the device but rather gets translated to another, machine-dependent ISA named Source And Assembly (SASS). SASS is not open: NVIDIA does not allow writing native SASS instructions, unlike PTX, which provides a stable programming model for developers. There have been some research efforts [15, 16] to produce assembly tool-chains by reverse engineering and disassembling the SASS format to achieve better performance. Reading the SASS instructions can be done using the CUDA binary utilities (cuobjdump) [17]. The use of PTX helps control the exact sequence of executed instructions without any overhead. Since PTX is machine-independent, the code is portable across different CUDA runtimes and GPUs.

Figure 2 shows the compilation workflow, which leverages the compilation trajectory of the NVCC compiler. Since the PTX can only contain the code that executes on the device (GPU), we pass the instrumented PTX device code to the NVCC compiler to be linked with the host (CPU) CUDA C/C++ code. The PTX optimizing assembler (ptxas) is first used to transform the instrumented, machine-independent PTX code into machine-dependent (SASS) instructions and place them in a CUDA binary file (.cubin). This binary file is used to produce a fatbinary file, which gets embedded in the host C/C++ code. An empty kernel is initialized in the host code and is then replaced by the instrumented PTX kernel, which has the same header and the same name inside the (.fatbin.c) file. The kernel is executed with one block and one thread.

Fig. 2: An Overview of the Compilation Procedure.
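The paper links the instrumented PTX into the host binary through the NVCC fatbinary flow described above. As a simpler illustration of running a standalone PTX kernel with one block and one thread, the CUDA Driver API can also load a .ptx file directly; in the sketch below, the file name kernel.ptx, the entry name Div, and the single pointer argument are assumptions that mirror Figure 3 rather than part of the paper's toolchain.

// Compile with: nvcc driver_load_sketch.cpp -lcuda
#include <cuda.h>

int main() {
    CUdevice dev; CUcontext ctx; CUmodule mod; CUfunction fn; CUdeviceptr d_out;

    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);

    cuModuleLoad(&mod, "kernel.ptx");          // JIT-compiles the PTX for the present GPU
    cuModuleGetFunction(&fn, mod, "Div");      // entry name must match the .entry directive
    cuMemAlloc(&d_out, 16);                    // buffer for the kernel's global stores

    void *args[] = { &d_out };                 // matches .param .u64 Div_param_0
    cuLaunchKernel(fn, 1, 1, 1,                // grid: one block
                       1, 1, 1,                // block: one thread
                       0, 0, args, 0);
    cuCtxSynchronize();

    cuMemFree(d_out);
    cuModuleUnload(mod);
    cuCtxDestroy(ctx);
    return 0;
}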

Figure 3 shows an example of the instrumented PTX kernel for the unsigned Div instruction. Recently, Arafa et al. [11] presented a similar technique to find instruction latencies: they executed the instruction only once and read the clk register before and after its execution. The design here is different, since we need to capture the change in power usage, which would be unnoticeable if we executed the instruction only once. The key idea is to unroll a loop, execute the same instruction millions of times while recording the power, and then divide the resulting energy by the number of instructions to get the energy of a single instruction. The kernel in Figure 3 shows the micro-benchmark of the unsigned div instruction. We begin by initializing the used registers, lines [3–5]. Since PTX is a virtual assembly that gets translated to SASS, there is no limit on the number of registers that can be used. Still, in the real SASS assembly the number of registers is limited and varies from one generation/architecture to another; when the limit is exceeded, register variables are spilled to memory, causing changes in performance. Line [10] sets the loop count to 1M iterations. The loop body, lines [13–27], is composed of 5 back-to-back unsigned div instructions with dependencies, to make sure that the compiler does not optimize any of them away. We do a load-add-store operation on the output of the 5 div operations and begin the loop with new values each time to force the compiler to execute the instructions; otherwise, the compiler would run only the first iteration and elide the remaining ones. We follow the same approach for all the instructions, and the kernel is the same; the only difference is the instruction itself.

We measure the energy of the kernel twice: first with all the instructions, and second with the instructions commented out (lines [20–24]). We then use Eq. 1 to calculate the energy of an instruction. This eliminates both the steady-state power and any other overheads, so only the real energy of an instruction is calculated.

E_{instruction} = \frac{E_{kernel}^{with\ instructions} - E_{kernel}^{without\ instructions}}{N_{instructions}}    (1)
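As a concrete illustration of Eq. 1 (the energy values below are made up for the example, not measured), the per-instruction energy follows directly from the two kernel measurements and the instruction count:

#include <cstdio>

int main() {
    // Hypothetical example values, not taken from the paper's measurements.
    double e_with    = 27.40;           // J, kernel energy with the div instructions present
    double e_without = 7.15;            // J, kernel energy with the instructions commented out
    double n_instr   = 1.0e6 * 5.0;     // 1M loop iterations x 5 div instructions per iteration

    double e_per_instr = (e_with - e_without) / n_instr;   // Eq. 1
    std::printf("energy per div instruction: %e J\n", e_per_instr);
    return 0;
}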

III-A NVCC Compiler Optimization

The kernel is compiled with (–O3) and without (–O0) optimizations. This way, we capture the effect of the CUDA compiler's higher optimization levels on the energy consumption of each instruction. To make sure that, in the –O3 case, the compiler does not optimize the instructions away, we performed three different validations of the code. First, we made sure that the output of the kernel is correct: line 28 of Figure 3 stores the output of the loop, which we read and validate. Second, we validate the clk-register count of each instruction against the work of Arafa et al. [11]. Third, we inspect the SASS instructions. The loop bodies are small, and the compiler unrolls the loop for both –O3 and –O0. Furthermore, when commenting out the instructions, we dump the SASS code and verify that the difference is only the commented-out instructions.

1 .visible .entry Div(.param .u64 Div_param_0){
2
3        .reg .b32   %r<15>;
4        .reg .b64   %rd<5>;
5        .reg .pred  %p<2>;
6        ld.param.u64    %rd1, [Div_param_0];
7        mov.u32         %r3, 3;
8        mov.u32         %r4, 4;
9        st.global.u32   [%rd4 + 12], 0;
10        mov.u32         %r15, -1000000;
11
12    BB0_1:
13        add.u32         %r4, %r4, 1;
14        add.u32         %r3, %r3, 1;
15
16        div.u32         %r9,  %r4,  %r3;
17        div.u32         %r10, %r3,  %r9;
18        div.u32         %r11, %r9,  %r10;
19        div.u32         %r12, %r10, %r11;
20        div.u32         %r13, %r11, %r12;
21
22        ld.global.u32   %r9, [%rd4 + 12];
23        add.u32         %r10, %r9,  %r13;
24        st.global.u32   [%rd4 + 12], %r10;
25        add.u32         %r15, %r15, 1;
26        setp.ne.u32     %p1,  %r15, 0;
27        @%p1 bra        BB0_1;
28        st.global.u32   [%rd4], %r15;
29         ret;
30 }
Fig. 3: Unsigned Div instruction microbenchmark kernel written in PTX.

IV Software Measurement

NVIDIA provides an API named the NVIDIA Management Library (NVML) [6], which offers direct access to the queries exposed via the command-line utility nvidia-smi (the NVIDIA System Management Interface). NVML allows developers to query GPU device states such as GPU utilization, clock rates, and GPU temperature. Additionally, it provides access to the board power draw by querying the instantaneous onboard sensors. The community has widely used NVML since its first release alongside CUDA v4.1 in 2011. NVML ships with the NVIDIA display driver, and the SDK offers the API for its use.

We use NVML to read the device power usage while running the PTX micro-benchmarks and compute the energy of each instruction. There are several techniques for collecting power usage with NVML, and we found that their results vary. Therefore, we provide an in-depth comparison of these techniques at the level of individual instruction energies.

Fig. 4: Add & Div kernels' power usage vs. time on the TITAN V GPU: (a) integer Add kernel, (b) unsigned Div kernel.
Fig. 5: The same Add & Div kernels with the exact start and end of each kernel annotated: (a) integer Add kernel, (b) unsigned Div kernel.

IV-A Sampling Monitoring Approach (SMA)

The C-based API provided by NVML can query the power usage of the device and return an instantaneous power measurement. Therefore, it can be programmed to keep reading the hardware sensor at a certain frequency. This basic approach is popular and was used in other related works [18, 19, 20]. The nvmlDeviceGetPowerUsage() function retrieves the power usage reading for the device, in milliwatts; it is called and executed by the CPU. We configured the sampling frequency of reading the hardware sensors to its maximum, 66.7 Hz [19] (a 15 ms window between each call to the function).
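A minimal sketch of this polling loop is shown below; the device index 0, the fixed sample count, and the omitted error handling are simplifications for illustration.

// Compile with: g++ sma_sketch.cpp -lnvidia-ml   (NVML headers/libs come with the CUDA toolkit)
#include <nvml.h>
#include <unistd.h>
#include <cstdio>

int main() {
    nvmlDevice_t dev;
    unsigned int power_mw = 0;

    nvmlInit();
    nvmlDeviceGetHandleByIndex(0, &dev);          // GPU 0

    // Poll the on-board sensor in the background while the micro-benchmark runs.
    for (int i = 0; i < 2000; ++i) {              // ~30 s of samples at a 15 ms interval
        nvmlDeviceGetPowerUsage(dev, &power_mw);  // instantaneous board power in milliwatts
        std::printf("%d %u\n", i, power_mw);
        usleep(15000);                            // 15 ms between samples (about 66.7 Hz)
    }

    nvmlShutdown();
    return 0;
}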

We read the power sensor at the sample interval in the background while the micro-benchmarks are running. Examples of the output using this approach are shown in Figures 4(a) and 4(b). The two figures show the power consumption over time for the integer Add and unsigned integer Div kernels on the TITAN V (Volta) GPU. The power usage jumps shortly after the launch of the kernel and decreases in steps after the kernel finishes execution until it reaches the steady state. This whole process spans roughly 22 s and 33 s for Add and Div, respectively, whereas the two kernels' actual elapsed times are only 0.28 s and 13 s. That is, the GPU does something before and after the actual kernel execution. Hence, identifying the kernel window is hard, and this affects the output because the power consumption varies over time. One solution is to take the maximum reading between the two steady states, but this would be misleading for some kernels, especially the bigger ones. Therefore, owing to these issues, we exclude this approach from the reported results.

IV-B PAPI API

The Performance Application Programming Interface (PAPI) [13] provides an API to access the hardware performance counters found on most modern processors. Different performance metrics can be obtained through a simple programming interface from either C or Fortran. Researchers have used PAPI as a performance and power monitoring library for different hardware and software components [21, 22, 23, 24, 25]. It is also used as a middleware component in different profiling and tracing tools [26].

PAPI can work as a high-level wrapper for different components; for example, it uses the Intel RAPL interface [27] to report the power usage and energy consumption of Intel CPUs. Recently, PAPI version 5.7 added the NVML component, which supports both measuring and capping power usage on modern NVIDIA GPU architectures. It is worth mentioning that installing PAPI with the NVML component is tedious and not straightforward.

The advantage of using PAPI is that the measurements are synchronized with the kernel execution by default. The target kernel is invoked between the PAPI start and stop calls, and a single number representing the power event we need to measure is returned. The NVML component implemented in PAPI uses the function getPowerUsage(), which queries nvmlDeviceGetPowerUsage(). According to the documentation, this function is called only once, when the measurement is stopped. Thus, the power returned by this method is the instantaneous power when the kernel finishes execution. Although synchronizing with the kernel avoids the SMA issues, taking an instantaneous measurement when the kernel finishes execution can produce inaccurate results, especially for large and irregular kernels, as shown in Section VI. Note that PAPI also provides an example that works like the SMA approach, which we do not use in this paper.
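A hedged sketch of this flow is given below. The exact NVML power-event name is GPU-specific and should be discovered with the papi_native_avail utility, so the event string used here is only a placeholder, and the kernel is a stand-in for the instrumented PTX micro-benchmark.

// Compile with: nvcc papi_sketch.cu -lpapi
#include <papi.h>
#include <cuda_runtime.h>
#include <cstdio>

__global__ void dummy_kernel() { }                 // stand-in for the PTX micro-benchmark

int main() {
    int event_set = PAPI_NULL;
    long long value = 0;

    PAPI_library_init(PAPI_VER_CURRENT);
    PAPI_create_eventset(&event_set);

    // Placeholder event name: list the real power event for your GPU with papi_native_avail.
    PAPI_add_named_event(event_set, "nvml:::TITAN_V:power");

    PAPI_start(event_set);
    dummy_kernel<<<1, 1>>>();                      // one block, one thread
    cudaDeviceSynchronize();
    PAPI_stop(event_set, &value);                  // a single instantaneous power value (mW)

    std::printf("power at kernel end: %lld mW\n", value);
    return 0;
}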

IV-C Multi-Threaded Synchronized Monitoring (MTSM)

Algorithm 1: The MTSM approach — a monitoring thread (func1) polls nvmlDeviceGetPowerUsage() and appends the readings to power_readings while a volatile atomic flag is set; the main procedure sets the flag, records the start time, launches and synchronizes the kernel, records the elapsed time, and clears the flag.

In MTSM, we identify the exact window of the kernel execution. We modified the SMA to synchronize with the kernel execution so that only the power readings taken during the kernel are recorded. Since the NVML API is monitored from the host CPU, we use Pthreads for synchronization: one thread calls and monitors the kernel while the other thread records the power.

Algorithm 1 shows the MTSM procedure. We initialize a volatile atomic variable (flag) to zero, which we later use to gate the recording of power readings according to the start and end of the target kernel. We create a new thread (th1) that executes a function (func1) in parallel. This function performs the power monitoring, depending on the atomic flag, using the NVML function nvmlDeviceGetPowerUsage(), which returns the device power in milliwatts. The power readings taken during the kernel window are saved in an array (power_readings), which is later used to compute the kernel energy. In the main procedure, we flip the flag value, start the elapsed-time measurement, and launch the kernel, which starts the power monitoring. At the end of the kernel execution, we record the elapsed time and flip the flag back. We use the CUDA synchronize function to make sure that the power is recorded correctly. We do not specify any sampling interval for the NVML calls; although this yields redundant values, it is more accurate, and the effective power-reading frequency is limited only by how quickly NVML returns readings.

Figures 5(a) and 5(b) show the kernels from Figures 4(a) and 4(b) after identifying the exact kernel execution window; the new graphs are annotated with the start and end of the kernel. We observe that the kernel does not start immediately after the sudden rise in power from the steady state, but rather a couple of milliseconds after this sudden increase (see the Add kernel in Figure 5(a) for clarity). After the kernel finishes execution, the power remains high for a short time and then descends in steps until it reaches the steady state again. To compute the kernel's energy, we calculate the area under the power curve over the kernel window using Eq. 2. We believe this approach provides the most accurate measurement, since only the power readings of the kernel are recorded. Computing the energy as the area under the curve is more rigorous than multiplying the last power reading by the kernel's elapsed time, as is done in PAPI.

E_{kernel} = \int_{t_{start}}^{t_{end}} P(t)\,dt \approx \sum_{i} P_i \,(t_{i+1} - t_i)    (2)
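A minimal end-to-end sketch of the MTSM idea is shown below; the device index 0, the placeholder kernel, and the absence of error handling are simplifications, and the accumulation at the end corresponds to the area-under-the-curve computation of Eq. 2.

// Compile with: nvcc mtsm_sketch.cu -lnvidia-ml -lpthread
#include <nvml.h>
#include <pthread.h>
#include <atomic>
#include <chrono>
#include <vector>
#include <cstdio>

__global__ void dummy_kernel(unsigned int *out) {    // stand-in for the PTX micro-benchmark
    unsigned int x = 3, y = 4;
    for (int i = 0; i < (1 << 20); ++i) { x += 1; y += 1; x = y / x; }
    *out = x;                                        // keep the loop from being optimized away
}

static nvmlDevice_t dev;
static std::atomic<int> flag(0);                     // 1 while the kernel window is open
static std::vector<unsigned int> power_mw;           // power readings (mW)
static std::vector<double> ts;                       // timestamps (s)

static double now_sec() {
    using namespace std::chrono;
    return duration<double>(steady_clock::now().time_since_epoch()).count();
}

static void *monitor(void *) {                       // plays the role of func1 in Algorithm 1
    while (flag.load()) {
        unsigned int mw = 0;
        if (nvmlDeviceGetPowerUsage(dev, &mw) == NVML_SUCCESS) {
            power_mw.push_back(mw);                  // no sleep: poll as fast as NVML allows
            ts.push_back(now_sec());
        }
    }
    return nullptr;
}

int main() {
    nvmlInit();
    nvmlDeviceGetHandleByIndex(0, &dev);

    unsigned int *d_out;
    cudaMalloc(&d_out, sizeof(unsigned int));

    pthread_t th1;
    flag.store(1);                                   // open the kernel window
    pthread_create(&th1, nullptr, monitor, nullptr);

    dummy_kernel<<<1, 1>>>(d_out);                   // one block, one thread
    cudaDeviceSynchronize();                         // kernel window ends here

    flag.store(0);                                   // close the window, stop monitoring
    pthread_join(th1, nullptr);

    double energy_j = 0.0;                           // area under the power curve (Eq. 2)
    for (size_t i = 1; i < power_mw.size(); ++i)
        energy_j += 1e-3 * power_mw[i] * (ts[i] - ts[i - 1]);
    std::printf("kernel energy: %f J over %zu samples\n", energy_j, power_mw.size());

    cudaFree(d_out);
    nvmlShutdown();
    return 0;
}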
Fig. 6: Hardware Measurement Setup

V Hardware Measurement

GPUs dissipate power as leakage power and dynamic power. Leakage power is the constant power the GPU consumes to maintain its operation, whereas dynamic power depends on the kernel's instructions and operations. We take both power components into account in this study.

Modern graphics cards have two primary power sources. The first is the direct DC power supply (12 V), provided through the side of the card. The second is the PCI-E power (12 V and 3.3 V), provided through the motherboard. We designed a system to measure each power source in real time. The hardware measurements are considered the ground truth for verifying the different software measurement techniques.

Figure 6 shows the experimental hardware setup with all the components. To capture the total power, we measure the current and voltage of each power source simultaneously. A clamp meter and series shunt resistors are used for the current measurements. For the voltage measurements, we probe the voltage lines directly and acquire the signals with an oscilloscope. Equation 3 is used to calculate the total hardware power drained by the GPU from the two power sources.

Direct DC power supply source: The power supply provides 12 V through a direct wired link. We use both 6-pin and 8-pin PCI-E power connectors, and this direct DC power supply source is the main contributor to the card's power. Figure 6 shows a clamp meter measuring the current of the direct power supply connection (I_supply); the voltage of the power supply (V_supply) is measured with an oscilloscope probe, and both signals are acquired by the oscilloscope, as shown in Figure 6. The power drawn from this source is therefore computed by simple multiplication: the third term of Eq. 3 multiplies I_supply by V_supply, where V_supply is the voltage of the direct power supply.

Fig. 7: Circuit Diagram of Hardware Measurement

PCI-E power source: Graphics cards are connected to the motherboard through the PCI-E x16 slot, which provides both 12 V and 3.3 V. To accurately measure the power that flows through this slot, an intermediate power-sensing stage must be installed between the card and the motherboard. We designed a custom PCI-E riser board that measures the power supplied through the motherboard, using two in-series shunt resistors as the sensing elements. As shown in Figure 7, a shunt resistor (R_shunt) is connected in series with the 12 V rail and another with the 3.3 V rail. By the series property, the current that flows through each R_shunt is the same current that goes to the graphics card, so we measure the voltage drops across the shunts (V_r,12V and V_r,3.3V) with the oscilloscope and divide them by the R_shunt value to obtain the rail currents. The rail voltages themselves are measured on the riser board. The same calculation is applied to both the 12 V and the 3.3 V levels, as shown in Eq. 3.

P_{total} = \frac{V_{r,12V}}{R_{shunt}} \cdot V_{12V} + \frac{V_{r,3.3V}}{R_{shunt}} \cdot V_{3.3V} + I_{supply} \cdot V_{supply}    (3)
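The sketch below shows how the three measured sources would be combined into the total board power of Eq. 3; the shunt value and the sample numbers are placeholders, since the real values come from the riser-board design and the oscilloscope traces.

#include <cstdio>

static const double R_SHUNT = 0.01;    // ohms -- assumed shunt value, not from the paper

// v_r12, v_r33: voltage drops across the shunts on the 12 V and 3.3 V rails (V)
// v_12, v_33:   rail voltages measured on the riser board (V)
// i_supply, v_supply: clamp-meter current (A) and probed voltage (V) of the direct supply
double total_power_w(double v_r12, double v_12,
                     double v_r33, double v_33,
                     double i_supply, double v_supply) {
    double p_pcie_12 = (v_r12 / R_SHUNT) * v_12;   // first term of Eq. 3
    double p_pcie_33 = (v_r33 / R_SHUNT) * v_33;   // second term of Eq. 3
    double p_supply  = i_supply * v_supply;        // third term of Eq. 3
    return p_pcie_12 + p_pcie_33 + p_supply;
}

int main() {
    // Made-up sample values purely for illustration.
    std::printf("total board power: %.2f W\n",
                total_power_w(0.02, 12.1, 0.005, 3.31, 12.5, 12.0));
    return 0;
}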

VI Results

We run each instruction found in the latest PTX version, 6.4 [12], and show its energy consumption. We report the results of MTSM and PAPI on four different NVIDIA GPGPUs from four different generations/architectures:

  • GTX TITAN X: A GPU from the Maxwell architecture [28] with a compute capability of 5.2. It has 3584 cores that run at a 151 MHz clock frequency.

  • GTX 1080 Ti: A GPU from the Pascal architecture [29] with a compute capability of 6.1. It has 3584 cores that run at a 1481 MHz clock frequency.

  • TITAN V: A GPU from the Volta architecture [30] with a compute capability of 7.0. It has 5120 cores that run at a 1200 MHz clock frequency.

  • TITAN RTX: A GPU from the Turing architecture [31] with a compute capability of 7.5. It has 4608 cores that run at a 1350 MHz clock frequency.

We used the CUDA NVCC compiler version 10.1 [10] to compile and run the codes. The CUDA toolkit comes with the NVML library [6], a C-based programmatic interface for monitoring and managing different GPU states.

Table I enumerates the energy consumption of the various ALU instructions for the different GPUs. For simplicity, we refer to each GPU by its generation. We denote the (–O3) version as Optimized and the (–O0) version as Non-Optimized.

Overall, Volta GPUs have the lowest energy consumption per instruction among all the tested GPUs. Pascal comes second after Volta, while Maxwell and Turing are very power-hungry devices except for some categories of instructions. Furthermore, Maxwell has very high energy consumption for single- and double-precision floating-point instructions.

For Half Precision (FP16) instructions, Volta and Turing have much better results than Pascal. This confirms that both architectures are suitable for approximate-computing applications (e.g., deep learning and energy-saving computing). We did not run FP16 instructions on Maxwell, as Pascal was the first architecture to offer FP16 support. The same trend can be found in Multi Precision (MP) instructions, where Volta and Pascal have better energy consumption than the two other generations. MP [32] instructions are essential in a wide variety of algorithms in computational mathematics (e.g., number theory, random matrix problems, experimental mathematics), and they are also used in cryptography and security algorithms.

Overall, the energy of the Non-Optimized version is always higher than that of the Optimized version. One reason is that the number of cycles at the –O0 optimization level is higher than at the –O3 level [11]; thus, each instruction takes more time to finish execution.

PAPI vs. MTSM: The dominant tendency in the results is that the PAPI readings are always higher than the MTSM readings. Although the difference is not significant for small kernels, it can reach 1 J for bigger kernels such as the single- and double-precision floating-point div instructions.

VI-A Verification against the Hardware Measurement

Since Volta GPUs are the primary GPUs in data centers, we verified the different software techniques (MTSM & PAPI) against the hardware setup on the Volta TITAN V GPU.

Compared to the ground-truth hardware measurement, over all the instructions (each run ten times), the average Mean Absolute Percentage Error (MAPE) of the MTSM energy is 6.39%, while the average Root Mean Square Error (RMSE) is 3.97. In contrast, the PAPI average MAPE is 10.24% and the average RMSE is 5.04. Figure 8 shows the error of MTSM and PAPI relative to the hardware measurement for some of the instructions. The verification results show that MTSM is more accurate than PAPI, as it is closer to the hardware measurement.
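For reference, these error metrics are assumed to follow their standard definitions, with E_i^{HW} the hardware-measured energy and E_i^{SW} the software-reported (MTSM or PAPI) energy of instruction i over n instructions:

\mathrm{MAPE} = \frac{100}{n} \sum_{i=1}^{n} \left| \frac{E_i^{HW} - E_i^{SW}}{E_i^{HW}} \right|,
\qquad
\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( E_i^{HW} - E_i^{SW} \right)^2}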

Fig. 8: Instruction-level verification of MTSM & PAPI against the hardware measurement on the Volta TITAN V GPU. Int, F, and D denote integer, float, and double instructions, respectively.

Instruction | Maxwell (Opt.) | Pascal (Opt.) | Volta (Opt.) | Turing (Opt.) | Maxwell (Non-Opt.) | Pascal (Non-Opt.) | Volta (Non-Opt.) | Turing (Non-Opt.)

(1) Integer Arithmetic Instructions
add / sub / min / max | 0.0942 , 0.0461 | 0.0277 , 0.0200 | 0.0064 , 0.0012 | 0.0293 , 0.0281 | 1.2453 , 1.0264 | 0.6509 , 0.6203 | 0.2531 , 0.2384 | 0.8340 , 0.7905
mul / mad | 3.0239 , 2.7309 | 0.2853 , 0.1727 | 0.0092 , 0.0014 | 0.0434 , 0.0233 | 4.5826 , 4.2959 | 3.6194 , 3.5986 | 0.5228 , 0.4912 | 0.9969 , 0.9675
{s} div | 10.5921 , 6.7819 | 5.0270 , 4.9889 | 4.2489 , 4.0660 | 7.2119 , 6.5499 | 64.7649 , 64.6306 | 44.5100 , 44.5609 | 27.4008 , 27.0584 | 48.7700 , 48.4940
{s} rem | 7.8512 , 6.6833 | 5.0539 , 4.9687 | 4.2100 , 4.0138 | 7.3197 , 6.7982 | 61.1036 , 61.3000 | 42.4800 , 42.0521 | 25.4413 , 25.1175 | 48.3881 , 47.9075
abs | 0.800 , 0.747 | 0.2927 , 0.2349 | 0.0647 , 0.0621 | 0.3000 , 0.2710 | 2.1611 , 1.8841 | 1.2170 , 1.2448 | 1.4263 , 1.4013 | 2.3084 , 2.3880
{u} div | 7.44783 , 6.2899 | 4.7398 , 4.5889 | 3.9254 , 3.8706 | 6.6068 , 6.038 | 52.5558 , 52.3220 | 36.2400 , 36.0963 | 20.4411 , 20.2517 | 35.8200 , 35.6736
{u} rem | 7.5357 , 6.4006 | 4.8380 , 4.7603 | 3.9587 , 3.9471 | 6.8026 , 6.3093 | 50.6491 , 50.4818 | 35.0700 , 34.8370 | 19.6811 , 19.2906 | 35.1347 , 35.0062

(2) Logic and Shift Instructions
and / or / not / xor | 0.0942 , 0.0461 | 0.0277 , 0.0200 | 0.0064 , 0.0012 | 0.0293 , 0.0281 | 1.2453 , 1.0264 | 0.6509 , 0.6203 | 0.2531 , 0.2384 | 0.8340 , 0.7905
cnot | 0.3362 , 0.0343 | 0.3227 , 0.2423 | 0.0071 , 0.0077 | 0.2840 , 0.1011 | 2.0562 , 1.7680 | 1.8762 , 1.8498 | 2.3421 , 2.3174 | 3.9990 , 3.9181
shl / shr | 0.0942 , 0.0461 | 0.0277 , 0.0200 | 0.0064 , 0.0012 | 0.0293 , 0.0281 | 1.2453 , 1.0264 | 0.6509 , 0.6203 | 0.2531 , 0.2384 | 0.8340 , 0.7905

(3) Floating Single Precision Instructions
add / sub / min / max | 0.0942 , 0.0461 | 0.0277 , 0.0200 | 0.0064 , 0.0012 | 0.0293 , 0.0281 | 1.2453 , 1.0264 | 0.6509 , 0.6203 | 0.2531 , 0.2384 | 0.8340 , 0.7905
mul / mad / fma | 3.0239 , 2.7309 | 0.2778 , 0.2008 | 0.0021 , 0.0014 | 0.2933 , 0.2811 | 4.5826 , 4.2959 | 0.6509 , 0.6203 | 0.4874 , 0.4797 | 0.8340 , 0.7905
div | 10.6203 , 9.4351 | 6.9934 , 6.8707 | 5.1096 , 5.0355 | 8.6425 , 7.9232 | 57.2252 , 56.6529 | 50.4741 , 49.9350 | 34.1050 , 33.3816 | 58.8700 , 58.6767

(4) Double Precision Instructions
add / sub / min / max | 2.3058 , 2.0061 | 1.8610 , 1.8606 | 0.3608 , 0.3567 | 2.6810 , 2.6176 | 3.3017 , 2.5143 | 2.0070 , 2.0586 | 0.5158 , 0.5114 | 4.6315 , 4.1623
div | 30.0634 , 28.8160 | 19.6393 , 19.2843 | 3.7249 , 3.6828 | 25.7757 , 23.6016 | 101.3056 , 100.7807 | 50.3810 , 50.0800 | 31.0212 , 30.4121 | 67.4127 , 67.4025

(5) Half Precision Instructions
add / sub / mul | NA | 2.9601 , 2.8788 | 0.0924 , 0.0624 | 0.3740 , 0.1220 | NA | 3.5727 , 3.4259 | 0.5027 , 0.4656 | 0.9972 , 0.9631

(6) Multi Precision Instructions
add.cc / addc / sub.cc | 0.3922 , 0.0791 | 0.3152 , 0.1492 | 0.0669 , 0.0535 | 0.1293 , 0.1065 | 1.2502 , 1.0270 | 0.6685 , 0.6317 | 0.5187 , 0.4938 | 0.9979 , 0.9680
subc | 0.6934 , 0.3593 | 0.3655 , 0.3499 | 0.1006 , 0.089 | 0.4339 , 0.1677 | 2.1672 , 1.8927 | 1.2646 , 0.9002 | 0.9889 , 0.952467 | 1.9107 , 1.8704
mad.cc / madc | 1.1575 , 0.7697 | 0.7981 , 0.6768 | 0.0730 , 0.0631 | 0.1621 , 0.1357 | 4.7049 , 4.4307 | 3.7018 , 3.6865 | 0.5165 , 0.5043 | 0.9979 , 0.9657

(7) Special Mathematical Instructions
rcp | 6.4492 , 5.3416 | 4.1609 , 4.0320 | 2.4265 , 2.4270 | 4.3064 , 3.9514 | 18.6762 , 18.2830 | 13.1662 , 13.1460 | 10.2930 , 10.0538 | 19.3208 , 19.2601
sqrt | 6.3630 , 5.3923 | 4.1114 , 4.0068 | 2.4349 , 2.4219 | 4.3816 , 4.0533 | 19.0402 , 18.6694 | 13.4900 , 13.4185 | 10.5023 , 10.2700 | 19.7800 , 19.6984
approx.sqrt | 0.8527 , 0.4961 | 0.3598 , 0.2345 | 1.2311 , 1.2076 | 2.1648 , 2.0121 | 15.9024 , 15.5452 | 10.7200 , 10.6812 | 8.3867 , 8.2517 | 15.4991 , 15.4438
rsqrt | 0.5174 , 0.303 | 0.2573 , 0.1163 | 1.2488 , 1.2432 | 2.2491 , 2.0898 | 15.0459 , 14.6802 | 10.6800 , 10.6920 | 8.3784 , 8.2320 | 15.8300 , 15.7700
sin / cos | 0.3410 , 0.1507 | 0.1345 , 0.2742 | 0.5887 , 0.5867 | 1.0070 , 0.9065 | 1.2927 , 0.8940 | 1.1390 , 0.8650 | 1.0046 , 0.9788 | 1.9340 , 1.8835
lg2 | 0.5075 , 0.3098 | 0.3618 , 0.2371 | 1.2357 , 1.2287 | 2.1451 , 2.1634 | 14.6789 , 15.0598 | 10.7646 , 10.6786 | 8.4058 , 8.2500 | 15.6505 , 15.6127
ex2 | 0.5147 , 0.3094 | 0.2383 , 0.3372 | 0.4798 , 0.4709 | 1.0188 , 0.6971 | 14.0001 , 13.6377 | 9.9840 , 9.9685 | 7.3144 , 7.2030 | 13.5252 , 13.4070
copysign | 0.2099 , 0.1700 | 0.2989 , 0.2339 | 0.0910 , 0.0880 | 0.1627 , 0.1379 | 3.8932 , 3.5953 | 3.1134 , 3.1020 | 2.3692 , 2.3490 | 4.0546 , 3.9487

(8) Integer Intrinsic Instructions
mul24() / mad24() | 0.3915 , 0.2939 | 0.2853 , 0.2727 | 0.2263 , 0.2119 | 0.3713 , 0.3415 | 6.7332 , 6.4263 | 4.8636 , 4.8636 | 2.3732 , 2.3249 | 4.6364 , 4.5942
sad() | 0.0316 , 0.015 | 0.2523 , 0.1243 | 0.0075 , 0.0038 | 0.2428 , 0.0422 | 1.2495 , 1.0277 | 0.6371 , 0.6646 | 0.5158 , 0.5029 | 1.0100 , 0.9757
popc() | 0.074 , 0.057 | 0.1347 , 0.2674 | 0.3968 , 0.3990 | 0.0815 , 0.0603 | 2.0281 , 1.7728 | 1.89123 , 1.9133 | 1.4984 , 1.4601 | 2.8428 , 2.7949
clz() | 0.0729 , 0.0479 | 0.2644 , 0.3339 | 0.5683 , 0.5657 | 0.3124 , 0.2817 | 2.0755 , 1.7944 | 1.1670 , 1.2034 | 0.9145 , 0.8961 | 1.1554 , 1.4956
bfind() | 0.0488 , 0.0374 | 0.2326 , 0.3081 | 0.2915 , 0.2902 | 0.0304 , 0.0052 | 1.1997 , 0.9821 | 0.5912 , 0.6185 | 0.4688 , 0.4582 | 0.8010 , 0.7546

TABLE I: Energy consumption of GPU instructions. {s} & {u} denote signed and unsigned instructions, respectively. In each cell, the first number is the PAPI measurement and the second is the MTSM measurement. All numbers are in joules (J).

VII Related Work

In this section, we discuss some of the related work; additional details can be found in [33]. Several works in the literature have proposed to analyze and estimate the energy consumption and power usage of GPUs. Researchers have proposed several analytical methods [34, 35, 18] to indirectly model and predict a GPU kernel's total power/energy. Likewise, several methods rely on cycle-accurate simulators [36].

Power/energy measurement can be carried out with two different approaches: software-oriented solutions, where the internal power sensors are queried using NVML, and hardware-oriented solutions using an external hardware setup.

Software-oriented approaches: Arunkumar et al. [18] used a direct NVML sampling monitoring approach running in the background while using special micro-benchmarks to calculate the energy consumption of basic compute/memory instructions and feed it to their model. They ran their evaluation on a (Tesla K40) GPU. They intentionally disabled all compiler optimizations and compiled their micro-benchmarks with the -O0 flag; hence, they do not report any energy-consumption results with optimization flags enabled. Burtscher et al. [19] analyzed the power reported by NVML for a (Tesla K20) GPU.

Hardware-oriented approaches: Zhao et al. [37] used an external power meter on an older GPU from the Fermi [38] architecture (GeForce GTX 470), where they designed a micro-benchmark to compute the energy of some PTX instructions and feed it into their model. The authors of [39] validate their roofline model of energy by using PowerMon 2 [9] and a custom PCIe interposer to calculate the instantaneous power of a (GTX 580) GPU.

Recently, Sen et al. [40] provided an assessment of the quality and performance of power-profiling mechanisms using hardware and software techniques. They compared a hardware approach using PowerInsight [8], a commercial product from Penguin Computing [41], to the software NVML approach on a matrix-multiplication CUDA benchmark they developed. Kasichayanula et al. [42] used NVML to calculate the energy consumption of some GPU units to drive their model and validated it with a Kill-A-Watt power meter. While these types of hardware power meters are cheap and straightforward to use, they do not give accurate measurements, especially in HPC settings [33].

In a similar spirit, we follow the same line of research. Nevertheless, we focus on the energy consumption of individual instructions and the effect of CUDA compiler optimizations on them. We also assess the quality of the different software techniques and verify the software results against a custom in-house hardware setup.

VIII Conclusions

In this paper, we accurately measure the energy consumption of the various instructions that can execute on modern NVIDIA GPUs. We also show the effect of the different optimization levels of the CUDA (NVCC) compiler on the energy consumption of each instruction. We provide an in-depth comparison of the different software techniques used to query the onboard internal GPU sensors and compare them to a custom-designed hardware power measurement. Overall, the paper provides an easy and straightforward way to measure the energy consumption of any NVIDIA GPU kernel. Additionally, our contributions will help modeling frameworks [43] and simulators [36] make more precise predictions of GPUs' energy/power consumption.

References

  • [1] Top500. [Online]. Available: https://www.top500.org/
  • [2] Green500. [Online]. Available: https://www.top500.org/green500/
  • [3] L. B. Gomez, F. Cappello, L. Carro, N. DeBardeleben, B. Fang, S. Gurumurthi, K. Pattabiraman, P. Rech, and M. S. Reorda, “Gpgpus: How to combine high computational power with high reliability,” in Proceedings of the Conference on Design, Automation & Test in Europe, ser. DATE 2014. 3001 Leuven, Belgium, Belgium: European Design and Automation Association, 2014, pp. 341:1–341:9.
  • [4] A. Pathania, Qing Jiao, A. Prakash, and T. Mitra, “Integrated cpu-gpu power management for 3d mobile games,” in 2014 51st ACM/EDAC/IEEE Design Automation Conference (DAC), June 2014, pp. 1–6.
  • [5] O. Kayiran, A. Jog, A. Pattnaik, R. Ausavarungnirun, X. Tang, M. Kandemir, G. Loh, O. Mutlu, and C. Das, “uc-states: Fine-grained gpu datapath power management,” in Parallel Architectures and Compilation Techniques - Conference Proceedings, PACT, pp. 17–30, 2016.
  • [6] NVIDIA Management Library (NVML), 2019. [Online]. Available: https://docs.nvidia.com/pdf/NVMLAPIReferenceGuide.pdf
  • [7] J. W. Romein and B. Veenboer, “Powersensor 2: A fast power measurement tool,” in 2018 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), April 2018, pp. 111–113.
  • [8] J. H. Laros, P. Pokorny, and D. DeBonis, “Powerinsight - a commodity power measurement capability,” in 2013 International Green Computing Conference Proceedings, June 2013, pp. 1–6.
  • [9] D. Bedard, M. Y. Lim, R. Fowler, and A. Porterfield, “Powermon: Finegrained and integrated power monitoring for commodity computer systems,” in Proceedings of the IEEE SoutheastCon 2010 (SoutheastCon), March 2010, pp. 479–484.
  • [10] CUDA Compiler Driver (NVCC) v10.1, 2019. [Online]. Available: https://docs.nvidia.com/cuda/pdf/CUDACompilerDriverNVCC.pdf
  • [11] Y. Arafa, A. A. Badawy, G. Chennupati, N. Santhi, and S. Eidenbenz, “Low overhead instruction latency characterization for nvidia gpgpus,” in 2019 IEEE High Performance Extreme Computing Conference (HPEC), Sep. 2019, pp. 1–8.
  • [12] Parallel Thread Execution ISA v6.4, 2019. [Online]. Available: https://docs.nvidia.com/cuda/pdf/ptxisa6.4.pdf
  • [13] Performance Application Programming Interface (PAPI), 2009. [Online]. Available: https://icl.utk.edu/papi/index.html
  • [14] CUDA Programming Guide, 2018. [Online]. Available: https://docs.nvidia.com/cuda/archive/9.0/cuda-c-programming-guide/index.html
  • [15] X. Zhang, G. Tan, S. Xue, J. Li, K. Zhou, and M. Chen, “Understanding the gpu microarchitecture to achieve bare-metal performance tuning,” in Proceedings of the 22Nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, ser. PPoPP ’17. New York, NY, USA: ACM, 2017, pp. 31–43.
  • [16] S. Gray, MaxAs: Assembler for NVIDIA Maxwell architecture, 2011. [Online]. Available: https://github.com/NervanaSystems/maxas
  • [17] CUDA Binary Utilities, 2019. [Online]. Available: https://docs.nvidia.com/cuda/pdf/CUDABinaryUtilities.pdf
  • [18] A. Arunkumar, E. Bolotin, D. Nellans, and C.-J. Wu, “Understanding the future of energy efficiency in multi-module gpus,” in Proceedings - 25th IEEE International Symposium on High Performance Computer Architecture, HPCA 2019, ser. HPCA ’19. pp. 519–532.
  • [19] M. Burtscher, I. Zecena, and Z. Zong, “Measuring gpu power with the k20 built-in sensor,” in Proceedings of Workshop on General Purpose Processing Using GPUs, ser. GPGPU-7, 2014, pp. 28:28–28:36.
  • [20] Mariza Ferro et al., “Analysis of GPU Power Consumption Using Internal Sensor,” Zenodo July 2017. [Online]. Available: http://doi.org/10.5281/zenodo.833347
  • [21] D. Terpstra, H. Jagode, H. You, and J. Dongarra, “Collecting performance data with papi-c,” in Tools for High Performance Computing, 2010, pp. 157–173.
  • [22] A. D. Malony, S. Biersdorff, S. Shende, H. Jagode, S. Tomov, G. Juckeland, R. Dietrich, D. Poole, and C. Lamb, “Parallel performance measurement of heterogeneous parallel systems with gpus,” in 2011 International Conference on Parallel Processing, Sep. 2011, pp. 176–185.
  • [23] V. M. Weaver, M. Johnson, K. Kasichayanula, J. Ralph, P. Luszczek, D. Terpstra, and S. Moore, “Measuring energy and power with papi,” in 2012 41st International Conference on Parallel Processing Workshops, Sep. 2012, pp. 262–268.
  • [24] H. McCraw, D. Terpstra, J. Dongarra, K. Davis, and R. Musselman, “Beyond the cpu: Hardware performance counter monitoring on blue gene/q,” in Supercomputing, 2013, pp. 213–225.
  • [25] A. Haidar, H. Jagode, A. YarKhan, P. Vaccaro, S. Tomov, and J. Dongarra, “Power-aware computing: Measurement, control, and performance analysis for intel xeon phi,” in 2017 IEEE High Performance Extreme Computing Conference (HPEC), Sep. 2017, pp. 1–7.
  • [26] A. Agelastos, B. Allan, J. Brandt, P. Cassella, J. Enos, J. Fullop, A. Gentile, S. Monk, N. Naksinehaboon, J. Ogden, M. Rajan, M. Showerman, J. Stevenson, N. Taerat, and T. Tucker, “The lightweight distributed metric service: A scalable infrastructure for continuous monitoring of large scale computing systems and applications,” in SC ’14: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis), Nov 2014, pp. 154–165.
  • [27] H. David, E. Gorbatov, U. R. Hanebutte, R. Khanna, and C. Le, “Rapl:Memory power estimation and capping,” in 2010 ACM/IEEE International Symposium on Low-Power Electronics and Design (ISLPED), Aug 2010, pp. 189–194.
  • [28] NVIDIA Maxwell GPU Architecture, 2014. [Online]. Available: https://international.download.nvidia.com/geforce-com/international/pdfs/GeForce-GTX-750-Ti-Whitepaper.pdf
  • [29] NVIDIA Pascal GPU Architecture, 2016. [Online]. Available: https://images.nvidia.com/content/pdf/tesla/whitepaper/pascal-architecture-whitepaper.pdf
  • [30] NVIDIA Volta GPU Architecture, 2017. [Online]. Available: https://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf
  • [31] NVIDIA Turing GPU Architecture, 2018. [Online]. Available: https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/turing-architecture/
  • [32] N. Emmart, “A study of high performance multiple precision arithmetic on graphics processing units,” Ph.D. dissertation, UMASS, 2018. [Online]. Available: https://scholarworks.umass.edu/dissertations_2/1164
  • [33] R. A. Bridges, N. Imam, and T. M. Mintz, “Understanding gpu power: A survey of profiling, modeling, and simulation methods,” in ACM Comput. Surv., vol. 49, no. 3, pp. 41:1–41:27, Sep. 2016.
  • [34] X. Ma, M. Dong, L. Zhong, and Z. Deng, “Statistical power consumption analysis and modeling for gpu-based computing,” IEEE Micro, vol. 31, no. 2, p. 50–59, Mar. 2011.
  • [35] X. Ma, M. Dong, L. Zhong, and Z. Deng, “An integrated gpu power and performance model,” in Proceedings of the 37th Annual International Symposium on Computer Architecture, ser. ISCA ’10. New York, NY, USA: ACM, 2010, pp. 280–289.
  • [36] J. Leng, T. Hetherington, A. ElTantawy, S. Gilani, N. S. Kim, T. M.Aamodt, and V. J. Reddi, “Gpuwattch: Enabling energy optimizations in gpgpus,” in Proceedings of the 40th Annual International Symposium on Computer Architecture, ser. ISCA ’13, 2013
  • [37] Q. Zhao, H. Yang, Z. Luan, and D. Qian, “Poigem: A programming-oriented instruction level gpu energy model for cuda program,” in Proceedings of the 13th International Conference on Algorithms and Architectures for Parallel Processing, Springer, 2013, pp. 129–142
  • [38] C. M. Wittenbrink, E. Kilgariff, and A. Prabhu, “Fermi gf100 gpu architecture,” IEEE Micro, vol. 31, no. 2, p. 50–59, Mar. 2011.
  • [39] J. W. Choi, D. Bedard, R. Fowler, and R. Vuduc, “A roofline model of energy,” in 2013 IEEE 27th International Symposium on Parallel and Distributed Processing, May 2013, pp. 661–672.
  • [40] S. Sen, N. Imam, and C. Hsu, “Quality assessment of gpu power profiling mechanisms,” in 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), May 2018, pp. 702–711.
  • [41] Penguin Computing, 2012. [Online]. Available: https://www.penguincomputing.com/company/press-releases
  • [42] K. Kasichayanula, D. Terpstra, P. Luszczek, S. Tomov, S. Moore, and G. D. Peterson, “Power aware computing on gpus,” in 2012 Symposium on Application Accelerators in High Performance Computing, July 2012, pp. 64–73.
  • [43] Y. Arafa, A. A. Badawy, G. Chennupati, N. Santhi, and S. Eidenbenz, “Ppt-gpu: Scalable gpu performance modeling,” IEEE Computer Architecture Letters, vol. 18, no. 1, pp. 55–58, Jan 2019.