Validation of hardware events for successful performance pattern identification in High Performance Computing

10/11/2017
by   Thomas Röhl, et al.
FAU

Hardware performance monitoring (HPM) is a crucial ingredient of performance analysis tools. While there are interfaces like LIKWID, PAPI or the kernel interface perf_event which provide HPM access with some additional features, many higher-level tools combine event counts with results retrieved from other sources like function call traces to derive (semi-)automatic performance advice. However, although HPM has been available for x86 systems since the early 1990s, only a small subset of the HPM features is used in practice. Performance patterns provide a more comprehensive approach, enabling the identification of various performance-limiting effects. Patterns address issues like bandwidth saturation, load imbalance, non-local data access in ccNUMA systems, or false sharing of cache lines. This work defines HPM event sets that are best suited to identify a selection of performance patterns on the Intel Haswell processor. We validate the chosen event sets for accuracy in order to arrive at a reliable pattern detection mechanism, and we point out shortcomings that cannot be easily circumvented due to bugs or limitations in the hardware.


0.1 Introduction and related work

Hardware performance monitoring (HPM) was introduced for the x86 architecture with the Intel Pentium in 1993 ryan1993inside . Since then, HPM has gained more and more attention in the computer science community, and consequently many HPM-related tools have been developed. Some, like LIKWID likwid , PAPI mucci1999papi or the kernel interface perf_event perfevent , provide basic access to the HPM registers with some additional features. Furthermore, some higher-level analysis tools gather additional information by combining the HPM counts with application-level traces. Popular representatives of that analysis method are HPCToolkit adhianto2010hpctoolkit , PerfSuite kufrin2005perfsuite , Open|SpeedShop schulz2008open and Scalasca geimer2010scalasca . The intention of these tools is to provide the application developer with educated optimization hints. To this end, the tool developers use performance metrics that represent a possible performance limitation, such as saturated memory bandwidth or instruction paths. The hardware metrics may be combined with information on the application level, e.g. scaling characteristics, dependence of performance on the problem size, or static code analysis, to arrive at a signature. A performance signature then points towards one or more performance patterns, as described in patterns and refined in patterns-poster . The purpose of the patterns concept is to facilitate the identification of performance-limiting bottlenecks.

C. Guillen uses in guillen the term execution properties instead of performance pattern. She defines execution properties as a set of values gathered by monitoring and related thresholds. The properties are arranged in decision trees for compute- and memory-bound applications as well as trees related to I/O and other resources. This enables either a guided selection of the analysis steps to further identify performance limitations or automatic tool-based analysis. Based on the path in the decision tree, suggestions are given for what to look for in the application code. A combination of the structured performance engineering process in patterns-poster with the decision trees in guillen defines a good basis for (partially) automated performance analysis tools.

One main problem with HPM is that none of the main vendors for x86 processors guarantees event counts to be accurate or deterministic. Although many HPM interfaces exist, only little research has been done on validating the hardware performance events. However, users tend to trust the returned HPM counts and use them for decisions about code optimization. One should be aware that HPM measurements are only guideposts until the HPM events are known to have guaranteed behavior. Moreover, analytic performance models can only be validated if this is the case.

The most extensive event validation analysis was done by Weaver et al. Weaver2013 using a self-written assembly validation code. They test determinism and overcounting for the following events: retired instructions, retired branches, retired loads and stores, as well as retired floating-point operations including scalar, packed, and vectorized instructions. For validating the measurements, the dynamic binary instrumentation tool Pin Luk:2005:PBC:1064978.1065034 was used. The main target of that work was not to identify the right events needed to construct accurate performance metrics but to find the sources of non-determinism and over/undercounting. It gives hints on how to reduce over- or undercounting and how to identify deterministic events for a set of architectures.

D. Zaparanuks et al. 4919635 determined the error of retired instruction and CPU cycle counts with two microbenchmarks. Since the work was released before the perf_event interface perfevent was available for PAPI, they tested the now-deprecated interfaces perfmon2 eranian2006perfmon2 and perfctr pettersson2003perfctr as the basis for PAPI. They use an “empty” microbenchmark to define a default error using different counter access methods. For subsequent measurements they use a simple loop kernel with a configurable iteration count, define a model for the code, and compare the measurement results to the model. Moreover, they test whether the errors change for increasing measurement duration and for a varying number of programmed counter registers. Finally, they give suggestions as to which back-end should be used with which counter access pattern to obtain the most accurate results.

In the remainder of this section we recommend HPM event sets and related derived metrics that represent the signature of prototypical examples picked from the performance patterns defined in patterns-poster . In the following sections the accuracy of the chosen HPM events and their derived metrics is validated. Our work can be seen as a recommendation for tool developers as to which event sets best match the selected performance patterns and how reliable they are.

0.2 Identification of signatures for performance patterns

Performance patterns help to identify possible performance problems in an application. The measurement of HPM events is one part of the pattern’s signature. There are patterns that can be identified by HPM measurements alone, but commonly more information is required, e.g., scaling behavior or behavior with different data set sizes. Of course, some knowledge about the micro-architecture is also required to select the proper event sets for HPM as well as to determine the capabilities of the system. For x86 systems, HPM is not part of the instruction set architecture (ISA), thus besides a few events spanning multiple micro-architectures, each processor generation defines its own list of HPM events. Here we choose the Intel Haswell EP platform (E5-2695 v3) for HPM event selection and verification. The general approach can certainly be applied to other architectures.

In order to decide which measurement results are good or bad, the characteristics of the system must be known. C. Guillen established thresholds in guillen with four different approaches: hardware characteristics, expert knowledge about hardware behavior and performance optimization, benchmarks, and statistics. With decision trees but without source code knowledge it is possible to give some loose hints on how to further tune the code. With additional information about the source code and run-time behavior, the list of hints can be narrowed down further.

The present work is intended to be a recommendation for which HPM events provide the best information to specify the signatures of the selected performance patterns. The patterns target different behaviors of an application and/or the hardware and are therefore classified in three groups: bottlenecks, hazards and work-related patterns. The whole list of performance patterns with corresponding event sets for the Intel Haswell EP micro-architecture can be found at patterns-wiki . For brevity we restrict ourselves to three patterns: bandwidth saturation, load imbalance and false sharing of cache lines. For each pattern, we list possible signatures and shortcomings concerning the coverage of the pattern by the event set. The analysis method is comparable to the one of D. Zaparanuks et al. 4919635 , but uses a set of seven assembly benchmarks and synthetic higher-level benchmark codes that represent frequently used algorithms in scientific applications. Instead of comparing raw counts, however, we compare derived metrics that combine multiple counter values, as these metrics are commonly more interesting for tool users.

Bandwidth saturation

A very common bottleneck is bandwidth saturation in the memory hierarchy, notably at the memory interface but also in the L3 cache on earlier Intel designs. Proper identification of this pattern requires an accurate measurement of the data volume, i.e., the number of transferred cache lines between memory hierarchy levels. From data volume and run time one can compute transfer bandwidths, which can then be compared with measured or theoretical upper limits.

Starting with the Intel Nehalem architecture, Intel separates a CPU socket in two components, the core and the uncore. The core embodies the CPU cores and the L1 and L2 caches. The uncore covers the L3 cache as well as all attached components like memory controllers or the Intel QPI socket interconnect. The transferred data volume to/from memory can be monitored at two distinct uncore components. A CPU socket in an Intel Haswell EP machine has at most two memory controllers (iMC) in the uncore, each providing up to four memory channels. The other component is the Home Agent (HA) which is responsible for the protocol side of memory interactions.

Starting with the Intel Sandy Bridge micro-architecture, the L3 cache is segmented, with one segment per core; still, each core can make use of all segments. The data transfer volume between the L2 and L3 caches can be monitored in two different ways: One may either count the cache lines that are requested and written back by the L2 cache, or the lookups for data reads and victimized cache lines that enter the L3 cache segments. It is recommended to use the L2-related HPM events because the L3 cache is triggered by many components besides the L2 caches; moreover, the Intel Haswell EP architecture provides one L3 cache segment per core, and all segments need to be configured separately. Bandwidth bottlenecks between the L1 and L2 caches or between the L1 cache and registers are rare and thus ignored in this pattern.

Load imbalance

The main characteristic of this pattern is that different threads have to process different working sets between synchronization points. For data-centric workloads the data volume transferred between the L1 and L2 caches for each thread may be an indicator: since the working sets have different sizes, it is likely that smaller working sets also require less data. However, the assumption that working set size is related to transferred cache lines is not expressive enough to fully identify the pattern, since the amount of required data could be the same for each thread while the amount of in-core instructions differs. Retired instructions, on the other hand, are just as unreliable as data transfers because parallelization overhead often comprises spin-waiting loops that cause abundant instructions without doing “work.” Therefore, for better classification, it is desirable to count “useful” instructions that perform the actual work the application has to do. Neither of the two x86 vendors provides features to filter the instruction stream and count only specific instructions in a sufficiently flexible way. Moreover, the offered hardware events are not sufficient to overcome this shortcoming by covering most “useful” instructions like scalar/packed floating-point operations, SSE-driven calculations or string-related operations. Nevertheless, filtering on some instruction groups works on Intel Haswell systems, e.g. for long-latency instructions (div, sqrt, …) or AVX instructions. Consequently, it is recommended to measure the work instructions if possible, but the data transfers can also give a first insight.

False cache line sharing

False cache line sharing occurs when multiple cores access the same cache line while at least one is writing to it. The performance pattern thus has to identify bouncing cache lines between multiple caches. There are codes that require true cache line sharing, like producer/consumer codes, but we are referring to common HPC codes where cache line sharing should be minimal. In general, the detection of false cache line sharing is very hard when restricting the analysis space to hardware performance measurements only. The Intel Haswell micro-architecture offers two options for counting cache line transfers between private caches: There are L3 cache related UOPS events for intra- and inter-socket transfers, but the HPM event for intra-socket movement may undercount when SMT is enabled, according to erratum HSW150 in haswell-specs . The alternative is the offcore response unit. By setting the corresponding filter bits, the L3 hits with HITM snoops (hit a modified cache line) to other caches on the socket and the L3 misses with HITM snoops to remote sockets can be counted. The specification update haswell-specs also lists an erratum for the offcore response unit (HSW149), but the required filter options for shared cache lines are not mentioned in it. There are no HPM events to count the transfers of shared cache lines at the L2 cache. In order to clearly identify whether a code triggers true or false cache line sharing, further information like source code analysis is required.

0.3 Useful event sets

Pattern: Bandwidth saturation
Desired events: Data volume transferred to/from memory from/to the last level cache; data volume transferred between L2 and L3 cache
Available events: iMC:UNC_M_CAS_COUNT.RD, iMC:UNC_M_CAS_COUNT.WR, HA:UNC_H_IMC_READS.NORMAL, HA:UNC_H_BYPASS_IMC.TAKEN, HA:UNC_H_IMC_WRITES.ALL, L2_LINES_IN.ALL, L2_TRANS.L2_WB, CBOX:LLC_LOOKUP.DATA_READ, CBOX:LLC_VICTIMS.M_STATE

Pattern: Load imbalance
Desired events: Data volume transferred at all cache levels; number of “useful” instructions
Available events: L1D.REPLACEMENT, L2_TRANS.L1D_WB, L2_LINES_IN.ALL, L2_TRANS.L2_WB, AVX_INSTS.CALC, ARITH.DIVIDER_UOPS

Pattern: False sharing of cache lines
Desired events: All transfers of shared cache lines for the L2 and L3 cache; all transfers of shared cache lines between the last level caches of different CPU sockets
Available events: MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM, MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM, OFFCORE_RESPONSE:LLC_HIT:HITM_OTHER_CORE, OFFCORE_RESPONSE:LLC_MISS:REMOTE_HITM

Table 1: Desired events and available events for three performance patterns on the Intel Haswell EP micro-architecture. A complete list can be found at patterns-wiki

Table 1 defines a range of HPM event sets that are best suited for the described performance patterns regarding the HPM capabilities of the Intel Haswell EP platform. The assignment of HPM events to the pattern signatures is based on the Intel documentation (sdm , haswell-uncore ). Some events are not mentioned in the default documentation; they are taken from Intel’s performance monitoring database perfmondb . Although the events were selected with due care, the manufacturer gives no official guarantee for the accuracy of the counts. The sheer amount of performance monitoring related errata for the Intel Haswell EP architecture haswell-specs reduces the confidence even further. This encourages us all the more to validate the chosen event sets in order to provide tool developers and users with a reliable basis for their performance analysis.

0.4 Validation of performance patterns

Many performance analysis tools use the HPM features of the system as their main source of information about a running program. They assume event counts to be correct, and some even generate automated advice for the developer. Previous research in the field of HPM validation focuses on individual events like retired instructions but does not verify the results for other metrics that are essential for identifying performance patterns. Proper verification requires the creation of benchmark code that has well-defined and thoroughly understood performance features and, thus, predictable event counts. Since optimizing compilers can mutilate the high-level code, the feasible solutions are either to write assembly benchmarks or to analyze the assembly code created by the compiler.

The LIKWID tool suite likwid includes the likwid-bench microbenchmarking framework, which provides a set of assembly language kernels. They cover a variety of streaming access schemes. In addition the user can extend the framework by writing new assembly code loop bodies. likwid-bench takes care of loop counting, thread parallelism, thread placement, ccNUMA page placement and performance (and bandwidth) measurement. It does not, however, perform hardware event counting. For the HPM measurements we thus use likwid-perfctr, which is also a part of the LIKWID suite. It uses a simple command line interface but provides a comprehensive set of features for the users. Likwid-perfctr supports almost all interesting core and uncore events for the supported CPU types. In order to relieve the user from having to deal with raw event counts, it supports performance groups, which combine often used event sets and corresponding formulas for computing derived metrics (e.g., bandwidths or FLOP rates). Moreover, likwid-perfctr provides a Marker API to instrument the source code and restrict measurements to certain code regions. Likwid-bench already includes the calls to the Marker API in order to measure only the compute kernel. We have to manually correct some of the results of likwid-bench to represent the obvious and hidden data traffic (mostly write-allocate transfers) that may be measured with likwid-perfctr.

(Figure 1 plots the error in % for the kernels load, store, copy, stream, daxpy and triad, with data in the L2 cache, the L3 cache and memory (MEM), and additionally measured at the Home Agent (HA).)
Figure 1: Verification tests for cache and memory traffic using a set of micro benchmarking kernels written in assembly. We show the average, minimum and maximum error in the delivered HPM counts for a collection of streaming kernels with data in L2, L3 and in memory.

The first performance pattern for the analysis is the bandwidth saturation pattern. For this purpose, likwid-perfctr already provides three performance groups called L2, L3 and MEM likwid . A separate performance group was created to measure the traffic traversing the HA. Based on the raw counts, the groups define derived metrics for data volume and bandwidth. For simplicity we use the derived metric of total bandwidth for comparison, as it includes both the data volume in both directions and the run time. In Fig. 1 the average, minimum and maximum errors over repeated runs with respect to the exact bandwidth results are presented for seven streaming kernels and data in the L2 cache, the L3 cache and memory. The locality of the data in the cache hierarchy is ensured by streaming accesses to vectors fitting only in the relevant hierarchy level. The first two kernels (load and store) perform pure loading and storing of data to/from the CPU core to the selected cache level or the memory. A combination of both is applied in the copy test. The last tests are related to scientific computing and well understood: the linear combination of two vectors called daxpy calculating A(i) = A(i) + s·B(i), a stream triad with the formula A(i) = B(i) + s·C(i), and a vector triad computing A(i) = B(i) + C(i)·D(i).

The next pattern we look at is load imbalance. Since load imbalance requires a notion of “useful work,” we have to find a way to measure floating-point operations. Unfortunately, the Intel Haswell architecture lacks HPM events to fully represent FLOP/s. For the Intel Haswell architecture, Intel has documented an HPM event AVX_INSTS.ALL (Event 0xC6, Umask 0x07) which captures all AVX instructions including data movement and calculations perfmondb . With the help of likwid-bench we could further refine the event to count loads (Umask 0x01), stores (Umask 0x02) and calculations (Umask 0x04) separately. Consequently, the FLOP/s performed with AVX operations can be counted. All performance patterns that require filtering the instruction stream for specific instructions can use the event AVX_INSTS.CALC for floating-point operations using the AVX vectorization extension. Due to its importance, the event is verified using the likwid-bench utility with assembly benchmarks that are based on AVX instructions only. Note that the use of these specific Umasks is an undocumented feature and may change with processor generations or even mask revisions. Moreover, we have found no way to count SSE or scalar floating-point instructions.

(Figure 2 plots the error in % for AVX-only kernels such as triad_avx.)
Figure 2: Verification tests for the AVX floating point event using a set of microbenchmarking kernels with pure AVX code.

Fig. 2 shows the minimum, maximum and average error for measuring AVX FLOP/s. The average error over all tests is small, and even the maximal error is low enough that the event can be considered sufficiently accurate for pure AVX code. Using the counter with non-AVX codes always returns 0.

Coming back to performance patterns, we now verify the load imbalance pattern using an upper triangular matrix vector multiplication code running with two threads. Since the accuracy of the cache and memory traffic related HPM events has been verified already, we use the only available floating-point operation related event, AVX_INSTS.CALC. There is one shortcoming worth noting: If the code contains half-wide loads, the HPM event shows overcounting. The compiler frequently uses half-wide loads to reduce the probability of “split loads,” i.e., AVX loads that cross a cache line boundary if 32-byte alignment cannot be guaranteed. Experiments have shown that the event AVX_INSTS.CALC counts the vinsertf128 instruction as a calculation operation. In order to get reliable results, split AVX loads should be avoided. This is not a problem with likwid-bench, as no compiler is involved and the generated assembly code is under full control. The upper triangular matrix is split so that each of the two threads operates on half of the matrix rows, and the multiplication is performed repeatedly. The first thread processes the top rows and thus the larger share of elements, while the second one works on the remaining elements; since the rows of an upper triangular matrix shrink in length, this distribution results in a work load imbalance between the threads.

Event/Metric | Thread 0 | Thread 1 | Ratio | Error
Processed elements
AVX floating point ops
L2 data volume [GByte]
L3 data volume [GByte]
Memory data volume [GByte]
Table 2: Verification of the load imbalance pattern using an upper triangular matrix vector multiplication code

Table 2 lists the verification data for the code. The AVX calculation instruction count matches the work load ratio to a high degree. The L2 data volume has the highest error, mainly caused by repeated fetching of the input and output vectors, which is not included in the work load model. This behavior also occurs for the L3 and memory data volumes, but to a lesser extent, as the cache lines of the input vector commonly stay in the caches. In order to get the memory data volume per core, the offcore response unit was used.

The false sharing of cache lines pattern is difficult to verify, as it is not easy to write code that shows a predictable number of inter-core cache line transfers. A minimal amount of shared cache lines exists in almost every code; thus, a nonzero HPM count alone cannot be accepted as a clear signature. To measure the behavior in a controlled way, a producer/consumer code was written; i.e., we verify the events intended for falsely shared cache lines using a code with true cache line sharing. The producer writes to a consecutive range of memory that is read afterwards by the consumer. In the next iteration the producer uses the subsequent range of memory to avoid invalidation traffic. The memory range is aligned so that a fixed number of cache lines is used in every step. The producer and consumer perform a fixed number of iterations in each run. For synchronizing the two threads, a simple busy-waiting loop spins on a shared variable with long enough sleep times to avoid high access traffic for the synchronization variable. When using pthread condition variables and a mutex lock instead, the measured values are completely unstable.

Amount of shared cache lines per step | Transferred cache lines according to model | Avg. amount of intra-socket transferred shared cache lines | Error [%] | Avg. amount of inter-socket transferred shared cache lines | Error [%]
Table 3: Verification tests for false sharing of cache lines using a producer/consumer code. The producer and consumer thread are located on the same CPU socket.

Table 3 shows the measurements for the HPM events fitting best to the traffic caused by false sharing of cache lines. The table lists the amount of cache lines that are written by the producer thread. Since the consumer reads all these lines, the amount of transferred cache lines should be in the same range. The measurements using the events in Tab. 1 show a large discrepancy between the counts predicted by the model and the measured transfers. For small counts of transferred cache lines, the results are likely to be distorted by the shared synchronization variable, but the accuracy should improve with increasing transfer sizes. Since erratum HSW150 in haswell-specs states a possible undercounting, the intra-socket measurements could be too low. But even when scaling up the measurements, the HPM event for intra-socket cache line sharing is not accurate.

For the inter-socket false sharing, the threads are distributed over the two CPU sockets in the system. The results in Tab. 3 show similar behavior as in the intra-socket case. The HPM events for cache line sharing provide a qualitative classification for the performance pattern’s signature but not a quantitative one. The problem is mainly to define a threshold for the false-sharing rate of the system and application. Further research is required to create a suitable signature for this performance pattern.

0.5 Conclusion

The performance patterns defined in patterns-poster provide a comprehensive collection for analyzing possible performance degradation on the node level. They address possible hardware bottlenecks as well as typical inefficiencies in parallel programming. We have listed suitable event sets to identify the bandwidth saturation, load imbalance, and false sharing patterns with HPM on the Intel Haswell architecture. Unfortunately, the hardware does not provide all required events, such as scalar or packed floating-point operations, or the events are not accurate enough, as for the sharing of cache lines at the L3 level. Moreover, a more fine-grained and correct filtering of instructions would be helpful for pattern-based performance analysis.

Using a selection of streaming loop kernels, we found the error for the bandwidth-related events to be small on average, with the largest undercounting occurring for the L3 traffic. The load imbalance pattern was verified using an upper triangular matrix vector multiplication. Although the L1 to L2 cache traffic shows the largest error, the results reflect the actual load imbalance, indicating the usefulness of the metrics. Moreover, we have managed to identify filtered events that can accurately count AVX floating-point operations under some conditions. FLOP/s and traffic data are complementary information for identifying load imbalance. The verification of the HPM signature for the false sharing pattern failed due to large deviations from the expected event counts for the two events used. More research is needed here to arrive at a useful procedure, especially for distinguishing unwanted false cache line sharing from traffic caused by intended updates.

The remaining patterns defined in patterns-poster need to be verified as well to provide a well-defined HPM analysis method for performance patterns ready to be included in performance analysis tools. We provide continuously updated information about suitable events for pattern identification in the Wiki on the LIKWID website111https://github.com/RRZE-HPC/likwid.

Bibliography

  • (1) Adhianto, L., Banerjee, S., Fagan, M., Krentel, M., Marin, G., Mellor-Crummey, J., and Tallent, N. R. HPCToolkit: Tools for performance analysis of optimized parallel programs. Concurrency and Computation: Practice and Experience 22, 6 (2010), 685–701.
  • (2) Eranian, S. Perfmon2: a flexible performance monitoring interface for Linux. In Ottawa Linux Symposium (2006), Citeseer, pp. 269–288.
  • (3) Geimer, M., Wolf, F., Wylie, B. J., Ábrahám, E., Becker, D., and Mohr, B. The Scalasca performance toolset architecture. Concurrency and Computation: Practice and Experience 22, 6 (2010), 702–719.
  • (4) Gleixner, T., and Molnar, I. Linux 2.6.32: perf_event.h. http://lwn.net/Articles/310260/, Dec. 2008.
  • (5) Guillen, C. Knowledge-based performance monitoring for large scale HPC architectures. Dissertation (2015), http://mediatum.ub.tum.de?id=1237547.
  • (6) Intel. Intel 64 and IA-32 Architectures Software Developer Manuals. http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html, June 2015.
  • (7) Intel. Intel Open Source Technology Center for PerfMon. https://download.01.org/perfmon/, Sept. 2015.
  • (8) Intel. Intel Xeon Processor E3-1200 v3 Product Family Specification Update. http://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/xeon-e3-1200v3-spec-update.pdf, Aug. 2015.
  • (9) Intel. Intel Xeon Processor E5 v3 Family Uncore Performance Monitoring. https://www-ssl.intel.com/content/dam/www/public/us/en/zip/xeon-e5-v3-uncore-performance-monitoring.zip, June 2015.
  • (10) Kufrin, R. PerfSuite: An accessible, open source performance analysis environment for Linux. In 6th International Conference on Linux Clusters: The HPC Revolution (2005), vol. 151, Citeseer, p. 05.
  • (11) Luk, C.-K., Cohn, R., Muth, R., Patil, H., Klauser, A., Lowney, G., Wallace, S., Reddi, V. J., and Hazelwood, K. Pin: Building customized program analysis tools with dynamic instrumentation. SIGPLAN Not. 40, 6 (June 2005), 190–200.
  • (12) Mucci, P. J., Browne, S., Deane, C., and Ho, G. PAPI: A portable interface to hardware performance counters. In Proceedings of the Department of Defense HPCMP Users Group Conference (1999), pp. 7–10.
  • (13) Pettersson, M. Linux x86 performance-monitoring counters driver, 2003.
  • (14) Roehl, T. Performance patterns for the Intel Haswell EP/EN/EX architecture. https://github.com/RRZE-HPC/likwid/wiki/PatternsHaswellEP, Sept. 2015.
  • (15) Ryan, B. Inside the Pentium. BYTE Magazine Volume 18, Number 6 (1993), 102–104.
  • (16) Schulz, M., Galarowicz, J., Maghrak, D., Hachfeld, W., Montoya, D., and Cranford, S. Open|SpeedShop: An open source infrastructure for parallel performance analysis. Scientific Programming 16, 2-3 (2008), 105–121.
  • (17) Treibig, J., Hager, G., and Wellein, G. LIKWID: A lightweight performance-oriented tool suite for x86 multicore environments. In Proceedings of PSTI2010, the First International Workshop on Parallel Software Tools and Tool Infrastructures (San Diego CA, 2010).
  • (18) Treibig, J., Hager, G., and Wellein, G. Pattern driven node level performance engineering. http://sc13.supercomputing.org/sites/default/files/PostersArchive/tech_posters/post254s2-file2.pdf, Nov. 2013. SC13 poster.
  • (19) Treibig, J., Hager, G., and Wellein, G. Performance patterns and hardware metrics on modern multicore processors: Best practices for performance engineering. In Euro-Par 2012: Parallel Processing Workshops, vol. 7640 of Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2013, pp. 451–460.
  • (20) Weaver, V., Terpstra, D., and Moore, S. Non-determinism and overcount on modern hardware performance counter implementations. In Performance Analysis of Systems and Software (ISPASS), 2013 IEEE International Symposium on (April 2013), pp. 215–224.
  • (21) Zaparanuks, D., Jovic, M., and Hauswirth, M. Accuracy of performance counter measurements. In Performance Analysis of Systems and Software, 2009. ISPASS 2009. IEEE International Symposium on (April 2009), pp. 23–32.