Simplifying heterogeneous migration between x86 and ARM machines

12/02/2021
by Nikolaos Mavrogeorgis, et al.

Heterogeneous computing is the strategy of deploying multiple types of processing elements within a single workflow, allowing each to perform the tasks to which it is best suited. To fully harness the power of heterogeneity, we want to be able to dynamically schedule portions of the code and migrate processes at runtime between the architectures. However, migration involves transforming the execution state of the process, which induces a significant overhead that offsets the benefits of migrating in the first place. The goal of my PhD is to develop techniques that allow applications to run on heterogeneous cores under a shared programming model, and to tackle the general problem of creating a uniform address space between processes running on these highly diverse systems, which would greatly simplify the migration process. We will start by examining a common stack layout between x86 and ARM binaries, focusing on these two widespread architectures, and later try to generalize to other, more diverse execution environments. On top of that, the performance and energy efficiency of this approach will be benchmarked against current approaches.

1. Introduction

In both industry and academia, it is widely believed that achieving higher performance, energy efficiency, and lower cost requires attention to specialized hardware (Jouppi et al., 2017). Combining this special-purpose hardware (e.g. GPUs, TPUs, accelerators) with general-purpose hardware (e.g. CPUs) and reconfigurable integrated circuits (e.g. FPGAs) leads to heterogeneous architectures, allowing mixed workloads to be executed on the most suitable hardware (AMD, 2011). The different processing units may share the same ISA, as in the big.LITTLE architecture (Jeff, 2012), or employ heterogeneous ISAs. A heterogeneous-ISA approach yields even more diversity to exploit, such as different code density, register depth and width, or instruction decode complexity (Venkat and Tullsen, 2014). For example, smartNIC and smartSSD processors are increasingly adopted in the data center (Le et al., 2017; Kang et al., 2013), usually paired with x86 host processors, which could lead to large-scale live migration between architectures.

Whatever the type of heterogeneity supported, a shared memory programming (SMP) model is usually the most convenient solution for the programmer. Moreover, bus standards like OpenCAPI (Consortium, 2021) have emerged, aiming to provide improved coherency and synchronization in heterogeneous systems and thus favoring SMP models. Therefore, we focus on SMP models in heterogeneous-ISA environments.

An approach like Popcorn (Barbalace et al., 2017) allows applications to migrate at runtime at equivalence points of the computation (Lyerly, 2016), but first requires transforming the program's stack between architectures. This adds an overhead to migration that grows linearly with the number of stack frames and variables, and also demands additional engineering effort to build the transformation toolchain. Moreover, the technique generates two fat binaries, carrying the metadata necessary for migration, which are more difficult to deploy on embedded systems. To harness the power of the various heterogeneous techniques, process migration at runtime must be efficient; otherwise, the migration costs will offset the benefits of heterogeneity.

One effective way to exploit heterogeneity would be to circumvent stack transformation at runtime by imposing a uniform stack layout across the target architectures. Our current focus is on the x86-64 (Devices, 2012) and ARMv8 (Limited, 2020) target ISAs. The first fundamental part of this idea is to keep the Application Binary Interface (ABI) the same between the architectures. On top of that, we need to explore the effect of various optimizations (target-dependent or not) that affect spills and refills to memory. To this end, we modify LLVM's backend infrastructure (Lattner, 2008) to control the emitted assembly instructions, and tune the optimization flags accordingly.

The first difference from existing works like big.LITTLE is that we focus on heterogeneous-ISA architectures instead of homogeneous ones. In the context of ISA heterogeneity, some current approaches require transforming the stack at runtime from one architecture to another in order to migrate a thread, as in Popcorn (Barbalace et al., 2015) or Venkat's work (Bhat et al., 2016). We will try to avoid this transformation. Ultimately, our goal is to extend this work to multiple architectures, instead of relying solely on the 64-bit versions of x86 and ARM or on other commonly used combinations, as described in Section 5.

2. Overview of the proposed work

Our initial focus is to find ways to overcome the stack transformation overhead at runtime. The way an application interacts with its stack, through load and store instructions, is determined by various factors, such as the ABI, the optimizations performed at compile time, and the target-dependent code generation.

The first step was to modify LLVM's backend for the x86-64 and AArch64 targets. First of all, the number of registers available in an architecture (register depth) affects the register pressure (i.e. the number of simultaneously live variables at an instruction) and, consequently, the frequency of spills and refills to the stack (Braun and Hack, 2009). Therefore, we modified the architecture backends to expose the same register depth. We have also mapped the registers one-to-one, taking into consideration their occasional target-specific functionality, e.g. the platform register in ARMv8. Moreover, the procedure calling convention dictates the number of registers that will be spilled at the boundary of the stack frame as arguments. In the same way, it enumerates the callee-saved registers, which will also be saved in the stack frame if they are to be modified. Hence, we implemented the same calling conventions for both targets.

However, even if these differences are completely resolved, we still have to observe the dynamic behaviour of the executed application, and its interaction with the stack. Ideally, we aim to generalize our solution to other architectures, with different register depths, pointer sizes and other architecture specifics.

3. Preliminary results

A main concern is to ensure that forcing the characteristics of one architecture onto another does not significantly impact performance during program execution, given that we lose some architecture-specific optimizations. For instance, we have removed 15 of the 32 general-purpose registers (GPRs) of the AArch64 LLVM target to match the 17 GPRs of the x86-64 architecture (including the stack pointer). So far, we have run experiments only on integer benchmarks written in C, focusing on compute- and memory-intensive workloads. Running the NAS Parallel Benchmarks (Bailey et al., 1991), we measured the overhead of removing the registers on AArch64; Figure 1 shows sample plots for one benchmark. Overall, we observed no more than 6% overhead at the -O1 optimization level across the benchmark suite.

Figure 1. BT results for a modified AArch64 target
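Readers who want to approximate a reduced AArch64 register depth without rebuilding LLVM can use clang's -ffixed-xN flags, which reserve individual GPRs so the allocator never uses them. The invocation below is illustrative only (bt.c stands in for a benchmark source file; our actual experiments modify the backend itself):

```shell
# Approximation of a reduced register depth: reserve seven AArch64 GPRs
# from the register allocator via clang's -ffixed-xN flags
# (available in recent clang releases).
clang --target=aarch64-linux-gnu -O1 \
      -ffixed-x9 -ffixed-x10 -ffixed-x11 -ffixed-x12 \
      -ffixed-x13 -ffixed-x14 -ffixed-x15 \
      -S bt.c -o bt.s
```

Inspecting the emitted assembly at different -O levels is also a quick way to observe how spill frequency changes with the available registers.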

4. Work to be done

We now describe the remaining changes needed in the targets' LLVM backends. Where a register in one architecture has no counterpart in the other (e.g. there is no platform register in x86-64), we plan either to emulate the same behaviour or to ignore it, if it plays no important role in our prototype.

In addition, there are some fundamental differences between our targets that lead to stack mismatches during execution. For example, the x86 architecture is CISC and includes operations between memory and registers. In contrast, the ARM architecture is RISC and follows a load-compute-store philosophy, which creates additional register pressure, especially now that we have reduced the number of ARMv8 registers to match the number of x86-64 registers. Therefore, we need to empirically identify stack layout differences at runtime and pinpoint their causes.

Following the above, we may have to mitigate those differences, either at the intermediate representation (IR) level (e.g. with a pass like mem2reg, to reduce unnecessary loads) or at a target-dependent level. If tuning the flags is not sufficient, we may again need to modify the code generation of the respective backend to emulate the desired behaviour.
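As an example of IR-level mitigation, mem2reg is a standard LLVM pass that promotes stack slots (allocas) to SSA values, eliminating many of the loads and stores whose scheduling differs between targets. The commands below are illustrative (foo.c is a placeholder source file):

```shell
# Emit unoptimized LLVM IR, where every local variable lives in a stack
# slot (alloca) accessed through explicit loads and stores.
# -disable-O0-optnone is needed so later passes are not skipped.
clang -O0 -Xclang -disable-O0-optnone -S -emit-llvm foo.c -o foo.ll

# Run only the mem2reg pass: promotable allocas become SSA values,
# removing the corresponding stack traffic.
opt -passes=mem2reg foo.ll -S -o foo.promoted.ll
```

Diffing foo.ll against foo.promoted.ll shows exactly which stack accesses the pass removed, which helps attribute remaining mismatches to target-dependent code generation.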

After this work has matured enough, we will try to generalize our approach by analyzing the fundamental properties of an architecture that shape its stack at runtime. Consequently, we will examine the feasibility of an architecture design and/or the specification of a more universal ABI that allows us to expand to more diverse targets, like CPU/DPU systems. To address architectures with a smaller number of registers, we may have to emulate some registers by consuming more stack space.

Finally, we will verify experimentally that implementing a similar address space does not cause performance issues when executing on only one architecture.

5. Related Work

Having a unified address space in heterogeneous environments is not a new idea. AMD originally started the Heterogeneous System Architecture (HSA) framework, supporting a single address space accessible to both CPU and GPU through virtual address translation (Kyriazis, 2012). Still, it is the programmer's responsibility to identify and schedule the workloads statically. In the Popcorn compiler toolchain and state transformation runtime (Barbalace et al., 2017), a separate binary is created for each architecture and augmented by the middle- and back-end LLVM passes with the metadata necessary for state transformation. When a migration occurs, the runtime converts all function activations from the source-ISA format to the destination-ISA format (Lyerly, 2016). In another line of work, Venkat (Venkat and Tullsen, 2014) and DeVuyst (DeVuyst et al., 2012) use dynamic binary translation on a single fat binary until it reaches a migration point, where state transformation takes place. These last efforts also supported smaller register widths, like 32 bits, but we foresee that architectures with at least 64-bit registers will dominate, hence we focus on them for now.

6. Conclusion

This paper presents the outline of our proposed approach for simplifying heterogeneous-ISA migration. Our plan is to devise solutions that create a uniform address space between the processes, while eliminating the overhead of state transformation at runtime. Preliminary results indicate that enforcing a similar stack layout, through an identical ABI between the x86-64 and AArch64 targets, does not deteriorate performance significantly, while it paves the way for creating truly identical stack layouts. An ultimate goal is to extend this work to ease the portability of migration to targets other than x86-64 and ARMv8.

Acknowledgements.
The work has been supervised by Dr Antonio Barbalace at the University of Edinburgh. The PhD started in 2020 and thesis submission is expected in 2023.

References

  • AMD (2011) AMD. 2011. Apu Overview. (2011), 1 – 2.
  • Bailey et al. (1991) D. H. Bailey, E. Barszcz, J. T. Barton, D. S. Browning, R. L. Carter, L. Dagum, R. A. Fatoohi, P. O. Frederickson, T. A. Lasinski, R. S. Schreiber, H. D. Simon, V. Venkatakrishnan, and S. K. Weeratunga. 1991. The NAS Parallel Benchmarks. International Journal of High Performance Computing Applications 5, 3 (1991), 63–73. https://doi.org/10.1177/109434209100500306
  • Barbalace et al. (2017) Antonio Barbalace, Robert Lyerly, Christopher Jelesnianski, Anthony Carno, Ho-Ren Chuang, Vincent Legout, and Binoy Ravindran. 2017. Breaking the boundaries in heterogeneous-ISA datacenters. ACM SIGARCH Computer Architecture News 45, 1 (2017), 645–659.
  • Barbalace et al. (2015) Antonio Barbalace, Marina Sadini, Saif Ansary, Christopher Jelesnianski, Akshay Ravichandran, Cagil Kendir, Alastair Murray, and Binoy Ravindran. 2015. Popcorn: bridging the programmability gap in heterogeneous-ISA platforms. In Proceedings of the Tenth European Conference on Computer Systems. 1–16.
  • Bhat et al. (2016) Sharath K Bhat, Ajithchandra Saya, Hemedra K Rawat, Antonio Barbalace, and Binoy Ravindran. 2016. Harnessing energy efficiency of heterogeneous-ISA platforms. ACM SIGOPS Operating Systems Review 49, 2 (2016), 65–69.
  • Braun and Hack (2009) Matthias Braun and Sebastian Hack. 2009. Register spilling and live-range splitting for SSA-form programs. Lecture Notes in Computer Science 5501 LNCS (2009), 174–189. https://doi.org/10.1007/978-3-642-00722-4_13
  • Consortium (2021) OpenCAPI Consortium. 2021. OpenCAPI Open Interface Architecture. https://opencapi.org/about
  • Devices (2012) Advanced Micro Devices. 2012. AMD64 Technology AMD64 Architecture Programmer's Manual Volume 4: 128-Bit and 256-Bit. 4, 26568 (2012).
  • DeVuyst et al. (2012) Matthew DeVuyst, Ashish Venkat, and Dean M. Tullsen. 2012. Execution Migration in a Heterogeneous-ISA Chip Multiprocessor. SIGPLAN Not. 47, 4 (March 2012), 261–272. https://doi.org/10.1145/2248487.2151004
  • Jeff (2012) Brian Jeff. 2012. big.LITTLE system architecture from ARM: saving power through heterogeneous multiprocessing and task context migration. In Proceedings of the 49th Annual Design Automation Conference. ACM, 1143–1146.
  • Jouppi et al. (2017) Norman P. Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, Rick Boyle, Pierre-luc Cantin, Clifford Chao, Chris Clark, Jeremy Coriell, Mike Daley, Matt Dau, Jeffrey Dean, Ben Gelb, Tara Vazir Ghaemmaghami, Rajendra Gottipati, William Gulland, Robert Hagmann, C. Richard Ho, Doug Hogberg, John Hu, Robert Hundt, Dan Hurt, Julian Ibarz, Aaron Jaffey, Alek Jaworski, Alexander Kaplan, Harshit Khaitan, Daniel Killebrew, Andy Koch, Naveen Kumar, Steve Lacy, James Laudon, James Law, Diemthu Le, Chris Leary, Zhuyuan Liu, Kyle Lucke, Alan Lundin, Gordon MacKean, Adriana Maggiore, Maire Mahony, Kieran Miller, Rahul Nagarajan, Ravi Narayanaswami, Ray Ni, Kathy Nix, Thomas Norrie, Mark Omernick, Narayana Penukonda, Andy Phelps, Jonathan Ross, Matt Ross, Amir Salek, Emad Samadiani, Chris Severn, Gregory Sizikov, Matthew Snelham, Jed Souter, Dan Steinberg, Andy Swing, Mercedes Tan, Gregory Thorson, Bo Tian, Horia Toma, Erick Tuttle, Vijay Vasudevan, Richard Walter, Walter Wang, Eric Wilcox, and Doe Hyun Yoon. 2017. In-Datacenter Performance Analysis of a Tensor Processing Unit. In Proceedings of the 44th Annual International Symposium on Computer Architecture (Toronto, ON, Canada) (ISCA '17). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3079856.3080246
  • Kang et al. (2013) Y. Kang, Y. Kee, E. L. Miller, and C. Park. 2013. Enabling cost-effective data processing with smart SSD. In 2013 IEEE 29th Symposium on Mass Storage Systems and Technologies (MSST). 1–12. https://doi.org/10.1109/MSST.2013.6558444
  • Kyriazis (2012) George Kyriazis. 2012. Heterogeneous system architecture: A technical review. AMD, Inc (2012), 1–18. http://scholar.google.com/scholar?hl=en&btnG=Search&q=intitle:Heterogeneous+System+Architecture+:+A+Technical+Review#0
  • Lattner (2008) Chris Lattner. 2008. Introduction to the llvm compiler system. In Proceedings of International Workshop on Advanced Computing and Analysis Techniques in Physics Research, Erice, Sicily, Italy. 19.
  • Le et al. (2017) Yanfang Le, Hyunseok Chang, Sarit Mukherjee, Limin Wang, Aditya Akella, Michael M. Swift, and T. V. Lakshman. 2017. UNO: Unifying Host and Smart NIC Offload for Flexible Packet Processing. In Proceedings of the 2017 Symposium on Cloud Computing (Santa Clara, California) (SoCC ’17). Association for Computing Machinery, New York, NY, USA, 506–519. https://doi.org/10.1145/3127479.3132252
  • Limited (2020) Arm Limited. 2020. Armv8-A Instruction Set Architecture Non-Confidential Proprietary Notice. 1 (2020).
  • Lyerly (2016) Robert F. Lyerly. 2016. Popcorn linux: A compiler and runtime for state transformation between heterogeneous-ISA architectures.
  • Venkat and Tullsen (2014) Ashish Venkat and Dean M Tullsen. 2014. Harnessing ISA diversity: Design of a heterogeneous-ISA chip multiprocessor. In 2014 ACM/IEEE 41st International Symposium on Computer Architecture (ISCA). IEEE, 121–132.