Finally, how many efficiencies supercomputers have? And, what do they measure?

01/05/2020
by János Végh, et al.

Using an extremely large number of processing elements in computing systems leads to unexpected phenomena, such as different efficiencies of the same system for different tasks, that cannot be explained within the framework of the classical computing paradigm. The simple, non-technical model introduced here, which nevertheless accounts for the temporal behavior of the components, enables us to set up the framework and formalism needed to explain these unexpected experiences in supercomputing. Introducing temporal behavior into computer science also explains why only extreme-scale computing has revealed these limitations. The paper shows that the degradation of efficiency in parallelized sequential systems is a natural consequence of the classical computing paradigm rather than an engineering imperfection. The workload that supercomputers run is largely responsible for wasting energy, as well as for limiting the size and type of tasks they can solve. Case studies provide insight into how different contributions compete to dominate the resulting payload performance of a computing system, and how enhancements in interconnection technology made computation plus communication the dominant factor in defining the efficiency of supercomputers. Our model also enables us to derive predictions about supercomputer performance limitations for the near future, and it provides hints for enhancing supercomputer components. The phenomena experienced in large-scale computing show interesting parallels with phenomena encountered in science more than a century ago, the study of which led to the development of modern science.
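
As a rough illustration of why efficiency degradation only becomes visible at extreme scale, the sketch below uses the classic Amdahl-style serial/parallel split; this is an assumption standing in for the paper's actual temporal model, which the abstract does not spell out.

```python
# Illustrative sketch only: an Amdahl-style efficiency model is assumed here,
# not the paper's own formalism.
# Speedup S(N) = 1 / ((1 - p) + p / N), efficiency E(N) = S(N) / N,
# where p is the parallelizable fraction of the workload and N the number of
# processing elements.

def efficiency(p: float, n: int) -> float:
    """Efficiency of N processing elements for a workload with parallel fraction p."""
    speedup = 1.0 / ((1.0 - p) + p / n)
    return speedup / n

if __name__ == "__main__":
    # Even with a 99.999% parallel fraction, efficiency is near-perfect at small
    # scale but collapses once N reaches supercomputer-sized processor counts.
    for n in (10, 1_000, 100_000, 10_000_000):
        print(f"N = {n:>10,}  E = {efficiency(0.99999, n):.4f}")
```

Running this prints efficiencies of roughly 1.00, 0.99, 0.50, and 0.01 for the four processor counts, which is consistent with the abstract's point that the limitation is inherent to parallelizing sequential workloads rather than an engineering imperfection.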
