What Supercomputing Will Be Like in the Coming Years

Mehmet Dalkilic
DOI: 10.4018/978-1-7998-7156-9.ch020

Abstract

This chapter is an abridged "vision statement" on what supercomputing will become in the coming years. The main thrust of the argument is that most of the problem lies in the trafficking of data, not in the computation itself. There needs to be a worldwide effort to put into place a means of moving data efficiently and effectively. Further, there likely needs to be a fundamental shift in our model of computation: away from a model in which the computation is stationary and the data moves to it, and toward moving the computation to the data, or even computing on the data while it is in motion.

Introduction

Indiana University has a history of possessing some of the most powerful supercomputers of any university worldwide. It is therefore fitting to exercise some visioning into what the future of supercomputing holds for faculty who routinely use a supercomputer. The newest supercomputer, named Big Red 200 (BR200) to coincide with Indiana University's 200th anniversary, is among the first next-generation supercomputers based on HPE's Cray Shasta architecture. BR200 operates at nearly 6 petaFLOPS which, at the time of this writing, makes it the 32nd most powerful supercomputer in operation worldwide and the most powerful university-owned and operated AI supercomputer in the U.S.

A Simple Observation From Feynman

Richard Feynman (1918-1988) was a Nobel Prize-winning physicist who worked not only in physics but in other areas as well, such as computing (Feynman et al., 1998). In that work, he makes a prescient prediction, which we paraphrase here with t denoting time:

T_move(t) / T_compute(t) → ∞ as t → ∞

where T_move(t) is the time needed to move the data and T_compute(t) is the time needed to carry out the computation on the machines available at time t.

In words: as scientists move toward building increasingly more powerful computing machines, the time to move the data, not the time of the computation, will become the limiting factor. This problem was already being noted several years ago (e.g., Coughlin, 2018).
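
To make the shape of this prediction concrete, here is a minimal back-of-envelope sketch in Python (not from the chapter). The job size, the BR200-class compute rate mentioned in the Introduction, the 100 Gbit/s link, and the annual growth factors for compute and bandwidth are all illustrative assumptions; the only point is that when compute throughput grows faster than the bandwidth feeding it, the ratio of data-movement time to computation time keeps climbing.

```python
# Illustrative sketch (not from the chapter): if peak compute throughput grows
# faster than the bandwidth available to feed it, the ratio of data-movement
# time to computation time grows without bound, which is the paraphrased
# prediction above. The rates and growth factors below are assumptions chosen
# only to make the trend visible.

def time_to_compute(flop, flops_per_s):
    """Seconds spent computing `flop` floating-point operations."""
    return flop / flops_per_s

def time_to_move(n_bytes, bytes_per_s):
    """Seconds spent moving `n_bytes` of data at a given bandwidth."""
    return n_bytes / bytes_per_s

# Hypothetical job: 1e18 FLOP applied to 1e15 bytes (1 PB) of data.
flop_needed = 1e18
bytes_needed = 1e15

compute_rate = 6e15      # ~6 petaFLOPS, roughly BR200-class (from the text)
network_rate = 1.25e10   # assumed 100 Gbit/s link ~= 1.25e10 bytes/s

# Assumed annual growth: compute ~1.4x/year, bandwidth ~1.2x/year (illustrative).
for year in range(0, 21, 5):
    c = compute_rate * 1.4 ** year
    b = network_rate * 1.2 ** year
    t_c = time_to_compute(flop_needed, c)
    t_m = time_to_move(bytes_needed, b)
    print(f"year +{year:2d}: T_move/T_compute = {t_m / t_c:8.1f}")
# The printed ratio climbs steadily: the data, not the arithmetic, sets the pace.
```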

Figure 1. Amount of data produced and predicted vs. when current machine learning and AI algorithms were developed. It is estimated that all human speech ever spoken could be captured in roughly 42 zettabytes.


Figure 1 shows a projected timeline of the disparity between the growth of data, in zettabytes, and the dates when the current, most popular AI/ML algorithms were created. The difference is startling when one realizes that the most popular data reduction technique, PCA (Principal Component Analysis), is 120 years old. Even with Moore's Law remaining reasonably true for the next couple of years, the fact remains that data growth will continue. The traditional approach of moving the data to the computation, rather than moving (possibly multiple) computations to the data, will have to change. The infrastructure researchers currently have can move data at nearly two-thirds the speed of light over short distances, yet, almost amusingly, physically transporting the data remains faster than sending it electronically. What does this mean for the supercomputer?
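
The closing remark, that physically transporting data can still beat sending it electronically, is easy to sanity-check. The short Python sketch below is illustrative only: the 100 Gbit/s link, the 70% effective utilization, and the fixed 24-hour courier time are assumptions rather than figures from the chapter. Under those assumptions the network wins for tens of terabytes, but somewhere in the hundreds of terabytes shipping the drives starts to win, and at petabyte scale it is not close.

```python
# Rough illustration of the "physical transport beats the network" remark above.
# All numbers are assumptions for the sake of the comparison, not measurements:
# a 100 Gbit/s link with ~70% effective utilization vs. overnight courier
# shipment of storage drives (fixed ~24 h door-to-door, regardless of volume).

def network_hours(terabytes, gbit_per_s=100.0, efficiency=0.7):
    """Hours to push `terabytes` through a link at the given effective rate."""
    bits = terabytes * 8e12                        # TB -> bits
    seconds = bits / (gbit_per_s * 1e9 * efficiency)
    return seconds / 3600.0

SHIPPING_HOURS = 24.0  # assumed courier time, independent of data volume

for tb in (1, 10, 100, 1000, 10000):
    net = network_hours(tb)
    winner = "network" if net < SHIPPING_HOURS else "shipping"
    print(f"{tb:>6} TB: network {net:9.1f} h vs shipping {SHIPPING_HOURS:.0f} h -> {winner}")
```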
