Recent Trends in High Speed Computing: Past, Present, and Future

Copyright: © 2024 | Pages: 11
DOI: 10.4018/979-8-3693-3132-3.ch019

Abstract

In recent years, the modelling and management of internet congestion has been an active area of study. Following the development of the Transmission Control Protocol (TCP) and of mathematical models that treat the underlying resource allocation in congestion control as an optimization problem, this line of work has focused on the modelling and analysis of congestion control methods. On this basis, subsequent research has proposed and evaluated numerous primal, dual, and primal-dual algorithms.
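The optimization view can be made concrete with a small simulation. The following Python sketch implements a Kelly-style primal rate update, in which each source adjusts its sending rate in proportion to its willingness-to-pay minus the congestion price accumulated along its route. The routing matrix, the penalty-style price function, and all parameter values here are illustrative assumptions, not the specific models analysed in the chapter.

```python
import numpy as np

def primal_rate_control(routes, capacities, w, kappa=0.01, steps=5000):
    """Discrete-time simulation of a Kelly-style primal algorithm (illustrative).

    routes:     0/1 routing matrix R (links x sources); R[l, s] = 1 if source s uses link l.
    capacities: capacity c_l of each link.
    w:          willingness-to-pay weight of each source.
    Each source s updates x_s += kappa * (w_s - x_s * route price), where the
    route price is the sum of a simple penalty-style price over its links.
    """
    n_links, n_sources = routes.shape
    x = np.full(n_sources, 0.1)                 # initial sending rates
    for _ in range(steps):
        y = routes @ x                          # aggregate load on each link
        # Penalty-function link price: grows as load approaches/exceeds capacity.
        price = np.maximum(y - capacities, 0.0) + 0.1 * y / capacities
        q = routes.T @ price                    # total price seen along each route
        x += kappa * (w - x * q)                # primal rate update
        x = np.maximum(x, 1e-6)                 # keep rates positive
    return x

# Hypothetical topology: two sources sharing one link, plus one private link each.
R = np.array([[1, 1],
              [1, 0],
              [0, 1]], dtype=float)
c = np.array([1.0, 1.0, 1.0])
print(primal_rate_control(R, c, w=np.array([1.0, 2.0])))
```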

Introduction

Many fields depend on simulating interacting objects: computer graphics and virtual reality, which connect people via games and link computer science to medicine (for example, virtual environments used to practise surgery); robotics, which connects objects and robots; and scientific simulations, which connect computing and science. Despite its widespread use, interaction counting is a significant bottleneck in many of these systems. It takes one of two forms: 1) interactions may only matter within subsets of objects (e.g. collisions, where only neighbouring objects are relevant), in which case strategies such as spatial partitioning or bounding volume hierarchies are commonly used to find these subsets before the interactions are counted; or 2) all interactions matter (e.g. gravitational forces), in which case every pair of objects must, in principle, be considered. Furthermore, it is often the case that the objects in question are moving and collision detection must be performed in short time increments, such as when a robot computes collision-free routes. In such circumstances, temporal and spatial coherence is also exploited alongside the methods above, i.e., the fact that objects do not move very far in a short period of time. In either situation, however, smaller groups of objects are still subjected to interaction calculations, and the standard method is to iterate over each object and compare it to all the others, resulting in O(N²) complexity (Waltman, 2013; Traga, 2015).
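To make the contrast concrete, the Python sketch below compares a naive all-pairs collision count, which performs O(N²) comparisons, with a uniform-grid spatial partitioning that only compares objects in neighbouring cells. The function names, the 2-D point representation, and the choice of cell size are illustrative assumptions, not an implementation taken from the cited works.

```python
from collections import defaultdict
from itertools import combinations

def count_collisions_naive(points, radius):
    """All-pairs check over 2-D points: O(N^2) comparisons."""
    r2 = radius * radius
    return sum(
        1
        for (x1, y1), (x2, y2) in combinations(points, 2)
        if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= r2
    )

def count_collisions_grid(points, radius):
    """Uniform-grid spatial partitioning: bucket points into cells of size
    `radius`, then compare each point only against points in its own cell
    and the 8 neighbouring cells."""
    cell = radius
    grid = defaultdict(list)
    for i, (x, y) in enumerate(points):
        grid[(int(x // cell), int(y // cell))].append(i)

    r2 = radius * radius
    count = 0
    for (cx, cy), members in grid.items():
        # Candidate indices from this cell and its 8 neighbours.
        candidates = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                candidates.extend(grid.get((cx + dx, cy + dy), []))
        for i in members:
            xi, yi = points[i]
            for j in candidates:
                if j > i:  # count each unordered pair exactly once
                    xj, yj = points[j]
                    if (xi - xj) ** 2 + (yi - yj) ** 2 <= r2:
                        count += 1
    return count
```

For well-spread objects the grid version inspects only a few candidates per object, which is why spatial partitioning is the usual first step before interaction counting.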

The use of accelerators in High-Performance Computing (HPC) has grown markedly in recent years. As of January 2016, more than 100 accelerated systems appeared on the list of the world's 500 fastest supercomputers, contributing 143 petaflops, more than a third of the list's total FLOPS. To satisfy the processing requirements of today's and future Big-Data and exascale applications, a long-term HPC system is required. In an HPC system, there are two ways to reduce processing time: building a faster processor, or computing in parallel across many processors. To build a faster processor under the first approach, the manufacturer must shrink the electrical paths so that currents are smaller and signals travel shorter distances. Unfortunately, semiconductor manufacturing relies on lithography methods that are rapidly approaching their limits. The newest processor chips have reached 45-nanometre process technology, and as the process is shrunk further, the risk of manufacturing defects rises and reliability falls; thus the only route still open for speeding up computation is parallelism. In this approach, the algorithm is decomposed into multiple lanes of work that execute concurrently. Each lane is handled by a single processor, and the results are then gathered.
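A minimal sketch of this decompose-compute-gather pattern, assuming a simple sum as the workload, is shown below: the data are split into one lane per worker process, each lane is processed independently, and the partial results are gathered at the end. The chunking scheme and worker count are illustrative choices.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Work done by one lane: a stand-in computation over its share of the data."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4                                    # illustrative number of processors
    # Decompose the problem into one strided chunk ("lane") per worker.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        results = pool.map(partial_sum, chunks)      # lanes execute concurrently
    total = sum(results)                             # gather the partial results
    print(total)
```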

As a result, several approximate modularity-based methods exist, one of which, the Louvain algorithm, is widely regarded as among the fastest and most effective. Following its introduction, several improvements to the Louvain algorithm were developed. Other objective functions have also been suggested, since modularity is known to suffer from the resolution limit, whereby modularity optimisation may fail to identify certain small communities. The paper's major contribution is the Louvain Prune method, a simple, easy-to-implement acceleration technique for the Louvain algorithm. It allows communities in synthetic networks and large real-world networks to be identified more quickly while retaining quality. The Louvain method and the inefficient behaviour observed in early experiments on large real-world networks are described first, and the Louvain Prune algorithm is then proposed on this basis. Finally, we present results showing that the proposed Louvain Prune algorithm reduces computing time by up to 90% while maintaining almost the same quality on synthetic and real networks (Gach, 2014; Elseberg, 2012).
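The pruning idea can be sketched as follows. The Python code below implements only the first, local-moving phase of a Louvain-style search and adds a work queue so that, after a node changes community, only its neighbours outside the new community are revisited. This mirrors the general idea behind Louvain Prune but is not the authors' implementation; the graph representation and the modularity-gain expression used here are standard textbook forms, and the full algorithm would additionally aggregate communities into super-nodes and repeat.

```python
from collections import defaultdict, deque

def local_moving_with_pruning(adj):
    """One local-moving pass of a Louvain-style community search (sketch).

    adj: dict mapping node -> dict of neighbour -> edge weight,
         with each undirected edge stored in both directions.
    Returns a dict mapping node -> community id.
    """
    m2 = sum(w for nbrs in adj.values() for w in nbrs.values())   # equals 2m
    degree = {u: sum(nbrs.values()) for u, nbrs in adj.items()}   # weighted degrees

    community = {u: u for u in adj}                 # every node starts in its own community
    comm_tot = {u: degree[u] for u in adj}          # total degree of each community

    queue = deque(adj)                              # nodes still worth visiting
    in_queue = {u: True for u in adj}

    while queue:
        u = queue.popleft()
        in_queue[u] = False

        # Weight of links from u to each neighbouring community.
        links = defaultdict(float)
        for v, w in adj[u].items():
            if v != u:
                links[community[v]] += w

        # Temporarily remove u from its current community.
        old = community[u]
        comm_tot[old] -= degree[u]

        # Choose the community with the best modularity gain
        # (gain proportional to k_i,in - tot_c * k_i / 2m); staying put is allowed.
        best = old
        best_gain = links.get(old, 0.0) - comm_tot[old] * degree[u] / m2
        for c, k_in in links.items():
            gain = k_in - comm_tot[c] * degree[u] / m2
            if gain > best_gain:
                best, best_gain = c, gain

        community[u] = best
        comm_tot[best] += degree[u]

        if best != old:
            # Pruning step: only neighbours outside the new community can still
            # improve their placement, so re-queue just those.
            for v in adj[u]:
                if community[v] != best and not in_queue[v]:
                    queue.append(v)
                    in_queue[v] = True

    return community
```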
