Neural Network Inversion-Based Model for Predicting an Optimal Hardware Configuration: Solving Computationally Intensive Problems

Mirvat Mahmoud Al-Qutt, Heba Khaled, Rania El Gohary
Copyright: © 2021 | Pages: 23
DOI: 10.4018/IJGHPC.2021040106

Abstract

Deciding the number of processors that can efficiently speed up the solution of a computationally intensive problem while preserving efficient power consumption constitutes a major challenge for researchers in the high performance computing (HPC) realm. This paper exploits machine learning techniques to propose and implement a recommender system that recommends the optimal HPC architecture for a given problem size. An approach to multi-objective optimization based on neural network inversion is employed: the inversion optimizes the inputs of a trained forward model. The objective functions of concern are maximizing speedup and minimizing power consumption. The recommendations of the proposed prediction system achieved more than 89% accuracy on both the validation and testing sets. The experiments were conducted on 2,500 CUDA cores of Tesla K20 Kepler GPU accelerators and an Intel(R) Xeon(R) CPU E5-2695 v2.

Introduction

The main innovation of this research is to exploit neural network inversion as a machine learning technique to recommend an architecture for solving the motif finding (MF) problem. The recommended architecture executes the "SKIP Brute-Force" algorithm of (Faheem, 2010) to solve the planted MF (15, 4) problem, and it aims to maximize speedup and minimize power consumption within given time and power constraints. The candidate configurations employ different homogeneous parallel paradigms on several hardware architectures: Tesla K20 Kepler GPU accelerators, each with 13 multiprocessors of 192 CUDA cores, programmed with a hybrid CUDA and MPI paradigm; and multi-core CPUs based on dual Xeon physical processors, each with 12 cores, programmed with a hybrid MPI and OpenMP paradigm.
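
The inversion idea can be pictured with a minimal sketch, not the paper's implementation: a small forward network is fitted to map a normalized problem and architecture description to predicted speedup and power, then the trained network is held fixed while gradient descent runs on its architecture input so that speedup is maximized and power is minimized. The layer sizes, the single scalar architecture code, the synthetic training data, and the objective weights alpha and beta below are illustrative assumptions.

```python
# Sketch of neural network inversion for architecture recommendation
# (illustrative assumptions, not the authors' code).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Forward model: 3 problem features (T, N, N*T) plus 1 architecture code -> (speedup, power).
forward_net = nn.Sequential(
    nn.Linear(4, 16), nn.Tanh(),
    nn.Linear(16, 16), nn.Tanh(),
    nn.Linear(16, 2),
)

# Stand-in training on synthetic "running cases" (the paper uses 140 measured ones).
X = torch.rand(140, 4)                               # normalized [T, N, N*T, arch]
y = torch.stack([10.0 * X[:, 3],                     # toy speedup: grows with parallelism
                 1.0 + 5.0 * X[:, 3]], dim=1)        # toy power: also grows with parallelism
opt = torch.optim.Adam(forward_net.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    nn.functional.mse_loss(forward_net(X), y).backward()
    opt.step()

# Inversion: freeze the trained network and run gradient descent on the free
# (architecture) component of the input, with the problem features held fixed.
for p in forward_net.parameters():
    p.requires_grad_(False)

problem = torch.tensor([0.5, 0.5, 0.25])             # fixed, normalized (T, N, N*T) query
arch = torch.tensor([0.5], requires_grad=True)       # free variable to be recommended
inv_opt = torch.optim.Adam([arch], lr=5e-2)
alpha, beta = 1.0, 0.5                                # assumed weights of the two objectives
for _ in range(200):
    inv_opt.zero_grad()
    speedup, power = forward_net(torch.cat([problem, arch])).unbind()
    loss = -alpha * speedup + beta * power            # maximize speedup, minimize power
    loss.backward()
    inv_opt.step()
    with torch.no_grad():
        arch.clamp_(0.0, 1.0)                         # keep the code in its valid range

print("recommended (normalized) architecture code:", round(arch.item(), 3))
```

In practice, the optimized architecture input would be decoded back to the nearest concrete configuration from Table 1, and the time and power constraints could enter the same objective as penalty terms.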

The problem parameters are:

  • Number of sequences (T): varies from 25 up to 2400.

  • Sequence Length (N): varies from 20 up to 240.

  • Problem size (N×T): varies from 6K up to 48K.

Combining the various input problem sizes with the different hardware architecture configurations generates 140 running cases, each containing T, N, problem size, the specified architecture, power consumption, and execution time. The exploited architectures are shown in Table 1, and the training dataset is described in Tables 2 through 8; a sketch of how such a running case can be encoded as a training record follows Table 1.

Table 1. Architectures exploited by the recommending system

Architecture              Specification
CPU-based architecture    16*24 Xeon dual physical processor
                          8*24 Xeon dual physical processor
                          4*24 Xeon dual physical processor
                          2*24 Xeon dual physical processor
                          1*24 Xeon dual physical processor
GPU-based architecture    1 NVIDIA Tesla K20 Kepler GPU (CUDA cores)
                          2 NVIDIA Tesla K20 Kepler GPUs (CUDA cores)
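
To connect the running cases with the forward model, here is a hedged illustration of how one measured case could be encoded as a training record: the seven architectures of Table 1 become a one-hot vector, and T, N, and the problem size N×T are scaled by their maximum values. The field names, normalization constants, and example measurements are assumptions rather than values from the paper; this richer one-hot encoding would replace the single scalar architecture code used in the earlier inversion sketch.

```python
# Hypothetical encoding of one measured running case into (features, targets)
# for the forward model; architecture labels and constants are assumptions.
ARCHITECTURES = [
    "16*24 Xeon", "8*24 Xeon", "4*24 Xeon", "2*24 Xeon", "1*24 Xeon",
    "1x Tesla K20", "2x Tesla K20",
]

def encode_case(t, n, arch_name, speedup, power_watts):
    """Return (features, targets) for one measured running case."""
    one_hot = [1.0 if a == arch_name else 0.0 for a in ARCHITECTURES]
    # T varies from 25 to 2400, N from 20 to 240, N*T from 6K to 48K (see the parameter list above).
    features = [t / 2400.0, n / 240.0, (t * n) / 48000.0] + one_hot
    return features, [speedup, power_watts]

# Example with made-up measurements for a 48K problem on two K20 GPUs.
x, y = encode_case(t=2400, n=20, arch_name="2x Tesla K20", speedup=35.0, power_watts=450.0)
print(x, y)
```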

Recent developments in sequencing technology permit efficient and cost-effective acquisition of genomic data (Xiong, Zhongming, Arnold, & Yu, 2009). DNA motifs are usually considered transcription factor binding sites (TFBS), where proteins attach to regulate the expression of genes (Yu, Mani, Cao, & Brenner, 2010; Hu, Yu, Taylor, Chinnaiyan, & Qin, 2010). Although several algorithms are available to tackle this problem, MF is recognized as an NP-complete (nondeterministic polynomial time complete) problem. Different software (S/W) and hardware (H/W) accelerators have also been developed to accelerate MF algorithms.
