Evaluation of the Distributed Strategies for Data Parallel Deep Learning Model in TensorFlow

Aswathy Ravikumar, Harini Sriraman
DOI: 10.4018/978-1-6684-9804-0.ch002

Abstract

Distributed deep learning (DDL) is a branch of machine intelligence in which the runtime of deep learning models can be dramatically lowered by using several accelerators. Most past research reports the performance of the data parallelism technique of DDL. Nevertheless, additional parallelism solutions in DDL must be investigated, and their performance must be modeled for specific applications and application stacks. Such efforts may help other researchers make more informed judgments while creating a successful DDL algorithm. Distributed deep learning strategies are becoming increasingly popular because they allow complex models to be trained on large datasets in much less time than traditional training methods. TensorFlow, a popular open-source framework for building and training machine learning models, provides several distributed training strategies. This chapter provides a detailed evaluation of the different TensorFlow strategies for medical data. The TensorFlow distribution strategy API is utilized to perform distributed training in TensorFlow.

Introduction

Deep learning has emerged as an effective method for dealing with a wide variety of challenging problems in fields such as machine vision, natural language processing, and speech recognition (Baby, 2014; Harini et al., 2022; John et al., 2021; Ravikumar et al., 2022; Robin et al., 2021). As deep learning models have grown in size and complexity over the past few years, it has become increasingly clear how essential it is to use distributed computing platforms to train these models effectively. TensorFlow, one of the most renowned deep learning frameworks, provides a distributed computing API (Abadi et al., 2016) that allows users to train large models in a distributed manner across several machines.

TensorFlow, originally developed by Google, is widely employed across many fields because of the numerous benefits it offers for deep learning and the broad range of problems to which it can be applied (Muhammad Jaleed Khan et al., 2018; Zhang & Wei, 2020). However, a single processing node quickly becomes a bottleneck, and its limits grow more apparent as the size of the data set increases (Andrade & Trabasso, 2017; Bekeneva et al., 2020). Overcoming this barrier so that TensorFlow can scale efficiently on ultra-large-scale systems makes it possible to accelerate deep learning training through parallelism, reduce the time required for training, improve both training and testing accuracy, and extend deep learning to new domains. Sk et al. (2017) and Zhang et al. (2019) found that such scaling significantly affects the ability to solve challenging problems, with far-reaching implications.

Distributed data parallelism and distributed strategies are important concepts in machine learning when working with large datasets that cannot be processed on a single machine (Dean et al., n.d.). In TensorFlow, these concepts are implemented to enable training of machine learning models on distributed systems. Distributed data parallelism is a technique used to distribute the training of a model across multiple devices or machines: the input data is broken into multiple pieces that are processed in parallel on each device or machine, and the model weights are then aggregated across all devices or machines to produce the final model. Distributed strategies in TensorFlow are a set of tools and techniques used to distribute training and inference across multiple devices or machines. These strategies include data parallelism, model parallelism, and parameter servers. Data parallelism involves distributing the input data across multiple devices or machines and training a full copy of the model on each. Model parallelism (Chen et al., 2019) involves distributing the model itself across multiple devices or machines and training different parts of the model on each. Parameter servers (Li et al., n.d.) involve separating the model parameters from the computation and storing them on dedicated devices or machines. Using distributed data parallelism and distributed strategies in TensorFlow can greatly speed up the training of machine learning models and enable training on large datasets that would otherwise not be feasible on a single machine.
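As a concrete illustration of the data parallel approach described above, the following is a minimal sketch of synchronous data parallelism using TensorFlow's tf.distribute.MirroredStrategy. The toy model, the random placeholder data, and the batch size are illustrative assumptions, not the experimental setup evaluated in this chapter.

```python
# Minimal sketch: synchronous data parallelism with tf.distribute.MirroredStrategy.
# The model, dataset, and hyperparameters are illustrative placeholders.
import tensorflow as tf

# MirroredStrategy replicates the model on every visible GPU (or falls back to
# a single replica on CPU) and all-reduces gradients across replicas each step.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas in sync:", strategy.num_replicas_in_sync)

# The global batch is split evenly across the replicas.
GLOBAL_BATCH_SIZE = 64 * strategy.num_replicas_in_sync

# Placeholder dataset: random tensors standing in for real (e.g., medical) data.
features = tf.random.normal([1024, 32])
labels = tf.random.uniform([1024], maxval=2, dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(GLOBAL_BATCH_SIZE)

# Variables created inside strategy.scope() are mirrored on each device.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(2),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# Keras shards each batch across replicas and averages the gradients.
model.fit(dataset, epochs=2)
```

The same training code runs unchanged on one GPU or many; only the strategy object (and the cluster configuration) changes, which is the property the distribution strategy API is designed to provide.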

In this chapter, we will evaluate various distributed strategies for data parallelism in TensorFlow. Data parallelism is a common technique used in distributed deep learning, where large datasets are partitioned across multiple machines and each machine trains the model on a subset of the data. We will explore different strategies for data parallelism in TensorFlow, including synchronous and asynchronous training, and evaluate their performance on a large-scale deep learning model. Our aim is to provide insights into the strengths and weaknesses of each strategy, and help users choose the most appropriate strategy for their specific use case.
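To make the synchronous versus asynchronous distinction concrete, the sketch below shows the two corresponding entry points in the tf.distribute API. The cluster configuration is assumed (it would normally be supplied to each process through the TF_CONFIG environment variable), and the tiny model is a placeholder rather than the workload evaluated in this chapter.

```python
# Hedged sketch contrasting synchronous and asynchronous multi-worker strategies.
import tensorflow as tf

# Synchronous multi-worker data parallelism: every worker holds a full model
# replica, and gradients are all-reduced across workers at every step.
# Without a TF_CONFIG cluster spec this falls back to a single worker.
sync_strategy = tf.distribute.MultiWorkerMirroredStrategy()

# Asynchronous training instead keeps variables on parameter server tasks;
# workers compute and apply updates independently, with no step-level barrier.
# It requires a cluster with "chief", "worker", and "ps" tasks, e.g.:
# resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()
# async_strategy = tf.distribute.ParameterServerStrategy(resolver)

with sync_strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(2),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
```

The trade-off evaluated later in the chapter follows directly from this difference: synchronous strategies keep replicas consistent at the cost of waiting for the slowest worker, while asynchronous strategies avoid that barrier at the cost of stale parameter updates.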
