An Overview of the Last Advances and Applications of Greedy Randomized Adaptive Search Procedure

Airam Expósito Márquez, Christopher Expósito-Izquierdo
DOI: 10.4018/978-1-5225-2857-9.ch013

Abstract

Heuristic methods are among the most studied approaches for obtaining approximate solutions to optimization problems. Heuristics are usually employed to find good, but not necessarily optimal, solutions. The primary purpose of the chapter at hand is to provide a survey of the Greedy Randomized Adaptive Search Procedure (GRASP). GRASP is an iterative multi-start metaheuristic for solving complex optimization problems. Each GRASP iteration consists of a construction phase followed by a local search procedure. In this chapter, we first describe the basic components of GRASP and the various elements that compose it. We then present several variations of the basic GRASP intended to improve its performance. GRASP has encompassed a wide range of applications, covering many different fields, owing to its robustness and ease of application.
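To make the two-phase loop summarized above concrete, the following is a minimal, self-contained sketch of GRASP applied to a small travelling salesman instance. The instance data, the value of the alpha parameter, and the choice of a 2-opt neighbourhood are illustrative assumptions for this sketch, not prescriptions from the chapter.

import itertools
import math
import random

# Hypothetical coordinates for a small symmetric TSP instance.
CITIES = [(0, 0), (2, 6), (5, 1), (6, 5), (8, 2), (1, 3)]

def dist(a, b):
    return math.hypot(CITIES[a][0] - CITIES[b][0], CITIES[a][1] - CITIES[b][1])

def tour_length(tour):
    return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

def construct(rng, alpha):
    """Construction phase: extend a partial tour with a random choice
    from the restricted candidate list (RCL) of near-greedy moves."""
    tour = [rng.randrange(len(CITIES))]
    while len(tour) < len(CITIES):
        rest = [c for c in range(len(CITIES)) if c not in tour]
        costs = {c: dist(tour[-1], c) for c in rest}
        c_min, c_max = min(costs.values()), max(costs.values())
        # alpha = 0 is purely greedy; alpha = 1 is purely random.
        rcl = [c for c in rest if costs[c] <= c_min + alpha * (c_max - c_min)]
        tour.append(rng.choice(rcl))
    return tour

def local_search(tour):
    """Local search phase: 2-opt descent until no improving move remains."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            if tour_length(candidate) < tour_length(tour):
                tour, improved = candidate, True
    return tour

def grasp(iterations=50, alpha=0.3, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(iterations):
        solution = local_search(construct(rng, alpha))
        if best is None or tour_length(solution) < tour_length(best):
            best = solution
    return best

if __name__ == "__main__":
    best = grasp()
    print(best, round(tour_length(best), 2))

Here alpha controls the trade-off in the construction phase: alpha = 0 yields a purely greedy construction, while alpha = 1 yields a purely random one.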

1. Introduction

Over the last few decades, a wide range of effective solution techniques have been developed to solve many challenging optimization problems arising in a multitude of application areas. Optimization problems are common in domains as diverse as biology, economics, transportation, medicine, industry, and logistics, among others. Solving optimization problems in these domains gives rise to significant environmental, economic, and social impacts. Some representative examples of optimization problems are routing problems, scheduling of operations in manufacturing, facility location problems in the public and private sectors, network optimization in telecommunications, assignment problems, segmentation of images to detect potential tumours, and problems associated with supply chain management.

Broadly speaking, in the field of optimization we are concerned with finding solutions that are optimal or near-optimal with respect to some given set of optimization criteria. Depending on the application field, the criteria are usually defined by practitioners, stakeholders, or decision makers, among others. From a practical perspective, in most cases it is not possible to enumerate all the feasible solutions of the optimization problem under analysis and evaluate them on the basis of the imposed criteria. This is due to the fact that the volume of potential solutions often grows exponentially with the dimensions of the optimization problem at hand. Thus, in most cases we cannot expect to solve problem instances of realistic dimensions to optimality. For this reason, depending on the size of the problem instance or the available computational resources (expressed in terms of time and memory), we must settle for computing approximate solutions. In this regard, a great effort has gone into designing, implementing, and validating new approximate optimization techniques in recent years, with the aim of reporting high-quality solutions to real-world optimization problems within reasonable computational time.
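To make this growth concrete, consider the symmetric travelling salesman problem: n cities admit (n - 1)!/2 distinct tours, so exhaustive enumeration quickly becomes hopeless. The short sketch below (a plain illustration, not taken from the chapter) tabulates this count.

import math

# Distinct tours in a symmetric TSP with n cities: (n - 1)! / 2.
for n in (5, 10, 15, 20, 25):
    tours = math.factorial(n - 1) // 2
    print(f"n = {n:2d}: {tours:.3e} tours")

Already at n = 25 there are roughly 3.1e23 tours, far beyond what any enumeration could cover in reasonable time.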

The computational complexity of most real-world optimization problems means that exact methods (those which report optimal solutions in finite time) are not suitable for application in practice due to their high computational requirements. In fact, some optimization problems can never have an algorithmic technique to solve them (Sudkamp & Cotterman, 1988). For these reasons, these problems, termed intractable, demand new intelligent approaches to be solved efficiently. In order to cope with such optimization problems, alternative approximate techniques have been progressively developed over the last decades. These techniques do not guarantee the optimality of the solutions reported, but they are able to provide at least a feasible solution in reasonable computational time that fulfills the requirements imposed by the decision maker.

The approximate optimization techniques have found inspiration in diverse sources. Some of them are termed nature-inspired algorithms because they borrow the foundations behind natural phenomena. Representative examples are Genetic Algorithms (Holland, 1975), Ant Colony Optimization Algorithms (Dorigo et al., 1996), Particle Swarm Optimization Algorithms (Kennedy & Eberhart, 1995), and Artificial Bee Colony Algorithms (Karaboga, 2005), among others. These algorithms are aimed at mimicking, respectively, evolutionary ideas associated with natural selection and genetics, the behaviour of ants finding paths between the colony and food sources, the social behaviour of swarms of particles, and the foraging behaviour of a honey bee swarm. The previous algorithms are based on evolving a population of solutions of the problem under study. However, alternative algorithms are single-solution-based approaches, since they work with only one solution. In all cases, a general classification of the algorithms can be proposed according to how the solution to manage is provided: in iterative algorithms, one or several solutions provided by some procedure are transformed during the search process by means of exploration operators, whereas in greedy algorithms, values are assigned to the decision variables of an initially empty solution until a feasible solution is generated, as illustrated in the sketch below.
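As a concrete contrast to the iterative style, the sketch below shows a greedy construction on a hypothetical 0/1 knapsack instance; the instance data and the value-to-weight selection criterion are illustrative assumptions.

# Greedy construction on a hypothetical 0/1 knapsack instance:
# decision variables are fixed one at a time, starting from an
# empty solution, until no further item fits.
items = [("a", 6, 4), ("b", 10, 7), ("c", 5, 3), ("d", 7, 5)]  # (name, value, weight)
capacity = 10

solution, load = [], 0
for name, value, weight in sorted(items, key=lambda it: it[1] / it[2], reverse=True):
    if load + weight <= capacity:  # greedy criterion: best value-to-weight ratio first
        solution.append(name)
        load += weight

print(solution, load)  # ['c', 'a'] with load 7

Note that the greedy choice is myopic: it commits to each decision permanently, which is precisely the weakness that the randomized construction and local search phases of GRASP are designed to mitigate.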
