Constrained Nonlinear Optimization in Information Science

DOI: 10.4018/978-1-5225-7368-5.ch053

Abstract

This chapter provides an overview of constrained optimization methods. Background, theory, and examples are provided. Coverage includes Lagrange multipliers for equality-constrained optimization, with a Cobb-Douglas example from information science. The authors also present the Karush-Kuhn-Tucker (KKT) conditions for inequality-constrained optimization, illustrated by a smartphone production example with inequality constraints. An overview and discussion of numerical methods and techniques is also provided, along with a brief list of technology available to assist in solving these constrained nonlinear optimization problems.
Chapter Preview

Background

The general constrained nonlinear programming (NLP) problem is to find x* that optimizes f(x) subject to the constraints of the problem shown in equation (2).

  • Maximize or Minimize f(x)

subject to

g_i(x) {≤, =, ≥} b_i
(2) for i = 1, 2, …, m.
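The general problem above can be attacked numerically by converting it to a sequence of unconstrained problems. As a minimal sketch, the quadratic-penalty approach below (one of the numerical techniques in the family this chapter surveys) uses an illustrative objective and constraint that are assumptions, not taken from the chapter: minimize f(x) = (x1 − 1)² + (x2 − 2)² subject to g(x) = x1 + x2 − 2 ≤ 0, whose solution is x* = (0.5, 1.5).

```python
# Quadratic-penalty sketch for the general NLP in equation (2).
# The objective f and the single inequality constraint g are hypothetical
# examples chosen so the exact answer, x* = (0.5, 1.5), is easy to verify.

def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def g(x):
    return x[0] + x[1] - 2.0  # feasible when g(x) <= 0

def penalized(x, mu):
    # Quadratic penalty: charge mu * max(0, g(x))^2 for violating g(x) <= 0.
    v = max(0.0, g(x))
    return f(x) + mu * v * v

def grad(func, x, h=1e-6):
    # Central finite-difference gradient, so no hand-coded derivatives needed.
    out = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        out.append((func(xp) - func(xm)) / (2.0 * h))
    return out

def penalty_method(x0, mus=(1.0, 10.0, 100.0, 1000.0), steps=2000):
    x = list(x0)
    for mu in mus:  # solve a sequence of unconstrained problems, raising mu
        lr = 0.5 / (1.0 + 2.0 * mu)  # step size shrinks as the penalty stiffens
        for _ in range(steps):
            gr = grad(lambda z: penalized(z, mu), x)
            x = [xi - lr * gi for xi, gi in zip(x, gr)]
    return x

x_star = penalty_method([0.0, 0.0])
print(x_star)  # approaches the constrained minimizer (0.5, 1.5)
```

Each stage warm-starts from the previous one; as mu grows, the penalized minimizer is pushed toward the feasible region and converges to the constrained solution.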

Classical constrained optimization appeared with equality constraints and Lagrange multipliers, named for Joseph-Louis Lagrange, in the late 1700s. It was almost two hundred years later that Kuhn and Tucker (1951) presented their famous Kuhn-Tucker (KT) conditions. Scholars later found that Karush (1939) had done considerable work in his thesis in the area of constrained optimization, and his name was added to create the Karush-Kuhn-Tucker (KKT) conditions. Bellman (1952; 1957) created dynamic programming in the 1950s to handle sequential constraints in optimization.
