
    UNIVERSIDADE FEDERAL DE MINAS GERAIS

    Programa de Pós-Graduação em Engenharia Elétrica

    Multiobjective Optimization - Exercises

    Prof. Frederico Gadelha Guimarães

    1 Basic concepts

    Exercise 1. Define local minimum and global minimum.

    Exercise 2. Define unimodal and multimodal functions.

    Exercise 3. Write the necessary and sufficient conditions for characterizing a local minimum for

    i. an unconstrained optimization problem.

    ii. a constrained optimization problem.

    Exercise 4. Let f(x) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2, subject to g(x) = x_1^2 + x_2^2 <= 2. Verify whether the necessary conditions for a local minimum are satisfied at (1, 1)^T.

    Exercise 5. It is known that the objective function

    f(x) = 2x_1^2 + x_1 x_2 + x_2^2 + x_2 x_3 + x_3^2 - 6x_1 - 7x_2 - 8x_3 + 9

    has a local minimum at x* = (6/5, 6/5, 17/5)^T.

    i. Verify whether the necessary conditions are valid at x*.

    ii. Verify whether x* is also the global minimum.

    Exercise 6. Calculate the first and second derivatives of the function below at x = 0:

    f(x) = x_1^4 + x_1 x_2 + (1 + x_2)^2

    i. Show that H(0) is not positive definite.

    ii. Determine the local minimum of the function.



    2 Karush-Kuhn-Tucker conditions

    Exercise 7. Consider the following constrained minimization problem:

    min f(x) = -x_1^2 - x_2^2

    subject to

    x_1 + x_2 <= 3;  x_1 <= 2;  x_1 >= 0;  x_2 >= 0

    i. Draw the feasible region and some level curves of the objective function.

    ii. Identify the solution of the problem.

    iii. Show geometrically that the Karush-Kuhn-Tucker conditions are satisfied at the solution.

    iv. Show geometrically that the Karush-Kuhn-Tucker conditions are also satisfied at (2, 1)^T and explain why, given that this point is not the solution of the problem.

    Exercise 8. Consider the maximization of the function f(x) = x^2, -1 <= x <= 2. Show that the Karush-Kuhn-Tucker conditions are met at x = -1, x = 0, and x = 2, although the global optimum is x = 2. Discuss.

    Exercise 9. Solve the minimization problem below analytically using the optimality conditions for constrained problems:

    min f(x) = x_1 x_2

    subject to

    g(x) = x_1^2 + 3x_2^2 - 3 <= 0
    h(x) = x_1 + 3x_2 - 9 = 0

    Exercise 10. Consider the following problem:

    min f(x) = x_1^4 + x_2^4 + 12x_1^2 + 6x_2^2 - x_1 x_2 - x_1 - x_2

    subject to

    g_1(x) = x_1 + x_2 >= 6
    g_2(x) = 2x_1 - x_2 >= 3
    x_1 >= 0;  x_2 >= 0

    i. Write the equations for the optimality conditions.

    ii. Show that (3, 3)^T is the only solution.

    3 One-dimensional minimization

    Exercise 11. Derive the one-dimensional minimization problem for the following case:

    min f(x) = (x_1^2 - x_2)^2 + (1 - x_1)^2

    from the starting point x1 = (2, 2)^T along the search direction d1 = (1.00, 0.25)^T.

    Exercise 12. Find the minimum of f(x) = x(x - 1.5) in the interval [0.0, 1.0] to within 10% of the exact value using:


    i. the dichotomous search method.

    ii. the interval halving method.
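    As a numerical cross-check for item i, a minimal sketch of the dichotomous search in Python (the probe offset delta and stopping tolerance tol are illustrative choices, not prescribed by the exercise):

```python
def f(x):
    return x * (x - 1.5)

def dichotomous_search(fun, a, b, delta=0.01, tol=0.1):
    # Shrink the bracket [a, b] by comparing fun at two probe points
    # placed delta apart around the midpoint; keep the half-interval
    # that must contain the minimum of a unimodal function.
    while (b - a) > tol:
        mid = (a + b) / 2.0
        x1, x2 = mid - delta / 2.0, mid + delta / 2.0
        if fun(x1) < fun(x2):
            b = x2
        else:
            a = x1
    return (a + b) / 2.0

x_min = dichotomous_search(f, 0.0, 1.0)  # exact minimizer is x = 0.75
```

    The interval halving method of item ii follows the same bracketing pattern, with three interior points per iteration instead of two.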

    Exercise 13. Given the initial interval [a, b], it is possible to calculate the number of iterations required by the Golden Section method such that the final interval satisfies (b - a) <= ε. Show how to calculate the number of iterations required.
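    The counting argument asked for here can be sanity-checked numerically; a sketch under the standard assumption that each Golden Section step shrinks the bracket by the ratio r = (sqrt(5) - 1)/2:

```python
import math

def golden_section_iterations(a, b, eps):
    # One Golden Section step multiplies the bracket length by
    # r = (sqrt(5) - 1) / 2 ~= 0.618, so after n steps the length is
    # (b - a) * r**n; the smallest n with (b - a) * r**n <= eps is
    # n = ceil(log(eps / (b - a)) / log(r)).
    r = (math.sqrt(5) - 1) / 2
    return math.ceil(math.log(eps / (b - a)) / math.log(r))

n = golden_section_iterations(0.0, 1.0, 0.01)
```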

    Exercise 14. Find the minimum of the function

    f(λ) = 0.65 - 0.75/(1 + λ^2) - 0.65 λ tan^{-1}(1/λ)

    using the secant method with an initial step size of t0 = 0.1, λ1 = 0.0 and ε = 0.01. Plot the graph of the function in the interval [0, 3] and identify its minimum.
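    A sketch of the secant iteration for this exercise, using the objective as reconstructed above; the derivative is approximated by central differences, and the two starting points are offset from λ = 0 to avoid the singularity of tan^{-1}(1/λ) there:

```python
import math

def f(lam):
    # Objective as reconstructed in the exercise statement.
    return 0.65 - 0.75 / (1 + lam ** 2) - 0.65 * lam * math.atan(1 / lam)

def df(lam, h=1e-6):
    # Central-difference estimate of f'; the secant method drives it to zero.
    return (f(lam + h) - f(lam - h)) / (2 * h)

def secant(l0, l1, tol=1e-5, max_iter=50):
    # Each step fits a line through the last two derivative values
    # and jumps to the root of that line.
    for _ in range(max_iter):
        d0, d1 = df(l0), df(l1)
        l_new = l1 - d1 * (l1 - l0) / (d1 - d0)
        if abs(l_new - l1) < tol:
            return l_new
        l0, l1 = l1, l_new
    return l1

lam_star = secant(0.1, 0.2)
```

    The iterate settles near λ ≈ 0.48, which can be checked against the plot requested in the exercise.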

    4 Unconstrained methods

    Exercise 15. Let f(x) = x_1^2 + 25x_2^2 and x0 = (2, 2)^T.

    i. Apply the Gradient method to minimize f(x).

    ii. Apply the Newton method to minimize f(x).
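    For this quadratic, both iterations of Exercise 15 have closed forms (a sketch; the Hessian diag(2, 50) follows from differentiating f twice):

```python
def f(x):
    return x[0] ** 2 + 25 * x[1] ** 2

def grad(x):
    return (2 * x[0], 50 * x[1])

def gradient_step(x):
    # Steepest descent with exact line search on a quadratic:
    # alpha = (g'g) / (g'Hg) with H = diag(2, 50).
    g = grad(x)
    gg = g[0] ** 2 + g[1] ** 2
    gHg = 2 * g[0] ** 2 + 50 * g[1] ** 2
    a = gg / gHg
    return (x[0] - a * g[0], x[1] - a * g[1])

def newton_step(x):
    # The Newton step solves H d = -g exactly; for a quadratic it
    # reaches the minimizer in a single iteration.
    g = grad(x)
    return (x[0] - g[0] / 2, x[1] - g[1] / 50)
```

    Because f is quadratic, the Newton step lands exactly on the minimizer (0, 0), while steepest descent zigzags due to the ill-conditioned level sets.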

    Exercise 16. Consider the minimization of f(x) = x_1^3 + x_1 x_2 - x_1^2 x_2^2 starting from x0 = (1, 1)^T. A computational program carefully written to execute the Newton method on this problem was not successful. Discuss the possible reasons for this failure.

    Exercise 17. Derive the update formula of the Newton method.

    Exercise 18. Compare the gradients of

    f(x) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2

    at the point (0.5, 0.5)^T given by the following methods:

    i. analytical differentiation.

    ii. forward difference method.

    iii. central difference method.
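    A sketch of the three comparisons in Python; the analytic gradient comes from differentiating f directly, and the step size h is an illustrative choice:

```python
def f(x1, x2):
    return 100 * (x2 - x1 ** 2) ** 2 + (1 - x1) ** 2

def grad_analytic(x1, x2):
    # Direct differentiation of f.
    return (-400 * x1 * (x2 - x1 ** 2) - 2 * (1 - x1),
            200 * (x2 - x1 ** 2))

def grad_forward(x1, x2, h=1e-6):
    # One-sided differences: O(h) truncation error.
    f0 = f(x1, x2)
    return ((f(x1 + h, x2) - f0) / h, (f(x1, x2 + h) - f0) / h)

def grad_central(x1, x2, h=1e-6):
    # Two-sided differences: O(h^2) truncation error.
    return ((f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h),
            (f(x1, x2 + h) - f(x1, x2 - h)) / (2 * h))
```

    At (0.5, 0.5)^T the analytic gradient is (-51, 50); the central difference agrees to more digits than the forward difference, since its truncation error is O(h^2) rather than O(h).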

    Exercise 19. Show that an element of the Hessian matrix can be approximated with forward finite differences as:

    ∂^2 f/∂x_i ∂x_j ≈ [f(x + Δ_i e_i + Δ_j e_j) - f(x + Δ_i e_i) - f(x + Δ_j e_j) + f(x)] / (Δ_i Δ_j)

    Exercise 20. What are the advantages of quasi-Newton optimization methods?

    Exercise 21. Consider the minimization of f(x) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2. If a base simplex is defined by the vertices

    x1 = (2, 2)^T;  x2 = (3, 0)^T;  x3 = (1, 1)^T,

    find a sequence of four improved vectors using reflection, expansion and/or contraction.
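    A single reflection move can be sketched as follows; the vertex coordinates are the reconstructed base simplex above (the bracketed column vectors in the source are garbled, so treat them as an assumption):

```python
def f(x):
    return (x[0] + 2 * x[1] - 7) ** 2 + (2 * x[0] + x[1] - 5) ** 2

def reflect_worst(simplex):
    # Sort vertices by objective value, then reflect the worst vertex
    # through the centroid of the remaining ones (reflection
    # coefficient 1, the basic simplex move).
    s = sorted(simplex, key=f)
    worst = s[-1]
    cx = sum(v[0] for v in s[:-1]) / (len(s) - 1)
    cy = sum(v[1] for v in s[:-1]) / (len(s) - 1)
    return s[:-1] + [(2 * cx - worst[0], 2 * cy - worst[1])]

base = [(2.0, 2.0), (3.0, 0.0), (1.0, 1.0)]
improved = reflect_worst(base)  # replaces (1, 1) by its reflection (4, 1)
```

    Starting from this simplex, the reflected vertex (4, 1) improves on the discarded worst vertex (1, 1); expansion and contraction moves refine the sequence further.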


    Exercise 22. Consider the minimization of the objective function f(x) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2 from the initial point (-1.2, 1.0)^T.

    i. Perform two iterations of the steepest descent method.

    ii. Perform two iterations of the Fletcher-Reeves method.

    iii. Perform two iterations of the BFGS method.

    5 Constrained methods

    Exercise 23. Consider the following problem:

    min f(x) = 5x_1

    subject to

    g_1(x) = x_1 + x_2 <= 0
    g_2(x) = x_1^2 + x_2^2 - 4 <= 0

    i. Draw the feasible region and determine the solution geometrically.

    ii. Write a barrier function that could be used to solve this problem.

    iii. Write a penalty function that could be used to solve this problem.

    iv. Verify the Karush-Kuhn-Tucker conditions at the solution.

    v. Verify that these conditions are not satisfied at any other feasible point.

    Exercise 24. Contrast penalty methods and Augmented Lagrangian methods, highlighting the pros and cons of each approach.

    Exercise 25. In penalty function methods we utilize a single parameter to penalize all the constraints. What are the advantages of using one parameter for each constraint function? Suggest an update scheme for these parameters.

    Exercise 26. Let the problem: min f(x) = x^3, subject to h(x) = x - 1 = 0, for which the optimal solution is x* = 1.

    i. Write a penalty function that transforms the original constrained problem into an unconstrained problem.

    ii. Calculate the solution of the penalized problem for u = 1, 10, 100.

    iii. Let u → ∞ and show that the solution converges to x* = 1.

    Exercise 27. Consider the following problem:

    min f(x) = x_1^2 + x_2^2

    subject to

    g_1(x) = 2x_1 + x_2 - 2 <= 0
    g_2(x) = 1 - x_2 <= 0

    i. Determine the optimal solution.

    ii. Choose a penalty function, make u0 = 1 and x0 = (2, 6)^T, and determine x1 by using the Gradient method.


    iii. Choose a penalty function, make u0 = 1 and x0 = (2, 6)^T, and determine x2 by using the Conjugate Gradient method.
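    The penalty computation in Exercise 26 can also be checked in closed form; a sketch assuming the standard quadratic penalty P(x, u) = x^3 + u(x - 1)^2 (one common choice, not necessarily the one intended by the exercise):

```python
import math

def penalty_minimizer(u):
    # Quadratic penalty for min x^3 subject to x - 1 = 0:
    #   P(x, u) = x**3 + u * (x - 1) ** 2.
    # Setting dP/dx = 3x^2 + 2u(x - 1) = 0 and taking the root near
    # the constraint gives the stationary point below.
    return (-u + math.sqrt(u * u + 6 * u)) / 3

sols = [penalty_minimizer(u) for u in (1, 10, 100)]  # approaches x* = 1
```

    For u = 1, 10, 100 the stationary point is roughly 0.549, 0.883, 0.985, approaching x* = 1 as u grows.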

    Exercise 28. Let the objective function f(x) = 6x_1^2 + 4x_1 x_2 + 3x_2^2 be minimized subject to the constraint h(x) = x_1 + x_2 - 5 = 0.

    i. Write the augmented Lagrangian function for this problem.

    ii. Making u0 = 2 and λ0 = 0, perform three iterations of the Augmented Lagrangian method, finding the minimum from direct application of the first-order optimality condition.

    iii. Making u0 = 20 and λ0 = 0, perform three iterations of the Augmented Lagrangian method, finding the minimum from direct application of the first-order optimality condition.

    Exercise 29. Show that the Lagrangian and the Augmented Lagrangian have the same critical points.

    Exercise 30. Show that the update formula for the Lagrange multiplier associated with an inequality constraint is given by:

    λ_{i,k+1} = λ_{i,k} + u_k max[g_i(x_{k+1}), -λ_{i,k}/u_k]

    Exercise 31. Show that the Augmented Lagrangian function:

    A(x, λ, μ, u) = L(x, λ, μ) + (u/2) p(x)

    can be written in the equivalent form below:

    A(x, λ, μ, u) = f(x) + (u/2) { Σ_i [max(g_i(x), -λ_i/u)]^2 + Σ_j [h_j(x) + μ_j/u]^2 }

    Exercise 32. Consider the problem:

    min f(x) = x_1 - x_2

    subject to

    g_1(x) = 3x_1^2 - 2x_1 x_2 + x_2^2 - 1 <= 0

    i. Generate the approximating LP problem at x0 = (2, 2)^T.

    ii. Solve the approximating LP problem using the graphical method and determine whether the resulting solution is feasible or not.

    Exercise 33. Find the solution of the following problem using the MATLAB function fmincon with the starting point x0 = (1.0, 1.0)^T:

    min f(x) = x_1^2 + x_2^2

    subject to

    g_1(x) = 4 - x_1 - x_2^2 <= 0
    g_2(x) = 3x_2 - x_1 <= 0
    g_3(x) = -3x_2 - x_1 <= 0
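    Independently of fmincon, the problem can be sanity-checked with a brute-force grid search in Python (the constraint signs follow the reconstruction above and are an assumption):

```python
def f(x1, x2):
    return x1 ** 2 + x2 ** 2

def feasible(x1, x2):
    # Constraint signs as reconstructed in the statement:
    # g1 = 4 - x1 - x2^2 <= 0, g2 = 3*x2 - x1 <= 0, g3 = -3*x2 - x1 <= 0.
    return (4 - x1 - x2 ** 2 <= 1e-9
            and 3 * x2 - x1 <= 1e-9
            and -3 * x2 - x1 <= 1e-9)

# Brute-force search over a 0.01-spaced grid of the feasible region.
best = min(((i / 100, j / 100)
            for i in range(0, 501)
            for j in range(-300, 301)
            if feasible(i / 100, j / 100)),
           key=lambda p: f(*p))
```

    The grid search returns one of the symmetric optima (3, ±1)^T with f = 10, which fmincon should reproduce from the given starting point.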


    6 Multiobjective problems

    Exercise 34. Define Pareto-optimal solution, Pareto-optimal set and Pareto front.

    Exercise 35. Show that the weighted sum method (Pw formulation) might not produce all the efficient solutions in some cases.
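    A toy illustration of the failure mode (the objectives below are illustrative, not taken from any exercise): when the Pareto front is nonconvex, the weighted sum recovers only its endpoints no matter how the weight is swept:

```python
def weighted_sum_minimizer(w):
    # Biobjective toy problem with a nonconvex (concave) Pareto front:
    #   f1(x) = x,  f2(x) = 1 - x**2,  x in [0, 1].
    # Every x in [0, 1] is Pareto-optimal, but the scalarization
    # w*f1 + (1 - w)*f2 is concave in x, so its minimum over [0, 1]
    # always falls on an endpoint of the interval.
    grid = [i / 1000 for i in range(1001)]
    return min(grid, key=lambda x: w * x + (1 - w) * (1 - x * x))

minimizers = {weighted_sum_minimizer(w / 10) for w in range(1, 10)}
```

    Sweeping w over (0, 1) yields only x = 0 and x = 1, even though every x in [0, 1] is efficient.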

    Exercise 36. Discuss the disadvantages of the ε-constrained method.

    Exercise 37. Show that it is possible to have the Karush-Kuhn-Tucker conditions satisfied at non-Pareto-optimal points.

    Exercise 38. Determine the set of Pareto-optimal solutions of the following multiobjective problem:

    min f_1(x) = x_1 + x_2,  f_2(x) = x_1^2

    subject to

    g_1(x) = x_1^2 + x_2^2 - 4 <= 0
    g_2(x) = x_2 - 1 <= 0
    g_3(x) = x_1 - x_2 - 2 <= 0

    Exercise 39. Determine the set of Pareto-optimal solutions of the following multiobjective problem:

    min f_1(x) = x_1^2 + x_2^2,  f_2(x) = (x_1 - 3)^2 + (x_2 - 3)^2
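    With the objectives as reconstructed above, the Pareto-optimal set is the segment between (0, 0)^T and (3, 3)^T; a brute-force nondominated filter over a sample grid illustrates this (a discrete grid also retains a few near-front points that the continuous problem excludes):

```python
def f1(x):
    return x[0] ** 2 + x[1] ** 2

def f2(x):
    return (x[0] - 3) ** 2 + (x[1] - 3) ** 2

def nondominated(points):
    # Keep the points whose (f1, f2) pair is not strictly dominated
    # by any other sampled point.
    vals = [(f1(p), f2(p)) for p in points]
    front = []
    for p, (a, b) in zip(points, vals):
        if not any(c <= a and d <= b and (c < a or d < b) for c, d in vals):
            front.append(p)
    return front

grid = [(i / 4, j / 4) for i in range(-8, 17) for j in range(-8, 17)]
front = nondominated(grid)  # clusters around the segment (0,0)-(3,3)
```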