python - ScipyOptimizer gives incorrect optimization result
I am running a non-linear optimization problem in OpenMDAO, and I already know the optimal solution (I just want to verify it). I am using the SLSQP driver configuration of ScipyOptimizer from openmdao.api.

I have 3 design variables A, B and C, their respective design spaces (Amin to Amax, and so on), and a single objective function Z. As I said, I know the optimal values of the 3 design variables (let's call them Asol, Bsol and Csol) that yield the minimum value of Z (call it Zsol).

When I run the problem, the value of Z is larger than Zsol, signifying that it is not the optimal solution. But when I assign Csol to C and run the problem with only A and B as design variables, the value of Z is closer to Zsol and smaller than what I got earlier (in the 3-design-variable scenario).

Why am I observing this behavior? Shouldn't ScipyOptimizer give the same solution in both cases?
Edit: adding the code.
from openmdao.api import IndepVarComp, Group, Problem
from openmdao.api import ScipyOptimizer

class RootGroup(Group):
    def __init__(self):
        super(RootGroup, self).__init__()
        self.add('desvar_f', IndepVarComp('f', 0.08))
        self.add('desvar_twc', IndepVarComp('tool_wear_compensation', 0.06))
        self.add('desvar_v', IndepVarComp('v', 32.0))
        # More config (adding components, connections etc.)

class TurningProblem_SinglePart(Problem):
    def __init__(self):
        super(TurningProblem_SinglePart, self).__init__()
        self.root = RootGroup()

        self.driver = ScipyOptimizer()
        self.driver.options['optimizer'] = 'SLSQP'

        self.driver.add_desvar('desvar_f.f', lower=0.08, upper=0.28)
        self.driver.add_desvar('desvar_twc.tool_wear_compensation', lower=0.0, upper=0.5)
        self.driver.add_desvar('desvar_v.v', lower=32.0, upper=70.0)
        self.driver.add_objective('inverse_inst.comp_output')
        # Other config
This code gives me an incorrect result. When I remove desvar_twc from both classes and assign its known optimal value instead, I get the correct result, i.e. the objective function value is smaller than in the previous scenario.
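For reference, the two-design-variable check looks roughly like the sketch below: the add_desvar call for the tool wear compensation is dropped from the driver (equivalently, the IndepVarComp can be kept but not declared as a design variable), and the known optimal value is assigned before the run. The name csol is only a placeholder for that known value.

# Sketch of the two-design-variable check described above.
csol = 0.1  # placeholder; the actual known optimal value goes here

prob = TurningProblem_SinglePart()  # version without the desvar_twc design variable
prob.setup(check=False)
prob['desvar_twc.tool_wear_compensation'] = csol
prob.run()
print(prob['inverse_inst.comp_output'])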
Without seeing your actual model, I can't be sure. However, it is not the case in general that a local optimizer's solution is independent of the starting condition. That would only be true if the problem were convex. My guess is that your problem is not convex, and you're running into local optima.
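A quick way to check this is to restart SLSQP from several different initial points and compare the results; if different starts land on different objective values, you are dealing with local optima. Below is a minimal multi-start sketch that reuses the classes from your question; the start values are purely illustrative.

# Multi-start sketch: restart SLSQP from several initial guesses and keep
# the best objective value found. Start values are illustrative only.
starts = [(0.08, 0.0, 32.0), (0.18, 0.25, 50.0), (0.28, 0.5, 70.0)]

best = None
for f0, twc0, v0 in starts:
    prob = TurningProblem_SinglePart()
    prob.setup(check=False)
    prob['desvar_f.f'] = f0
    prob['desvar_twc.tool_wear_compensation'] = twc0
    prob['desvar_v.v'] = v0
    prob.run()
    obj = float(prob['inverse_inst.comp_output'])
    if best is None or obj < best:
        best = obj

print("Best objective over all starts:", best)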
You can try to work around this by using the COBYLA optimizer instead of SLSQP, which in my experience can manage to jump over local optima a bit better; a sketch of that change follows below. If your problem is really bumpy, then I would suggest switching to NSGA-II or ALPSO from the pyOpt-sparse library. These heuristic-based optimizers do a good job of finding the "biggest hill", though they don't climb all the way to the top of it (they don't converge tightly). Heuristic algorithms are also much more expensive than gradient-based methods.
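Switching to COBYLA is essentially a one-line change to the driver options. The sketch below assumes ScipyOptimizer's 'optimizer' and 'maxiter' options in your OpenMDAO version; the maxiter value is illustrative.

from openmdao.api import ScipyOptimizer

# Same driver class, different algorithm: COBYLA is gradient-free and can
# sometimes step over shallow local optima that stall SLSQP.
driver = ScipyOptimizer()
driver.options['optimizer'] = 'COBYLA'  # instead of 'SLSQP'
driver.options['maxiter'] = 500         # illustrative; COBYLA often needs more iterations

# In the question's TurningProblem_SinglePart.__init__, this would replace
# the line self.driver.options['optimizer'] = 'SLSQP'.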