Description

  1. Options for the user input file with their default values
  2. Code organization
  3. Compatibility
  4. Options for the DOTcvp modules
  5. Options for NLP and MINLP solvers

1. Toolbox input options

option description type options default value
Initialization
data.name name of the problem string -- 'no_name'
data.compiler compiler for the dynamics, sensitivities, and Jacobian string ['None'|'FORTRAN'] 'None'
Settings for IVP (ODEs, sensitivities)
data.odes.Def_FORTRAN option needed only for the FORTRAN parameter definitions string -- {}
data.odes.parameters constant parameters used in the ODEs string [structure] -- {}
data.odes.Def_MATLAB option needed only for the MATLAB parameter definitions string [structure] -- {}
data.odes.res problem definition in ODE form string [structure] -- {''}
data.odes.ic vector of initial conditions real [vector] -- [ ]
data.odes.NUMs number of state variables (y) integer -- size(data.odes.res,2)
data.odes.t0 initial time real -- 0.0
data.odes.tf final time real -- 10.0
data.odes.NonlinearSolver [reference to SUNDIALS] string ['Newton'|'Functional'] 'Newton'
data.odes.LinearSolver [reference to SUNDIALS] string ['Dense'|'Diag'|'Band'|'GMRES'|'BiCGStab'|'TFQMR'] 'Dense'
data.odes.LMM Linear Multistep Method [reference to SUNDIALS] string ['Adams'|'BDF'] 'Adams'
data.odes.MaxNumStep maximum number of steps [reference to SUNDIALS] integer -- 500
data.odes.RelTol IVP relative tolerance level [reference to SUNDIALS] real [positive] -- 1*10^(-7)
data.odes.AbsTol IVP absolute tolerance level [reference to SUNDIALS] real [positive] -- 1*10^(-7)
data.sens.SensAbsTol absolute tolerance for sensitivity variables [reference to SUNDIALS] real [positive] -- 1*10^(-7)
data.sens.SensMethod [reference to SUNDIALS] string ['Staggered'|'Staggered1'|'Simultaneous'] 'Staggered'
data.sens.SensErrorControl sensitivities correction [reference to SUNDIALS] string ['on'|'off'] 'on'
NLP definition
data.nlp.RHO number of time intervals integer -- 10
data.nlp.problem problem definition string ['min'|'max'] 'min'
data.nlp.J0 cost function (performance index) string -- ''
data.nlp.u0 initial value for control values real [vector] -- []
data.nlp.lb lower bounds for control values real [vector] -- []
data.nlp.ub upper bounds for control values real [vector] -- []
data.nlp.p0 initial values for time-independent parameters real [vector] -- []
data.nlp.lbp lower bounds for time-independent parameters real [vector] -- []
data.nlp.ubp upper bounds for time-independent parameters real [vector] -- []
data.nlp.solver NLP or MINLP solver which will be used for the optimization string ['FMINCON'|'IPOPT'|'SRES'|'DE'|'ACOMI'|'MISQP'|'MITS'] 'FMINCON'
data.nlp.SolverSettings name of the file that contains settings for the NLP or MINLP solver; if no such file exists, use 'None' string -- ['None']
data.nlp.NLPtol NLP tolerance level real -- 1*10^(-5)
data.nlp.GradMethod gradient method used for deterministic NLP or MINLP solvers string ['SensitivityEq'|'FiniteDifference'|'None'] 'SensitivityEq'
data.nlp.MaxIter maximum number of function evaluations integer [positive] -- 1000
data.nlp.MaxCPUTime maximum CPU time of the optimization (60*60*0.25) = 15 minutes real [positive] -- 60*60*0.25
data.nlp.approximation control approximation string ['PWC'|'PWL'] 'PWC'
data.nlp.FreeTime set 'on' if free time is considered string ['on'|'off'] 'off'
data.nlp.t0Time initial size of the time intervals, e.g. [data.odes.tf/data.nlp.RHO], or for each time interval separately [dt1 dt2 dt3] real [vector positive] -- [data.odes.tf/data.nlp.RHO]
data.nlp.lbTime lower bound of the time intervals real [positive] -- 0.01
data.nlp.ubTime upper bound of the time intervals real [positive] -- data.odes.tf
data.nlp.NUMc number of control variables (u) integer [positive] -- size(data.nlp.u0,2)
data.nlp.NUMi number of integer variables (u), taken from the last control variables; if not equal to 0, you need to use a MINLP solver ['ACOMI'|'MISQP'|'MITS'] integer [0 or positive] -- 0
data.nlp.NUMp number of time-independent parameters (p) integer [0 or positive] -- size(data.nlp.p0,2)
Equality constraints (ECs)
data.nlp.eq.status whether equality constraints are present string ['on'|'off'] 'off'
data.nlp.eq.NEC number of active ECs integer [0 or positive] -- 1
data.nlp.eq.eq equality constraints string [structure] -- {''}
data.nlp.eq.time time when equality constraints are active string [structure] -- data.nlp.RHO
data.nlp.eq.PenaltyFun switch the penalty function on or off string ['on'|'off'] 'off'
data.nlp.eq.PenaltyCoe penalty coefficient for the equality constraints real -- 1.0
Inequality /path/ constraints (INECs)
data.nlp.ineq.status whether inequality constraints are present string ['on'|'off'] 'off'
data.nlp.ineq.NEC number of active inequality constraints integer [0 or positive] -- 2
data.nlp.ineq.InNUM number of inequality constraints of the 'greater than' type; the remainder are 'less than' integer [0 or positive] -- 1
data.nlp.ineq.eq inequality constraints string [structure] -- {''}
data.nlp.ineq.PenaltyFun switch the penalty function on or off string ['on'|'off'] 'off'
data.nlp.ineq.PenaltyCoe penalty coefficients for the inequality constraints real [vector] -- [1.0 1.0]
Options for setting of the final output
data.options.intermediate display of the intermediate results string ['on'|'off'] 'off'
data.options.display display of the figures string ['on'|'off'] 'on'
data.options.title display of the titles string ['on'|'off'] 'on'
data.options.state display of the state trajectory string ['on'|'off'] 'on'
data.options.control display of the control trajectory string ['on'|'off'] 'on'
data.options.ConvergCurve display of the convergence curve string ['on'|'off'] 'on'
data.options.Pict_Format save figures as string ['eps'|'wmf'|'both'] 'eps'
data.options.report save data in a .dat file string ['on'|'off'] 'on'
data.options.commands additional commands, e.g. 'figure(1),.. ' string -- {''}
data.options.trajectories how many state trajectories will be displayed integer -- size(data.odes.res,2)
data.options.profiler test the performance of the toolbox string ['on'|'off'] 'off'
data.options.multistart set to 1 to disable multistart; otherwise set the desired number of multistart runs integer [positive] -- 1
data.options.action what will be done string ['single-optimization'|'re-optimization'|'hybrid-strategy'|'simulation'] 'single-optimization'
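A minimal user input file built from the options above might look as follows. This is a sketch only: every field shown is documented in the table, but the ODE right-hand side, cost function, bounds, and problem name are illustrative placeholders, not a real model.

```matlab
% Hypothetical input-file sketch; all values are illustrative placeholders
data.name           = 'example_problem';       % problem name
data.odes.res(1)    = {'-p(1)*y(1)+u(1)'};     % ODE right-hand side (illustrative)
data.odes.ic        = [1.0];                   % vector of initial conditions
data.odes.t0        = 0.0;                     % initial time
data.odes.tf        = 10.0;                    % final time
data.nlp.RHO        = 10;                      % number of time intervals
data.nlp.problem    = 'min';                   % minimization
data.nlp.J0         = 'y(1)';                  % cost function (illustrative)
data.nlp.u0         = [0.5];                   % initial control value
data.nlp.lb         = [0.0];                   % lower control bound
data.nlp.ub         = [1.0];                   % upper control bound
data.nlp.solver     = 'FMINCON';               % NLP solver
data.options.action = 'single-optimization';   % module to run
```

Fields not set explicitly keep the default values listed in the table.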

2. Toolbox code

The toolbox code is organized as follows:
[figure: dotcvp toolbox structure]
The organization of the toolbox code is shown in the figure above: toolbox files are prefixed 'dotcvp_' and temporary files 'temp_'. Note that the temporary files are problem dependent, so they are generated and later deleted automatically. Finally, it should be emphasized that DOTcvp contains packages that are open-source.

3. Toolbox compatibility

piece-wise constant ['PWC'] control, gradients obtained by sensitivity equations

                              deterministic         stochastic  hybrid
                              IPOPT FMINCON MISQP   DE    SRES  ACOmi MITS
fixed size of time intervals
  unconstrained               ok    ok      ok      ok    ok    ok    ok
  equality constraint         ok    ok      ok      ok    ok    ok    ok
  inequality constraint       ok    ok      ok      ok    ok    ok    ok
free size of time intervals
  unconstrained               ok    ok      ok      ok    ok    ok    ok
  equality constraint         ok    ok      ok      ok    ok    ok    ok
  inequality constraint       ok    ok      ok      ok    ok    ok    ok

piece-wise linear ['PWL'] control, gradients obtained by sensitivity equations

                              deterministic         stochastic  hybrid
                              IPOPT FMINCON MISQP   DE    SRES  ACOmi MITS
fixed size of time intervals
  unconstrained               ok
  equality constraint         ok
  inequality constraint       ok
free size of time intervals
  unconstrained
  equality constraint
  inequality constraint

WINDOWS (MATLAB 7.0-7.7 [2008b])  ok    ok      ok      ok    ok    ok    ok
WINDOWS (MATLAB 7.8 and newer)    ok    ok      ok      ok

If the user has no experience with dynamic optimization, we recommend solving the problem first with FMINCON as the NLP solver. If the solver finds a solution but the user is not sure about its quality, he/she should compare it with the solutions obtained with the other deterministic solvers. If different local solvers converge to different solutions, and/or the user suspects that these solutions are local ones, then stochastic and hybrid methods should be used. Stochastic and hybrid solvers are very powerful and robust, but the price to pay is longer computation times. Also, all stochastic and hybrid methods should be run a few times in order to increase the chances of getting near-global solutions.
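The escalation described above can be expressed directly in the user input file. The following fragment is a sketch, reusing only fields from the options table; the solver names and the multistart count are illustrative choices.

```matlab
% First attempt: a local deterministic solver (the default)
data.nlp.solver         = 'FMINCON';

% If multimodality is suspected, switch to a stochastic solver and
% repeat the run a few times to increase the chance of a near-global solution
data.nlp.solver         = 'DE';   % or 'SRES'
data.options.multistart = 5;      % number of multistart runs (1 disables multistart)
```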

4. Options for the DOTcvp modules

Single optimization

It consists of solving the master NLP problem in a single pass (i.e. a single optimization run). For many problems this approach is sufficient. But very frequently, and especially for nonlinear dynamic optimization problems, the user finds (or suspects) that the problem is multimodal (multiple local optima), so it is necessary to use more robust approaches, as outlined below. The settings for this module are presented here.

Hybrid strategy

Hybrid optimization combines a stochastic global method with a deterministic local method. This procedure strikes a compromise between the robustness of global methods and the efficiency of local ones. In any case, this approach will almost always be more costly (in CPU time) than a single optimization using a local method. That is the price to pay for the increased robustness.
function [data] = dotcvp_hybrid_strategy_default(data)

switch data.option.HybridStrategy.method
    case{'stochastic'}
        % first step [a stochastic solver]: please fill basic settings for
        % a stochastic solver, otherwise they will be taken from the user
        % input file

        data.option.HybridStrategy.IVPRelTol    = 1e-005; % IVP relative tolerance level
        data.option.HybridStrategy.IVPAbsTol    = 1e-005; % IVP absolute tolerance level
        data.option.HybridStrategy.NLPTol       = 1e-003; % NLP tolerance level
        data.option.HybridStrategy.NLPSolver    = 'DE';   % choose a solver ['FMINCON'|'IPOPT'|'SRES'|'DE'|'ACOMI'|'MISQP'|'MITS']
        data.option.HybridStrategy.NLPsettings  = 'None'; % insert the file name that contains settings for NLP solver, otherwise 'None'
        data.option.HybridStrategy.MaxIter      = 50;    % maximum number of iterations
        data.option.HybridStrategy.MaxCPUTime   = 60*10;  % maximum CPU time of the optimization (60*10 = 10 minutes)
        data.option.HybridStrategy.intermediate = 'off';  % ['on'|'off'|'silent'] display of the intermediate results

    case{'deterministic'}
        % second step [a deterministic solver]: all settings for a
        % deterministic solver are taken from the user input file, but the
        % user can introduce some new or special settings, which rewrites
        % those from the input file. The name of the structure is the same
        % as the name in the user input file, e.g. data.nlp.RHO, etc.

end
end

Successive re-optimization

Successive re-optimization can be used to speed up convergence for problems where a high discretization level is desired (e.g. those where the control profiles behave wildly). This procedure runs several successive single optimizations, automatically increasing the control discretization after each run.
function [data] = dotcvp_reoptimization_default(data)

data.option.reoptimization.NRO        = 4;      % number of refining optimizations
data.option.reoptimization.Increasing = 3;      % how quickly will the mesh refinement be increased
data.option.reoptimization.NLPtol     = 1e-003; % initial NLP tolerance level
data.option.reoptimization.IVPRelTol  = 1e-005; % initial IVP relative tolerance level
data.option.reoptimization.IVPAbsTol  = 1e-005; % initial IVP absolute tolerance level
data.option.reoptimization.SensAbsTol = 1e-005; % initial absolute tolerance for sensitivity variables

% Note that the NLP and IVP tolerances will be tightened by the mesh
% refinement algorithm until they reach the final tolerances defined by
% the user in the input file.

end

Simulation

The simulation module serves for problem simulation and state-trajectory generation. It is useful to use this module in combination with, e.g., the sbml2dotcvp module, to check the problem input.
function [data] = dotcvp_simulation_default(data)

data.options.simulation.IVPsolver = 'ode23s'; % ['ode45'|'ode23'|'ode23t'|'ode23tb'|'ode113'|'ode15s'|'ode23s']
data.options.simulation.grid      = 'yes';    % ['yes'|'no'] display of the grid in the figure(s)
data.options.simulation.legend    = 'yes';    % ['yes'|'no'] display of the legend in the figure
data.options.simulation.semilogx  = 'no';     % ['yes'|'no'] if the logarithmic (base 10) scale is used for the X-axis
data.options.simulation.output    = 'one';    % ['one'|'all'] how many state figures will be depicted

end
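To run the simulation module on an already defined problem, it should be enough to select it through the action option and, if desired, override the defaults above. A sketch, using only options documented in this manual:

```matlab
data.options.action               = 'simulation'; % run the simulation module only
data.options.simulation.IVPsolver = 'ode15s';     % e.g. pick a stiff MATLAB IVP solver
data.options.simulation.output    = 'all';        % depict all state figures
```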

5. NLP and MINLP options

The following are the default values for the different NLP solvers. The corresponding m-files can be edited by the user in order to change these defaults if needed.

IPOPT

function [data] = dotcvp_ipopt_default(data)

data.options.ipopt.jac_c_constant               = 'no';             % indicates whether all equality constraints are linear ['yes','no']
data.options.ipopt.hessian_approximation        = 'limited-memory'; % indicates what Hessian information is to be used ['exact','limited-memory']
data.options.ipopt.mu_strategy                  = 'adaptive';       % update strategy for barrier parameter ['adaptive','monotone']
data.options.ipopt.nlp_scaling_method           = 'gradient-based'; % select the technique used for scaling the NLP ['none','user-scaling','gradient-based','equilibration-based']
data.options.ipopt.print_options_documentation  = 'no';             % switch to print all algorithmic options ['yes','no']
%data.options.ipopt.print_level                  = 5;               % [print_level] Output verbosity level [2 <= 5 <= 12 ]
data.options.ipopt.acceptable_dual_inf_tol      = data.nlp.NLPtol;  % "Acceptance" threshold for the dual infeasibility [0 < 1e+10 < +inf]
data.options.ipopt.acceptable_constr_viol_tol   = data.nlp.NLPtol;  % "Acceptance" threshold for the constraint violation [0 < 0.01 < +inf]
data.options.ipopt.ma27_pivtol                  = data.nlp.NLPtol;  % pivot tolerance for the linear solvers [0 < 1e-08 < 1]
data.options.ipopt.acceptable_tol               = data.nlp.NLPtol;  % "Acceptable" convergence tolerance (relative) [0 < 1e-06 < +inf]
data.options.ipopt.mu_min                       = data.nlp.NLPtol;  % minimum value for barrier parameter [0 < 1e-11 < +inf]
data.options.ipopt.bound_push                   = data.nlp.NLPtol;  % desired minimum absolute distance from the initial point to bound [0 < 0.01 < +inf]
data.options.ipopt.bound_frac                   = data.nlp.NLPtol;  % desired minimum relative distance from the initial point to bound [0 < 0.01 <= 0.5]
data.options.ipopt.tol                          = data.nlp.NLPtol;  % desired convergence tolerance (relative) [0 < 1e-08 < +inf]
data.options.ipopt.mu_init                      = 10^(-8);          % initial barrier parameter [0 < 0.1 < +inf]
data.options.ipopt.max_iter                     = data.nlp.MaxIter; % maximal number of iterations allowed (Integer, >=0)
data.options.ipopt.derivative_test              = 'none';           % enable derivative checker ['none','first-order','second-order']
data.options.ipopt.check_derivatives_for_naninf = 'no';             % indicates whether it is desired to check for Nan/Inf in derivative matrices ['no','yes']

end

FMINCON

function [data] = dotcvp_fmincon_default(data)

data.options.fmincon.MaxTime         = data.nlp.MaxCPUTime; % maximum CPU time [sec]
data.options.fmincon.GradObj         = 'on';                % objective function gradients defined by the user
data.options.fmincon.GradConstr      = 'on';                % constraints gradients defined by the user
data.options.fmincon.TolFun          = data.nlp.NLPtol;     % termination tolerance on the function value
data.options.fmincon.TolCon          = data.nlp.NLPtol;     % termination tolerance on the constraint violation
data.options.fmincon.TolX            = data.nlp.NLPtol;     % termination tolerance on x
data.options.fmincon.Hessian         = 'off';               % hessian information
data.options.fmincon.DerivativeCheck = 'off';               % compare user-supplied derivatives to FD derivatives
data.options.fmincon.MaxIter         = data.nlp.MaxIter;    % maximum number of iterations
data.options.fmincon.MaxFunEvals     = 10*data.nlp.MaxIter; % maximum number of function evaluations

end

MISQP

function [data] = dotcvp_misqp_default(data)

data.options.misqp.NLPtol  = data.nlp.NLPtol;  % NLP tolerance level
data.options.misqp.itermax = data.nlp.MaxIter; % maximum number of iterations

end

DE

function [data] = dotcvp_de_default(data)

data.options.de.VTR          = -Inf;                      % for minimization "Value To Reach" (stop when objective function < VTR)
data.options.de.F            = 0.7;                       % DE-stepsize F from interval [0, 2] (recommended 0.7/0.8)
data.options.de.CR           = 1.0;                       % crossover probability constant from interval [0, 1]
data.options.de.strategy     = 3;                         % recommended 3 or 8
data.options.de.max_time     = data.nlp.MaxCPUTime;       % maximum CPU time [sec]
data.options.de.NLPtol       = data.nlp.NLPtol;           % NLP tolerance level
data.options.de.itermax      = data.nlp.MaxIter;          % maximum number of iterations (generations)
data.options.de.MaxFunEvals  = inf;                       % maximum function evaluation
data.options.de.intermediate = data.options.intermediate; % display of the intermediate results

end

SRES

function [data] = dotcvp_sres_default(data)

data.options.sres.mm          = 'min';                             % ['max'|'min'] (for maximization or minimization)
data.options.sres.lambda      = 150;                               % population size (number of offspring) (100 to 200)
data.options.sres.G           = 100000000;                         % maximum number of generations
data.options.sres.mu          = round(data.options.sres.lambda/7); % parent number (mu/lambda usually 1/7), before: roundn(lambda/7,0)
data.options.sres.pf          = 0.45;                              % pressure on fitness in [0 0.5] try around 0.45
data.options.sres.varphi      = 1;                                 % expected rate of convergence (usually 1)
data.options.sres.MaxTime     = data.nlp.MaxCPUTime;               % maximum CPU time [sec]
data.options.sres.itermax     = data.nlp.MaxIter;                  % maximum number of iterations
data.options.sres.MaxFunEvals = inf;                               % maximum number of function evaluations
data.options.sres.NLPtol      = data.nlp.NLPtol;                   % NLP tolerance level
data.options.sres.delta       = 0.75;                              % allowed time variation for the time grid adaptation [0 < 0.75 < 1]

end

ACOmi

function [data] = dotcvp_acomi_default(data)

data.options.acomi.report     = 0;                   % create a report file containing all solution information: [1/0] = [Yes/No]
data.options.acomi.acc        = 10^(-3);             % accuracy for both the objective function and the constraints
data.options.acomi.maxeval    = data.nlp.MaxIter;    % maximum number of function evaluations
data.options.acomi.maxtime    = data.nlp.MaxCPUTime; % maximum CPU time [sec]
data.options.acomi.fex        = -inf;                % stopping criterion: if objective function = fex -> stop
data.options.acomi.loc_solver = 1;                   % use a local solver within ACOmi: [1/0] = [Yes/No]
data.options.acomi.loc_type   = 'MISQP';             % select a local solver in ACOMI: ['MISQP']
data.options.acomi.loc_start  = 1;                   % 10-second multistart of the local solver before (!) ACOmi: [1/0] = [Yes/No]
%data.options.acomi.oracle     = -10^(12);            % oracle parameter for penalty function [optional]; with constraints: 10^(12)

end

MITS

function [data] = dotcvp_mits_default(data)

data.options.mits.NLPtol   = data.nlp.NLPtol;     % NLP tolerance level
data.options.mits.max_iter = data.nlp.MaxIter;    % maximum number of iterations
data.options.mits.MaxTime  = data.nlp.MaxCPUTime; % maximum CPU time [sec]
data.options.mits.fex      = -inf;                % stopping criterion: if f = fex --> stop
data.options.mits.locstart = 0;                   % if true: MITS starts with a local solver run from the initial guess

end