Multiparametric programming


This example requires the Multi-Parametric Toolbox (MPT).

YALMIP can be used to calculate explicit solutions of linear and quadratic programs by interfacing the Multi-Parametric Toolbox (MPT). This chapter assumes that the reader is familiar with parametric programming and the basics of MPT.

Standard example

Consider the following simple quadratic program in the decision variable z, solved for a particular value of the parameter x.

A = randn(15,3);
b = rand(15,1);
E = randn(15,2);

z = sdpvar(3,1);
x = [0.1;0.2];

F = set(A*z <= b+E*x);
obj = (z-1)'*(z-1);

sol = solvesdp(F,obj);
double(z)
ans =
   -0.1454
   -0.1789
   -0.0388

To obtain the parametric solution with respect to x, we call the function solvemp.m and tell the solver that x is a parametric variable. Moreover, we must add constraints that bound the region where we want to compute the parametric solution, the so-called exploration set.

x = sdpvar(2,1);
F = set(A*z <= b+E*x) + set(-1 <= x <= 1)
sol = solvemp(F,obj,[],x);

The first output is an MPT structure. In accordance with MPT syntax, the optimizer for the parametric value x = [0.1;0.2] is given by the following code.

xx = [0.1;0.2];
[i,j] = isinside(sol{1}.Pn,xx)
sol{1}.Bi{j}*xx + sol{1}.Ci{j}
ans =
   -0.1454
   -0.1789
   -0.0388

By using more outputs from solvemp, it is possible to simplify things considerably.

[sol,diagnostics,Z,Valuefunction,Optimizer] = solvemp(F,obj,[],x);

The function now returns solutions using YALMIP's nonlinear operator framework. To retrieve the solution for a particular parameter, simply use assign and double in standard fashion.

assign(x,[0.1;0.2]);
double(Optimizer)
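Since [0.1;0.2] is the same parameter value as before, this reproduces the solution computed earlier.

ans =
   -0.1454
   -0.1789
   -0.0388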

Some of the plotting capabilities of MPT are overloaded for the piecewise functions.

plot(Valuefunction)

More about the output later.

MPC example

Define numerical data for a linear system, prediction matrices, and variables for the current state x and the future control sequence U, for an MPC problem with horizon N = 5.

N = 5;
A = [2 -1;1 0];
B = [1;0];
C = [0.5 0.5];
[H,S] = create_CHS(A,B,C,N);
x = sdpvar(2,1);
U = sdpvar(N,1);

The future output predictions are linear in the current state and the control sequence.

Y = H*x+S*U;

We wish to minimize a quadratic cost, expressing a compromise between keeping the inputs and the outputs small.

objective = Y'*Y+U'*U;

The input variable has a hard constraint, and so does the output at the terminal state.

F = set(1 > U > -1);
F = F+set(1 > Y(N) > -1); 

We seek the explicit solution U(x) over the exploration set |x| ≤ 5.

F = F + set(5 > x > -5);

The explicit solution U(x) is obtained by calling solvemp with the parametric variable x as the fourth argument. Additionally, since we are only interested in the first element of the solution U, we use a fifth input to communicate this.

[sol,diagnostics,Z,Valuefunction,Optimizer] = solvemp(F,objective,[],x,U(1));

The explicit solution can, e.g., be plotted (see the MPT manual for information on the solution structure).

sol{1}

ans = 

 Pn: [1x13 polytope]
 Fi: {1x13 cell}
 Gi: {1x13 cell}
 activeConstraints: {1x13 cell}
 Phard: [1x1 polytope]

figure
plot(sol{1}.Pn)
figure
plot(Valuefunction)
figure
plot(Optimizer)

The following piece of code calculates the explicit MPC controller with a worst-case L∞ cost instead (i.e., the worst-case control/output peak along the predictions).

A = [2 -1;1 0];
B = [1;0];
C = [0.5 0.5];
N = 5;
[H,S] = create_CHS(A,B,C,N);
t = sdpvar(1,1);
x = sdpvar(2,1);
U = sdpvar(N,1);
Y = H*x+S*U;
F = set(-1 < U < 1);
F = F+set(-1 < Y(N) < 1); 
F = F + set(-5 < x < 5);
[sol,diagnostics,Z,Valuefunction,Optimizer] = solvemp(F,norm([Y;U],inf),[],x);

This example allows us to introduce the overloaded projection functionality in MPT. In MPC, only the first input U(1) is of interest, so we can project the whole problem onto the parametric variable x, the objective-related variable t, and the variable of interest, U(1). The following piece of code projects the problem onto this reduced set of variables and solves the multi-parametric LP in the reduced space (not necessarily an efficient approach, though: projection is very expensive, and we can easily end up with a problem with many constraints in the reduced variables).

A = [2 -1;1 0];
B = [1;0];
C = [0.5 0.5];
N = 5;
[H,S] = create_CHS(A,B,C,N);
t = sdpvar(1,1);
x = sdpvar(2,1);
U = sdpvar(N,1);
Y = H*x+S*U;
F = set(-1 < U < 1);
F = F+set(-1 < Y(N) < 1); 
F = F + set(-5 < x < 5);
F = F + set(-t < [Y;U] < t);

F = projection(F,[x;t;U(1)]);

[sol_p,diagnostics_p,Zsol_p,Valuefunction_p,Optimizer_p] = solvemp(F,t,[],x,[U(1);t]);

To check that the two solutions actually coincide, we compare the value functions and conclude that they are essentially the same.

[pass,tol] = mpt_ispwabigger(sol{1},sol_p{1})
pass =
     0
tol =
  1.3484e-014

figure;
plot(Valuefunction);
hold on
plot(Valuefunction_p);

Since the objective function is t, the optimal t should coincide with the value function. The optimal t in the projected problem was requested as the second element of the optimizer.

plot(Optimizer_p(2));

In the code above, we used the pre-constructed function create_CHS.m, which generates numerical data defining the predictions from the current state and the future input signals. The problem can, however, be solved without this function. We will show two alternative methods.
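For reference, a minimal sketch of what such a prediction-matrix generator might compute is shown below. The function name and implementation are illustrative, not the actual create_CHS.m (which, as noted below, may use a slightly different indexing convention); the sketch assumes a single input and a single output.

function [H,S] = create_CHS_sketch(A,B,C,N)
% Illustrative sketch: builds H and S such that Y = H*x + S*U,
% where Y(i) = C*x(k+i) is predicted from the current state x
% and the future inputs U (assumes scalar input and output).
H = zeros(N,size(A,1));
S = zeros(N,N);
for i = 1:N
    H(i,:) = C*A^i;              % free response after i steps
    for j = 1:i
        S(i,j) = C*A^(i-j)*B;    % effect of input j on output i
    end
end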

The first approach explicitly generates the future states from the given current state and future inputs (note that the solution is not exactly the same as above due to a small difference in indexing logic).

x  = sdpvar(2,1);
U  = sdpvar(N,1);
x0 = x; % Save parametric state
F  =  set(-5 < x0 < 5); % Exploration space
objective = 0;
for k = 1:N    

    % Feasible region
    F = F + set(-1 < U(k) < 1);
    % Add stage cost to total cost
    objective = objective + (C*x)'*(C*x) + U(k)'*U(k);

    % Explicit state update
    x = A*x + B*U(k);  
end
F = F + set(-1 < C*x < 1); % Terminal constraint

[sol,diagnostics,Z,Valuefunction,Optimizer] = solvemp(F,objective,[],x0,U(1));

The second method introduces new decision variables for the future states and connects these using equality constraints.

x  = sdpvar(2,N+1);
U  = sdpvar(N,1);
F  = set(-5 < x(:,1) < 5); % Exploration space
objective = 0;
for k = 1:N
    
    % Feasible region
    F = F + set(-1 < U(k) < 1);
  
    % Add stage cost to total cost
    objective = objective + (C*x(:,k))'*(C*x(:,k)) + U(k)'*U(k);
    % Implicit state update
    F = F + set(x(:,k+1) == A*x(:,k) + B*U(k));  
end
F = F + set(-1 < C*x(:,end) < 1); % Terminal constraint

[sol,diagnostics,Z,Valuefunction,Optimizer] = solvemp(F,objective,[],x(:,1),U(1));

This second method is less efficient, since it involves many more decision variables. However, in some cases it is the most convenient way to model a system (see the PWA examples below for similar approaches), and the additional decision variables are removed anyway in a presolve step that YALMIP applies before calling the multi-parametric solver.

Parametric programs with binary variables and equality constraints

YALMIP extends the parametric algorithms in MPT by adding a layer to enable binary variables and equality constraints. We can use this to find explicit solutions to, e.g., predictive control of PWA (piecewise affine) systems.

Let us find the explicit solution to a predictive control problem where the gain of the system depends on the sign of the first state. This will be a pretty advanced example, so let us start slowly by defining some data.

yalmip('clear')
clear all
% Model data
A = [2 -1;1 0];
B1 = [1;0];  % Small gain for x(1) > 0 
B2 = [pi;0]; % Larger gain for x(1) < 0 
C = [0.5 0.5];
nx = 2; % Number of states
nu = 1; % Number of inputs

% Prediction horizon
N = 4;

To simplify the code and the notation, we create a bunch of state and control vectors in cell arrays.

% States x(k), ..., x(k+N-1)
x = sdpvar(repmat(nx,1,N),repmat(1,1,N));
% Inputs u(k), ..., u(k+N-1) (last one not used)
u = sdpvar(repmat(nu,1,N),repmat(1,1,N));
% Binaries for PWA selection
d = binvar(repmat(2,1,N),repmat(1,1,N));

We now run a loop to add constraints on all states and inputs. Note the use of binary variables to define the PWA dynamics, and the recommended use of bounds to improve the big-M relaxations.

F   = set([]);
obj = 0;

for k = N-1:-1:1    

    % Strengthen big-M (improves numerics)
    bounds(x{k},-5,5);
    bounds(u{k},-1,1);
    bounds(x{k+1},-5,5);
    
    % Feasible region
    F = F + set(-1 < u{k}     < 1);
    F = F + set(-1 < C*x{k}   < 1);
    F = F + set(-5 < x{k}     < 5);
    F = F + set(-1 < C*x{k+1} < 1);
    F = F + set(-5 < x{k+1}   < 5);
    % PWA Dynamics
    F = F + set(implies(d{k}(1),x{k+1} == (A*x{k}+B1*u{k})));
    F = F + set(implies(d{k}(2),x{k+1} == (A*x{k}+B2*u{k})));
    F = F + set(implies(d{k}(1),x{k}(1) > 0));
    F = F + set(implies(d{k}(2),x{k}(1) < 0));

    % It is EXTREMELY important to add as many
    % constraints as possible to the binary variables
    F = F + set(sum(d{k}) == 1);
  
    % Add stage cost to total cost
    obj = obj + norm(x{k},1) + norm(u{k},1);

end

The parametric variable here is the current state x{1}. In this optimization problem, there are a lot of variables that we have no interest in. To tell YALMIP that we only want the optimizer for the first input u{1}, we use a fifth input argument.

[sol,diagnostics,Z,Valuefunction,Optimizer] = solvemp(F,obj,[],x{1},u{1});

To obtain the optimal control input for a specific state, we use assign and double as usual.

assign(x{1},[-1;1]);
double(Optimizer)
ans =
    0.9549

The optimal cost at this state is available in the value function.

double(Valuefunction)
ans =
    4.2732

To convince ourselves that we have a correct parametric solution, let us compare it to the solution obtained by solving the problem for this specific state.

sol = solvesdp(F+set(x{1}==[-1;1]),obj);
double(u{1})
ans =
    0.9549
double(obj)
ans =
    4.2732

Dynamic programming with LTI systems

The capabilities in YALMIP to work with piecewise functions and parametric programs enable easy coding of dynamic programming algorithms. The value function of a parametric linear program, with respect to the parametric variable, is a convex PWA function, and this is the function returned in the fourth output. YALMIP creates this function internally, saves information about convexity etc., and uses it as any other nonlinear operator (see more details below). For parametric linear programs with binary variables, the value function is no longer convex, but a so-called overlapping PWA function. This means that, at each point, it is defined as the minimum of a set of convex PWA functions. This information is also handled transparently in YALMIP; it is simply another type of nonlinear operator. The main difference between the two function classes is that the second class requires the introduction of binary variables when used.
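Schematically, an overlapping PWA function f can be written as f(x) = min_i f_i(x), where each f_i(x) = max_j (a_ij'x + b_ij) is itself a convex PWA function; it is the outer minimum that forces the introduction of binary variables when f appears in constraints or objectives.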

Note that the algorithms described in the following sections are mainly intended for problems with (piecewise) linear objectives. Dynamic programming with quadratic objective functions gives rise to problems that are much harder to solve, although it is supported.

To illustrate how easy it is to work with these PWA functions, we solve a predictive control problem using dynamic programming, instead of setting up the whole problem in one shot as we did above. As a first example, we solve a standard linear predictive control problem. To fully understand this example, you should be familiar with predictive control, dynamic programming and parametric optimization.

yalmip('clear')
clear all
% Model data
A = [2 -1;1 0];
B = [1;0];
C = [0.5 0.5];
nx = 2; % Number of states
nu = 1; % Number of inputs

% Prediction horizon
N = 4;
% States x(k), ..., x(k+N-1)
x = sdpvar(repmat(nx,1,N),repmat(1,1,N));
% Inputs u(k), ..., u(k+N-1) (last one not used)
u = sdpvar(repmat(nu,1,N),repmat(1,1,N));

Now, instead of setting up the problem for the whole prediction horizon, we only set it up for one step, solve the problem parametrically, take one step back, and perform a standard dynamic programming value iteration.

J{N} = 0;

for k = N-1:-1:1    
    
    % Feasible region
    F =     set(-1 < u{k}     < 1);
    F = F + set(-1 < C*x{k}   < 1);
    F = F + set(-5 < x{k}     < 5);
    F = F + set(-1 < C*x{k+1} < 1);
    F = F + set(-5 < x{k+1}   < 5);   
    % Dynamics
    F = F + set(x{k+1} == A*x{k}+B*u{k});
    % Cost in value iteration
    obj = norm(x{k},1) + norm(u{k},1) + J{k+1};

    % Solve one-step problem    
    [sol{k},diagnost{k},Uz{k},J{k},Optimizer{k}] = solvemp(F,obj,[],x{k},u{k});
end

Notice the minor changes needed compared to the one-shot solution. It is important to understand that the value function at step k is a function of x{k}; hence, when it is used at step k-1, it penalizes the predicted state. Note that YALMIP automatically keeps track of convexity of the value function. Hence, for this example, no binary variables are introduced along the solution process (this is why we can safely omit the use of bounds).

To study the development of the value function, we can easily plot them.

for k = N-1:-1:1
 plot(J{k});hold on
end

The optimal control input can also be plotted.

clf;plot(Optimizer{1})

Of course, you can do the same thing by working directly with the numerical MPT data.

for k = N-1:-1:1
 mpt_plotPWA(sol{k}{1}.Pn,sol{k}{1}.Bi,sol{k}{1}.Ci);hold on
end
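The explicit controller can also be used to simulate the closed loop, by evaluating the first-step optimizer at the current state in every sample. A minimal sketch (the initial state is chosen arbitrarily here and assumed to keep the trajectory inside the explored feasible region):

xk = [0.3;-0.3]; % assumed feasible initial state
for t = 1:10
    assign(x{1},xk);           % set the parametric variable
    uk = double(Optimizer{1}); % evaluate the explicit controller
    xk = A*xk + B*uk;          % propagate the plant model
end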

Why not solve the problem with a worst-case L∞ cost! Notice the non-additive value iteration. Although this might look complicated, with its mix of maximums, norms and piecewise value functions, the formulation fits perfectly into the nonlinear operator framework in YALMIP.

J{N} = 0;

for k = N-1:-1:1    
    
    % Feasible region
    F =     set(-1 < u{k}     < 1);
    F = F + set(-1 < C*x{k}   < 1);
    F = F + set(-5 < x{k}     < 5);
    F = F + set(-1 < C*x{k+1} < 1);
    F = F + set(-5 < x{k+1}   < 5);   
    % Dynamics
    F = F + set(x{k+1} == A*x{k}+B*u{k});
    % Cost in value iteration
    obj = max(norm([x{k};u{k}],inf),J{k+1});

    % Solve one-step problem    
    [sol{k},diagnost{k},Uz{k},J{k},Optimizer{k}] = solvemp(F,obj,[],x{k},u{k});
end

Dynamic programming with PWA systems

As mentioned above, the difference between the value function from a parametric LP and that from a parametric LP with binary variables is that convexity of the value function is no longer guaranteed. In practice, this means that (additional) binary variables are required when the value function is used in an optimization problem. This is done automatically in YALMIP through the nonlinear operator framework, but it is extremely important to realize that the value function from a binary problem is much more complex than the one resulting from a standard parametric problem. Nevertheless, working with the objects is easy, as the following example illustrates. In this case, we solve the dynamic programming problem for our PWA system.

yalmip('clear')
clear all
% Model data
A = [2 -1;1 0];
B1 = [1;0];
B2 = [pi;0]; % Larger gain for negative first state
C = [0.5 0.5];
nx = 2; % Number of states
nu = 1; % Number of inputs

% Prediction horizon
N = 4;
% States x(k), ..., x(k+N-1)
x = sdpvar(repmat(nx,1,N),repmat(1,1,N));
% Inputs u(k), ..., u(k+N-1) (last one not used)
u = sdpvar(repmat(nu,1,N),repmat(1,1,N));
% Binaries for PWA selection
d = binvar(repmat(2,1,N),repmat(1,1,N));

Just as in the LTI case, we set up the problem for the one step case, and use the value function from the previous iteration.

J{N} = 0;

for k = N-1:-1:1    

    % Strengthen big-M (improves numerics)
    bounds(x{k},-5,5);
    bounds(u{k},-1,1);
    bounds(x{k+1},-5,5);
    
    % Feasible region
    F =     set(-1 < u{k}     < 1);
    F = F + set(-1 < C*x{k}   < 1);
    F = F + set(-5 < x{k}     < 5);
    F = F + set(-1 < C*x{k+1} < 1);
    F = F + set(-5 < x{k+1}   < 5);  
    % PWA Dynamics
    F = F + set(implies(d{k}(1),x{k+1} == (A*x{k}+B1*u{k})));
    F = F + set(implies(d{k}(2),x{k+1} == (A*x{k}+B2*u{k})));
    F = F + set(implies(d{k}(1),x{k}(1) > 0));
    F = F + set(implies(d{k}(2),x{k}(1) < 0));
    F = F + set(sum(d{k}) == 1);
    % Cost in value iteration
    obj = norm(x{k},1) + norm(u{k},1) + J{k+1};
    % Solve one-step problem
    [sol{k},diagnost{k},Uz{k},J{k},Optimizer{k}] = solvemp(F,obj,[],x{k},u{k});
end

Plotting the final value function clearly reveals the nonconvexity.

clf;plot(J{1})
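Just as before, the value function can be evaluated using assign and double. Since the recursion solves the same finite-horizon problem as the one-shot PWA formulation above, the cost at x = [-1;1] should, as a consistency check, reproduce the value 4.2732 computed there (up to solver tolerances).

assign(x{1},[-1;1]);
double(J{1})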

Behind the scenes and advanced use

The first thing that might seem a bit unusual to the advanced user is the piecewise functions that YALMIP returns in the fourth and fifth outputs from solvemp. In principle, they are specialized pwf objects. To create a PWA value function after solving a multi-parametric LP, the following command is used.

Valuefunction = pwf(sol,x,'convex')

The pwf command recognizes the MPT solution structure and creates a PWA function based on the fields Pn, Bi and Ci. The dedicated nonlinear operator is implemented in the file pwa_yalmip.m. The nonlinear operator exploits the fact that the PWA function is convex and implements an efficient epigraph representation. In case the PWA function is used in a non-convex fashion (i.e., the automatic convexity propagation fails), a MILP implementation is also available.
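Concretely, for a convex PWA function f(x) = max_i (a_i'x + b_i), the epigraph representation introduces a scalar t together with the linear constraints t ≥ a_i'x + b_i for all i, and uses t in place of f(x); as long as f is being minimized, no binary variables are needed.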

If the field Ai is non-empty (i.e., the solution was obtained from a multi-parametric QP), a corresponding PWQ function is created (pwq_yalmip.m).

To create a PWA function representing the optimizer, two things have to be changed. To begin with, YALMIP searches for the Bi and Ci fields, but since we want to create a PWA function based on the Fi and Gi fields, the field names have to be changed. Secondly, the piecewise optimizer is typically not convex, so a general PWA function is created instead (requiring one binary variable per region if the variable is actually used later in an optimization problem).

sol.Ai = cell(1,length(sol.Ai));
sol.Bi = sol.Fi;
sol.Ci = sol.Gi;
Optimizer = pwf(sol,x,'general')

A third important case is when the solution structure returned from solvemp is a cell array with several MPT structures. This means that a multi-parametric problem with binary variables was solved, and the different cells represent overlapping solutions. One way to get rid of the overlaps is to use the MPT command mpt_removeOverlaps.m and create a PWA function based on the result. Since the resulting PWA function is typically non-convex, we must create a general function.

sol_single = mpt_removeOverlaps(sol);
Valuefunction = pwf(sol_single,x,'general')

This is only recommended if you just intend to plot or investigate the value function. If you want to continue computing with the result, i.e., use it in another optimization problem as in the dynamic programming examples above, it is typically recommended to keep the overlapping solutions. To model a general PWA function, one binary variable per region is needed. However, by using a dedicated nonlinear operator for overlapping convex PWA functions, only one binary variable per convex PWA function is needed.

Valuefunction = pwf(sol,x,'overlapping')

As we have seen in the examples above, the PWA and PWQ functions can be plotted. This is currently nothing but a simple hack. Normally, when you apply the plot command, the corresponding double values are plotted. However, if the input is a simple scalar PWA or PWQ variable, the underlying MPT structures are extracted and the plot commands in MPT are called.