clc
echo on

% A recent and developing extension in YALMIP is support
% for nonlinear operators such as min, max, norm and more.
%
% Although nonlinear, and often non-differentiable, the resulting
% optimization problem is in many cases still convex, and
% can be modelled using suitable additional variables and constraints.
%
% If these operators are used, YALMIP will derive a suitable
% convex model and solve the resulting problem. It may also happen
% that YALMIP fails to build a convex model, since the rules used to
% detect convexity are only sufficient, not necessary.
%
% These extended operators should only be used if you know how
% to model them manually, why it can be done and when it can be done.
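%
% For instance (an added illustration, not part of the original demo),
% minimizing abs(x-1) can be modelled manually by introducing an
% epigraph variable t and two linear constraints, which is essentially
% what YALMIP does behind the scenes:
%
%   sdpvar x t
%   solvesdp(set(-t < x-1 < t),t)  % minimizes abs(x-1)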

pause % Strike any key to continue.
yalmip('clear')
clc

% To begin with, define some scalars
sdpvar x y z

% Nonlinear expressions are easily defined
p = min(x+y+norm([x;y],1),x)-max(min(x,y),max(x,y));
pause

% The result is a linear variable, but it is special.
% This can be seen when it is displayed (note the "derived")
p
pause

% These expressions can be used as any other expression
% in YALMIP. The difference shows when optimization problems
% are solved: YALMIP will start by trying to expand the
% definitions of the derived variables, and tries to maintain
% convexity while doing so.
pause

% Let us solve the linear regression again (from DEMO 2)
a = [1 2 3 4 5 6];
t = (0:0.2:2*pi)';
x = [sin(t) sin(2*t) sin(3*t) sin(4*t) sin(5*t) sin(6*t)];
y = x*a'+(-4+8*rand(length(x),1));
a_hat = sdpvar(1,6);
residuals = y-x*a_hat';
pause % Strike any key to continue.


% Minimize L1 error (uses the ABS operator)
solvesdp([],sum(abs(residuals)));
double(a_hat)
pause

% Minimize Linf error (uses both the MAX and ABS operators)
solvesdp([],max(abs(residuals)));
double(a_hat)
pause
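
% The same Linf fit can also be modelled manually (an added sketch,
% not part of the original demo): introduce an epigraph variable
% t_inf and bound every residual by it. This is essentially the
% model YALMIP derives from max(abs(.)) automatically.
t_inf = sdpvar(1,1);
solvesdp(set(-t_inf < residuals < t_inf),t_inf);
double(a_hat)
pause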

% Minimize L1 error even more easily (uses the NORM operator)
% NOTE : This is much faster than explicitly
% introducing the absolute values.
solvesdp([],norm(residuals,1));
double(a_hat)
pause

% Minimize Linf error even more easily (uses the NORM operator)
solvesdp([],norm(residuals,inf));
double(a_hat)
pause

% Regularized solution!
solvesdp([],1e-2*norm(a_hat)+norm(residuals,inf));
double(a_hat)
pause

% Minimize Linf error subject to a performance constraint on the L2 error
%
% First, get the best possible L2 cost
solvesdp([],norm(residuals));
optL2 = double(norm(residuals));

% Now optimize Linf subject to a 20% performance deterioration
% constraint on the L2 error
F = set(norm(residuals) < optL2*1.2);
obj = norm(residuals,inf);
pause

solvesdp(F,obj);

double(a_hat)
pause

% Well, you get the picture...
%
% Here is an example where the convexity check (correctly) fails
solvesdp(set(norm(residuals) < norm(residuals,1)),norm(residuals,inf))
pause

% ...and here is an example where YALMIP fails to prove convexity
% (even though the problem actually is convex)
solvesdp([],norm(max(0,residuals)))
pause
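
% YALMIP cannot prove convexity above, but the model can be built
% manually (a sketch added for illustration): bound max(0,residuals)
% elementwise by a new variable e. Since the norm is monotone on
% nonnegative vectors, minimizing norm(e) solves the same problem.
e = sdpvar(length(residuals),1);
solvesdp(set(e > residuals) + set(e > 0),norm(e));
double(a_hat)
pause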

% The rules for convexity preserving operations are currently very simple,
% and will most likely be improved in a future version.
%
% Still, rather complicated constructions are possible.
sdpvar x y z
F = set(max(1,x)+max(y^2,z)<3)+set(max(1,-min(x,y))<5)+set(norm([x;y],2)<z);
sol = solvesdp(F,max(x,z)-min(y,z)-z);
pause

% The nonlinear operators currently supported are
%
% ABS      : Absolute value of a matrix
% MIN      : Minimum of column values
% MAX      : Maximum of column values
% SUMK     : Sum of the k largest (eigen-) values
% SUMABSK  : Sum of the k largest (by magnitude) (eigen-) values
% GEOMEAN2 : (Almost) the geometric mean of (eigen-) values (used for determinant maximization)
%
% Adding new operators is rather straightforward and is
% described in the HTML manual.
pause
echo off