%---------------------------------------------------------
%
% Copyright 1997 Marian Stewart Bartlett
% This may be copied for personal or academic use.
% For commercial use, please contact Marian Bartlett
% (marni@salk.edu) for a commercial license.
%
% Image representations by Marian Bartlett. Revised 7/14/03
% ICA code by Tony Bell.
% The ICA method is patented by Bell & Sejnowski, at the Salk Institute.
%
% Please cite Bartlett, M.S. (2001) Face Image Analysis by
% Unsupervised Learning. Boston: Kluwer Academic Publishers.
%
% --------------------------------------------------------


Based on Bartlett, Movellan, & Sejnowski (2002). Face Recognition by
Independent Component Analysis. IEEE Transactions on Neural Networks,
13(6), 1450-1464, and

Bartlett, M.S. (2001) Face Image Analysis by Unsupervised
Learning. Boston: Kluwer Academic Publishers.

This directory contains two MATLAB scripts for finding the ICA representation
of a set of images for recognition:

1. Arch1.m: gets the representation of train and test images under Architecture I.
2. Arch2.m: gets the representation of train and test images under Architecture II.

Read through the comments in these scripts before attempting to run them.
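Conceptually, the two architectures differ in which dimension of the image matrix is treated as the random variable before ICA is applied. The following NumPy sketch shows only that orientation difference (the function name `ica_inputs` is hypothetical; the actual Arch1.m and Arch2.m also perform PCA dimensionality reduction and other steps described in their comments):

```python
import numpy as np

def ica_inputs(images):
    """Schematic of the two ICA architectures from Bartlett, Movellan, &
    Sejnowski (2002). 'images' is an (n_images x n_pixels) matrix, one
    face image per row.

    Architecture I treats images as the observed mixtures, so ICA yields
    spatially independent basis images. Architecture II transposes the
    matrix so pixels are the observed mixtures, yielding statistically
    independent coefficients (a factorial code).
    """
    arch1_x = images        # rows = images -> independent basis images
    arch2_x = images.T      # rows = pixels -> independent coefficients
    return arch1_x, arch2_x
```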

The above scripts call the following six MATLAB files for running infomax ICA,
written by Tony Bell (http://www.cnl.salk.edu/~tony/):

1. runica.m: the ICA training script, which calls the functions below.
2. sep96.m: the code for one learning pass through the data.
3. sepout.m: optional text output.
4. wchange.m: tracks the size and direction of weight changes.
5. spherex.m: spheres the training matrix x.
6. zeroMn.m: returns a zero-mean form of the matrix X, where each row has
   zero mean. (This one was added by Marian Bartlett.)
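The preprocessing done by zeroMn.m and spherex.m can be sketched in NumPy. This is an illustrative translation, not the original MATLAB; it assumes the whitening matrix is computed as 2*inv(sqrtm(cov(x'))), which is how Bell's code scales the sphered outputs so each has variance 4:

```python
import numpy as np

def zero_mn(x):
    """Return a zero-mean form of x: subtract each row's mean (cf. zeroMn.m)."""
    return x - x.mean(axis=1, keepdims=True)

def spherex(x):
    """Sphere the data (cf. spherex.m): remove correlations between the
    rows of x and rescale them. Returns the sphered data and the
    zero-phase whitening matrix wz = 2 * inv(sqrtm(cov(x')))."""
    x = zero_mn(x)
    n, p = x.shape
    cov = (x @ x.T) / (p - 1)            # covariance of the mixtures
    d, e = np.linalg.eigh(cov)           # eigendecomposition (symmetric matrix)
    inv_sqrt = e @ np.diag(1.0 / np.sqrt(d)) @ e.T   # inverse matrix square root
    wz = 2.0 * inv_sqrt                  # factor of 2 gives output variance 4
    return wz @ x, wz
```

After this step the sphered data satisfy <xx^T> = 4I, which is the scaling the initial-w advice below refers to.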

The following variables are used to calculate ICA:

sweep:    how many times you've gone through the data
P:        how many timepoints in the data
N:        how many input (mixed) sources there are
M:        how many outputs you have
L:        learning rate
B:        batch-block size (i.e., how many presentations per weight update)
t:        time index of data
sources:  NxP matrix of the N sources you read in
x:        NxP matrix of mixtures
u:        MxP matrix of hopefully unmixed sources
a:        NxN mixing matrix
w:        MxN unmixing matrix (actually w*wz is the full unmixing matrix
          in this case)
wz:       zero-phase whitening: a matrix used to remove correlations
          between the mixtures x. Useful as a preprocessing step.
noblocks: how many blocks in a sweep
oldw:     value of w before the last sweep
delta:    w-oldw
olddelta: value of delta before the last sweep
angle:    angle in degrees between delta and olddelta
change:   squared length of delta vector
Id:       an identity matrix
permute:  a vector of length P used to scramble the time order of the
          sources for stationarity during learning
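In these terms, one batch update of the infomax learning rule (as in sep96.m) can be sketched in NumPy. This is a translation for illustration, assuming the logistic-nonlinearity form of Bell's rule; `infomax_batch_update` is a hypothetical name:

```python
import numpy as np

def infomax_batch_update(w, x_batch, L):
    """One infomax weight update over a batch of B sphered data vectors.

    Implements the natural-gradient rule used in sep96.m:
        u  = w * x
        w <- w + L * (B*Id + (1 - 2*logistic(u)) * u') * w
    where logistic(u) = 1 / (1 + exp(-u)).
    """
    M, B = w.shape[0], x_batch.shape[1]
    u = w @ x_batch                        # MxB: hopefully unmixed sources
    y = 1.0 / (1.0 + np.exp(-u))           # logistic nonlinearity
    Id = np.eye(M)
    return w + L * (B * Id + (1.0 - 2.0 * y) @ u.T) @ w
```

Note that when the outputs are at the rule's fixed point, the anti-Hebbian term (1 - 2*logistic(u)) * u' cancels the B*Id term on average, so the weights stop growing.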

INITIAL w ADVICE: the identity matrix is a good choice since, for prewhitened
data, there will be no distracting initial correlations, and the output
variances will be nicely scaled so that <uu^T> = 4I, the right size to fit the
logistic fn (more or less).

LEARNING RATE ADVICE:
N=2:     L=0.01 works.
N=8-10:  L=0.001 is stable. Run this till the 'change' report settles
         down, then anneal a little: L=0.0005, 0.0002, 0.0001, etc., a few
         passes (= a few 10,000's of data vectors) each.
N>100:   L=0.001 works well on sphered image data.
---|