%---------------------------------------------------------
%
% Copyright 1997 Marian Stewart Bartlett
% This may be copied for personal or academic use.
% For commercial use, please contact Marian Bartlett
% (marni@salk.edu) for a commercial license.
%
% Image representations by Marian Bartlett. Revised 7/14/03
% ICA code by Tony Bell.
% The ICA method is patented by Bell & Sejnowski, at the Salk Institute.
%
% Please cite Bartlett, M.S. (2001) Face Image Analysis by
% Unsupervised Learning. Boston: Kluwer Academic Publishers.
%
% --------------------------------------------------------


Based on Bartlett, Movellan, & Sejnowski (2002), Face Recognition by
Independent Component Analysis, IEEE Transactions on Neural Networks,
13(6), pp. 1450-1464, and

Bartlett, M.S. (2001) Face Image Analysis by Unsupervised
Learning. Boston: Kluwer Academic Publishers.

This directory contains two MATLAB scripts for finding the ICA
representation of a set of images for recognition:

1. Arch1.m: gets the representation of the train and test images under architecture I
2. Arch2.m: gets the representation of the train and test images under architecture II

Read through the comments of these scripts before attempting to run them.

The above scripts call the following six MATLAB files for running infomax
ICA, written by Tony Bell (http://www.cnl.salk.edu/~tony/). A rough sketch
of the preprocessing performed by items 5 and 6 follows the list.

1. runica.m: the ICA training script, which calls the functions below.
2. sep96.m: the code for one learning pass through the data.
3. sepout.m: optional text output.
4. wchange.m: tracks the size and direction of weight changes.
5. spherex.m: spheres the training matrix x.
6. zeroMn.m: returns a zero-mean form of the matrix X, where each row has
             zero mean. (This one was added by Marian Bartlett.)
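
As promised above, here is a rough sketch of what that preprocessing amounts
to. The factor-of-2 sphering convention is an assumption (Bell & Sejnowski's
usual choice), not something stated in this README, so check spherex.m before
relying on it:

  % Sketch of the zeroMn.m / spherex.m steps (illustration only; the factor
  % of 2 below is an assumption, not taken from this README).
  x  = x - mean(x, 2) * ones(1, size(x, 2));  % zeroMn: subtract each row's mean
  wz = 2 * inv(sqrtm(cov(x')));               % spherex: zero-phase whitening matrix
  x  = wz * x;                                % sphered mixtures; cov(x') is now ~4*I

Note that a w learned on the sphered data must be combined with wz (as w*wz)
to unmix the original mixtures, as the variable list below points out.
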
The following variables are used to calculate ICA (a sketch of how they fit
together follows the list):

sweep:    how many times you've gone through the data
P:        how many timepoints in the data
N:        how many input (mixed) sources there are
M:        how many outputs you have
L:        learning rate
B:        batch-block size (i.e., how many presentations per weight update)
t:        time index of data
sources:  NxP matrix of the N sources you read in
x:        NxP matrix of mixtures
u:        MxP matrix of hopefully unmixed sources
a:        NxN mixing matrix
w:        MxN unmixing matrix (actually w*wz is the full unmixing matrix
          in this case)
wz:       zero-phase whitening: a matrix used to remove correlations
          between the mixtures x; useful as a preprocessing step
noblocks: how many blocks in a sweep
oldw:     value of w before the last sweep
delta:    w - oldw
olddelta: value of delta before the last sweep
angle:    angle in degrees between delta and olddelta
change:   squared length of the delta vector
Id:       an identity matrix
permute:  a vector of length P used to scramble the time order of the
          sources for stationarity during learning
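
To show how these variables fit together, here is a sketch of one sweep of
the logistic-infomax learning rule. The weight update is the standard Bell &
Sejnowski natural-gradient rule; the loop details are an illustration, and
sep96.m remains the authoritative version:

  % Illustration of one sweep in terms of the variables above (see sep96.m
  % for the real code). Assumes x is already sphered, Id is the MxM identity,
  % and noblocks*B <= P.
  perm = randperm(P);                        % plays the role of 'permute'
  for t = 1:B:(noblocks * B)
      xb = x(:, perm(t:t+B-1));              % one batch-block of data
      u  = w * xb;                           % current (hopefully unmixed) outputs
      y  = 1 ./ (1 + exp(-u));               % logistic nonlinearity
      w  = w + L * (B * Id + (1 - 2 * y) * u') * w;  % natural-gradient update
  end
  sweep = sweep + 1;
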
INITIAL w ADVICE: the identity matrix is a good choice, since, for prewhitened
data, there will be no distracting initial correlations, and the output
variances will be nicely scaled so that <uu^T> = 4I, the right size to fit the
logistic function (more or less).
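
For example (assuming the sphering step scales the data so that cov(x') is
roughly 4I, as in the sketch above), starting from the identity gives outputs
at that same scale:

  % Quick check of the claim above; assumes M == N and that x has been
  % sphered to covariance ~4*I.
  w = eye(N);           % initial unmixing matrix: the identity
  u = w * x;            % with w = I the outputs are just the sphered inputs
  C = (u * u') / P;     % sample estimate of <u u^T>; roughly 4*eye(N)
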
LEARNING RATE ADVICE:
N=2:    L=0.01 works.
N=8-10: L=0.001 is stable. Run this until the 'change' report settles down,
        then anneal a little: L=0.0005, 0.0002, 0.0001, etc., a few passes
        (= a few 10,000s of data vectors) at each rate.
N>100:  L=0.001 works well on sphered image data.
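
A hypothetical annealing schedule following this advice might look like the
loop below. The pass counts are placeholders, and in the actual code the
inner pass is what runica.m and sep96.m perform:

  % Hypothetical annealing schedule (placeholder pass counts, not taken from
  % runica.m).
  for L = [0.001 0.0005 0.0002 0.0001]
      for pass = 1:5
          % ... one learning pass through the data at learning rate L ...
          % (in the original code this is the job of sep96.m)
      end
  end
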