% *  This code was used in the following articles:
% *  [1] Learning 3-D Scene Structure from a Single Still Image,
% *      Ashutosh Saxena, Min Sun, Andrew Y. Ng,
% *      In ICCV workshop on 3D Representation for Recognition (3dRR-07), 2007.
% *      (best paper)
% *  [2] 3-D Reconstruction from Sparse Views using Monocular Vision,
% *      Ashutosh Saxena, Min Sun, Andrew Y. Ng,
% *      In ICCV workshop on Virtual Representations and Modeling
% *      of Large-scale environments (VRML), 2007.
% *  [3] 3-D Depth Reconstruction from a Single Still Image,
% *      Ashutosh Saxena, Sung H. Chung, Andrew Y. Ng.
% *      International Journal of Computer Vision (IJCV), Aug 2007.
% *  [6] Learning Depth from Single Monocular Images,
% *      Ashutosh Saxena, Sung H. Chung, Andrew Y. Ng.
% *      In Neural Information Processing Systems (NIPS) 18, 2005.
% *
% *  These articles are available at:
% *  http://make3d.stanford.edu/publications
% *
% *  We request that you cite the papers [1], [3] and [6] in any of
% *  your reports that uses this code.
% *  Further, if you use the code in image3dstiching/ (multiple image version),
% *  then please cite [2].
% *
% *  If you use the code in third_party/, then PLEASE CITE and follow the
% *  LICENSE OF THE CORRESPONDING THIRD PARTY CODE.
% *
% *  Finally, this code is for non-commercial use only. For further
% *  information and to obtain a copy of the license, see
% *
% *  http://make3d.stanford.edu/publications/code
% *
% *  Also, the software distributed under the License is distributed on an
% *  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
% *  express or implied. See the License for the specific language governing
% *  permissions and limitations under the License.
% *
% */
function [Region1 PointPix1 POriReprojM1 PoccluM1 Region2 PointPix2 POriReprojM2 PoccluM2]=...
		FindOccluPairSurf(defaultPara, ModelInfo1, ModelInfo2, Pair, GlobalScale, Matches, f1, f2)

% This function finds the closest occlusion point for each unmatched SURF
% feature and outputs a region on the epipolar line to search for a dense match
tic
% loading Images
Img1 = ModelInfo1.ExifInfo.IDName;
I1 = imreadbw([defaultPara.Fdir '/pgm/' Img1 '.pgm']);
Img2 = ModelInfo2.ExifInfo.IDName;
I2 = imreadbw([defaultPara.Fdir '/pgm/' Img2 '.pgm']);
[Iy1 Ix1] = size(I1);
[DepthY1 DepthX1] = size(ModelInfo1.Model.Depth.FitDepth);
[Iy2 Ix2] = size(I2);
[DepthY2 DepthX2] = size(ModelInfo2.Model.Depth.FitDepth);

% Properly scale the depth, 3-d position, and plane parameters (all into the global scale)
ModelInfo1.Model.PlaneParaInfo.PlanePara = ModelInfo1.Model.PlaneParaInfo.PlanePara/GlobalScale(1);
ModelInfo1.Model.PlaneParaInfo.FitDepth = ModelInfo1.Model.PlaneParaInfo.FitDepth*GlobalScale(1);
ModelInfo1.Model.PlaneParaInfo.Position3DFited = permute( ModelInfo1.Model.PlaneParaInfo.Position3DFited*GlobalScale(1),[ 3 1 2]);
ModelInfo2.Model.PlaneParaInfo.PlanePara = ModelInfo2.Model.PlaneParaInfo.PlanePara/GlobalScale(2);
ModelInfo2.Model.PlaneParaInfo.FitDepth = ModelInfo2.Model.PlaneParaInfo.FitDepth*GlobalScale(2);
ModelInfo2.Model.PlaneParaInfo.Position3DFited = permute( ModelInfo2.Model.PlaneParaInfo.Position3DFited*GlobalScale(2),[ 3 1 2]);
Pair.T = Pair.T/Pair.DepthScale(1)*GlobalScale(1);

% Define the pool of possible matching points:
% the unmatched SURF features, specified by their image pixel positions
PointPix1 = f1(:, setdiff(1:length(f1), Matches(1,:))); % unmatched Surf features
PointPix2 = f2(:, setdiff(1:length(f2), Matches(2,:)));
[Picked_IND1] = ProjPosi2Mask( [Iy1 Ix1], [DepthY1 DepthX1], PointPix1);
[Picked_IND2] = ProjPosi2Mask( [Iy2 Ix2], [DepthY2 DepthX2], PointPix2);

% Region searched by picking points from Img1
T1_hat = [[0 -Pair.T(3) Pair.T(2)];...
          [Pair.T(3) 0 -Pair.T(1)];...
          [-Pair.T(2) Pair.T(1) 0]]; % skew-symmetric matrix of the translation Pair.T
% Fundamental matrix: F1 = inv(K2)' * [T]x * R * inv(K1)
F1 = inv(defaultPara.InrinsicK2)'*T1_hat*Pair.R*inv(defaultPara.InrinsicK1);
[Region1 PointPix1 POriReprojM1 PoccluM1]=FindOccluRegion(defaultPara, [Iy2 Ix2], I1, F1, PointPix1, PointPix2, Picked_IND1, Picked_IND2, ModelInfo1, ModelInfo2, Pair);
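% Sanity-check sketch (not part of the original pipeline): for a true
% correspondence x1 <-> x2 in homogeneous pixel coordinates, the epipolar
% constraint x2' * F1 * x1 ~= 0 should hold. Assuming f1/f2 store 2-by-N
% pixel coordinates indexed by the columns of Matches (as the code above
% suggests), the first matched pair can be checked with:
%   x1 = [ f1(:, Matches(1,1)); 1];
%   x2 = [ f2(:, Matches(2,1)); 1];
%   residual = x2' * F1 * x1;   % small in magnitude for a correct F1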

% Region searched by picking points from Img2
% If x2 = R*x1 + T maps camera-1 to camera-2 coordinates, then
% x1 = R'*x2 - R'*T, so the reversed pair uses R' and -R'*T
Pair2.T = -Pair.R'*Pair.T;
Pair2.R = Pair.R';
Pair2.Xim = Pair.Xim([3 4 1 2],:);
Pair2.DepthScale = Pair.DepthScale([2 1]);
[Region2 PointPix2 POriReprojM2 PoccluM2]=FindOccluRegion(defaultPara, [Iy1 Ix1], I2, F1', PointPix2, PointPix1, Picked_IND2, Picked_IND1, ModelInfo2, ModelInfo1, Pair2); % F1' is the fundamental matrix with the image roles swapped
toc
% Display
if defaultPara.Flag.FindOcclu
	figure;
	MaxD1 = ones(1,size(PoccluM1,2));
	MinD1 = ones(1,size(PoccluM1,2));
	MaxD2 = ones(1,size(PoccluM2,2));
	MinD2 = ones(1,size(PoccluM2,2));

	dispMatchSearchRegin(I1, I2,[ PointPix1; ones(1,size(PointPix1,2))], [ PointPix2; ones(1,size(PointPix2,2))], Region1, Region2, F1, ...
		POriReprojM1, MaxD1, PoccluM1, MinD1, ...
		POriReprojM2, MaxD2, PoccluM2, MinD2, ...
		0, 'Stacking', 'v', 'Interactive', 0);
end