National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 陳肇業
Author (English): Jaw-Yeh Chen
Title: 三度空間物體軌跡之估測
Title (English): The estimation of 3D object motion trajectory
Advisor: 林昇甫
Advisor (English): Sheng-Fuu Lin
Degree: Ph.D.
Institution: National Chiao Tung University
Department: Electrical and Control Engineering
Discipline: Engineering
Field: Electrical and Information Engineering
Document type: Academic dissertation
Year of publication: 2002
Academic year of graduation: 90 (2001–02)
Language: English
Number of pages: 155
Keywords (Chinese): 移動軌跡, 光流法, 對應, 姿態估測, 雙眼, 法向量
Keywords (English): motion trajectory, optical flow, correspondence, pose estimation, binocular, normal vector
Statistics:
  • Cited by: 0
  • Views: 226
  • Rating:
  • Downloads: 0
  • Saved to bibliography lists: 1
The main purpose of this dissertation is to study and analyze three techniques for the estimation of 3D object motion, namely (1) correspondence, (2) pose estimation, and (3) a modified optical flow method, and to propose an effective solution for each.
The motion estimation problem can be discussed from two directions. The first is the point-feature approach, in which two problems must be solved: correspondence and pose estimation. This dissertation uses binocular image sequence data to handle them. When the stereo imaging technique is used to produce the binocular image sequences, certain properties can be found; exploiting these properties reduces the correspondence problem to a simple computational procedure.
Many researchers have discussed the pose estimation problem. Their methods typically first compute the least-squares solution of an error function and then apply singular value decomposition to a 3×3 matrix to obtain the rotation matrix and the translation. Because all feature points are usually used in the procedure, the computational load grows large once there are many feature points. This dissertation proposes a normal-vector-based procedure that needs only four points to obtain the pose and, from it, the motion trajectory. The change of one normal vector is used to determine the plane in which the rotation axis lies; the change of another normal vector then determines the rotation axis, from which the rotation angle is computed. The translation component of the motion then follows from a simple calculation. Because only four points are used, the computational load is very small.
The second direction adopted in this dissertation for the motion estimation problem is the optical flow technique. Many optical flow techniques perform poorly when the object motion contains rotation, so this dissertation proposes a modified optical flow technique to improve the performance under rotation. Traditional methods usually compute the flow in Cartesian coordinates, whereas the proposed method is formulated in polar (spherical) coordinates. Once the flow has been computed, the rotation and translation components of the motion are obtained by a few simple calculations. Finally, the above discussion is extended to the optical flow technique in 3D space.

This dissertation studies the correspondence, pose estimation, and modified optical flow problems that arise in 3D object motion estimation.
Binocular image sequence data are used to solve the correspondence problem. When a stereo imaging technique is used to produce the binocular image sequences, some properties naturally arise, and the correspondence problem can then be solved by a very simple procedure.
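
This record does not reproduce the stereo properties the dissertation exploits, so the following Python sketch is only a hedged illustration of the general idea: under the assumption of a rectified binocular rig, corresponding feature points lie on the same image row and differ by a non-negative horizontal disparity, which reduces matching to a scanline search. The function name match_features and its tie-breaking heuristic are illustrative, not taken from the thesis.

    import numpy as np

    def match_features(left_pts, right_pts, row_tol=1.0):
        """Match feature points across a rectified binocular pair.

        Illustrative assumption (not the thesis's exact procedure):
        the cameras are rectified, so corresponding points share an
        image row and differ by a non-negative horizontal disparity.
        left_pts, right_pts: (N, 2) arrays of (row, col) coordinates.
        Returns a list of (left_index, right_index) pairs.
        """
        pairs, used = [], set()
        for i, (r_l, c_l) in enumerate(left_pts):
            best, best_d = None, np.inf
            for j, (r_r, c_r) in enumerate(right_pts):
                if j in used or abs(r_l - r_r) > row_tol:
                    continue  # epipolar constraint: same scanline only
                d = c_l - c_r  # disparity must be non-negative
                if 0 <= d < best_d:
                    best, best_d = j, d
            if best is not None:
                used.add(best)
                pairs.append((i, best))
        return pairs

    # Three points seen by both cameras, shifted by a disparity of 5:
    left = np.array([[10.0, 40.0], [10.0, 60.0], [30.0, 50.0]])
    right = np.array([[10.0, 35.0], [10.0, 55.0], [30.0, 45.0]])
    print(match_features(left, right))  # [(0, 0), (1, 1), (2, 2)]
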
The pose estimation problem is usually solved by finding the least-squares solution of an error function and then applying the singular value decomposition. Because this procedure uses all of the feature points, its computational burden is heavy. A normal-vector-based procedure is proposed here to obtain the pose and hence the motion trajectory. The change of one normal vector determines the plane in which the rotation axis lies; the change of another normal vector then determines the rotation axis, from which the rotation angle is obtained. The translation can then be determined. Since only four points are used, the computational burden is light.
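
The normal-vector construction can be grounded in a standard fact: if n' = Rn for a rotation R about a unit axis a, then a·n' = (Rᵀa)·n = a·n, so every change n' − n is perpendicular to a, and two such changes yield the axis as their cross product. The Python sketch below is built on this fact alone; it is a reconstruction under stated assumptions, not the dissertation's exact four-point algorithm.

    import numpy as np

    def pose_from_normals(n1, n1p, n2, n2p, p, pp):
        """Recover rotation R and translation t from two normal-vector
        changes and one point correspondence (a sketch of the idea in
        the abstract, not the dissertation's exact algorithm).

        n1, n2   : unit normal vectors before the motion
        n1p, n2p : the same normals after the motion (ni' = R ni)
        p, pp    : one point before/after the motion (pp = R p + t)
        """
        # Each change ni' - ni is perpendicular to the rotation axis a,
        # because a . (R n) = (R^T a) . n = a . n for a rotation about a.
        d1, d2 = n1p - n1, n2p - n2
        a = np.cross(d1, d2)
        a /= np.linalg.norm(a)
        # Rotation angle: compare the components of n1 and n1'
        # perpendicular to the axis.
        u = n1 - np.dot(n1, a) * a
        v = n1p - np.dot(n1p, a) * a
        s = np.linalg.norm(u) * np.linalg.norm(v)
        theta = np.arctan2(np.dot(a, np.cross(u, v)) / s, np.dot(u, v) / s)
        # Rodrigues' formula gives the rotation matrix from (a, theta).
        K = np.array([[0, -a[2], a[1]],
                      [a[2], 0, -a[0]],
                      [-a[1], a[0], 0]])
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
        t = pp - R @ p  # translation from a single point pair
        return R, t

Degenerate inputs must be excluded for such a sketch to work: the two normal changes may not be parallel (otherwise the cross product vanishes), and the normal used to measure the angle may not be parallel to the recovered axis.
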
Many optical flow techniques are not suitable when rotation is present. Hence, a modified optical flow technique is proposed in this dissertation. Traditional methods usually compute the flow in Cartesian coordinates; in this dissertation the flow is computed in polar (spherical) coordinates. Once the flow is calculated, the rotation and translation components can be obtained very easily. Finally, we extend the above discussion to optical flow in 3D space.
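
As a rough sketch of why a polar formulation suits rotation (an illustration under simplifying assumptions, not the dissertation's modified technique): when the flow is expressed relative to the rotation centre, a rigid rotation produces zero radial speed and a constant angular velocity, so the rotation component can be read off directly from the angular part.

    import numpy as np

    def cartesian_flow_to_polar(x, y, u, v):
        """Convert a Cartesian flow field (u, v) sampled at points (x, y),
        measured relative to an assumed rotation centre at the origin,
        into radial speed r_dot and angular velocity phi_dot.
        (Illustrative decomposition only; the dissertation's modified
        technique estimates the flow in polar coordinates directly.)
        """
        r = np.hypot(x, y)
        r_dot = (x * u + y * v) / r        # radial component
        phi_dot = (x * v - y * u) / r**2   # angular component (rad/frame)
        return r_dot, phi_dot

    # A rigid rotation by omega gives u = -omega*y, v = omega*x:
    omega = 0.05
    x = np.array([10.0, -20.0, 5.0])
    y = np.array([4.0, 8.0, -12.0])
    r_dot, phi_dot = cartesian_flow_to_polar(x, y, -omega * y, omega * x)
    print(r_dot)    # ~0 everywhere: no radial motion under pure rotation
    print(phi_dot)  # 0.05 everywhere: the rotation rate is recovered
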

Contents
中文摘要 i
Abstract ii
誌謝 iii
Contents iv
List of Figures vii
List of Tables xii
1 Introduction 1
1.1 Motivation and Background 1
1.2 Literature Review 3
1.3 Organization of the Dissertation 14
2 Preliminaries 15
2.1 Stereo Imaging 15
2.2 Least-Squares Estimation 19
2.3 Optical Flow Technique 20
2.3.1 Differential Techniques 25
2.3.2 Tensor-based Techniques 35
2.3.3 Multifeature-based Techniques 38
2.3.4 Phase-based Techniques 43
3 Binocular Image Sequence Correspondence Technique 47
3.1 Some Properties of Stereo Imaging System 47
3.2 Correspondence Technique 53
4 Normal Vector-based Pose Estimation Technique 56
4.1 Least-squares Technique 56
4.2 Normal Vector-based Technique 61
5 Modified Optical Flow Method 71
5.1 Modified Optical Flow 71
5.2 Smoothness of the Optical Flow 73
5.3 An Iterative Scheme 74
5.4 Higher Order Differences 76
5.5 2D Case 78
5.6 3D Case 81
6 Experimental Results and Analysis 90
6.1 Experimental Environment 91
6.2 The Computer Simulation Results 92
6.2.1 Pure Rotation Case 92
6.2.2 Case with Rotation and Translation 93
6.3 The Experimental Results 96
6.3.1 Pure Rotation Case 96
6.3.2 Case with Rotation and Translation 104
7 Conclusions 141
Bibliography 145
List of Figures
2.1 Model of the stereo imaging process. 16
2.2 Top view of Fig. 2.1 with the first camera brought into coincidence with the world coordinate system. 17
2.3 Straight-line approximation to data. 19
2.4 Illustration of the aperture problem: constraint line defined by equation (2-10). The normal optical flow vector f⊥ points perpendicular to the line and parallel to the local gradient ∇g(x, t). 22
2.5 Illustration of the phase technique: a sinusoidal plaid pattern is composed of two sinusoids moving with the optical flow f; b the two individual components allow one to extract the corresponding component velocities f1⊥ and f2⊥, respectively. The 2-D optical flow f is reconstructed from the component velocities; c phase images of the two sinusoidal patterns. 45
5.1 The rotation and translation component of the 2D image. 78
5.2 The rotation and translation component of the 3D image. 86
6.1 The hardware of the stereo imaging system. 92
6.2 Frames 13, 53, and 90 of movies 1 and 2: (a) frame 13 of camera 1, (b) frame 13 of camera 2, (c) frame 53 of camera 1, (d) frame 53 of camera 2, (e) frame 90 of camera 1, (f) frame 90 of camera 2. 97
6.3 The estimated central point trajectory 98
6.4 The estimated pose trajectory 98
6.5 Three feature points of the ball 98
6.6 The estimated motion trajectory. (a) X-axis of p1. (b) Y-axis of p1. (c) Z-axis of p1. (d) X-axis of p2. (e) Y-axis of p2. (f) Z-axis of p2. 99
6.6 (Continued) (g) X-axis of p3. (h) Y-axis of p3. (i) Z-axis of p3. 100
6.7 Left- and right-side images of the first test object: (a) left-side image, (b) right-side image 100
6.8 Left- and right-side images of some test images in the pure rotation case. (a) Left-side image for rotation angle 0°. (b) Right-side image for rotation angle 0°. (c) Left-side image for rotation angle 3°. (d) Right-side image for rotation angle 3°. (e) Left-side image for rotation angle 6°. (f) Right-side image for rotation angle 6°. 102
6.8 (Continued) (g) Left-side image for rotation angle 10°. (h) Right-side image for rotation angle 10°. (i) Left-side image for rotation angle 13°. (j) Right-side image for rotation angle 13°. 103
6.9 Left- and right-side images of some test images in case with rotation and translation. (a) Left-side, translation is (85,0,142) and rotation angle is 3°. (b) Right-side, translation is (85,0,142) and rotation angle is 3°. (c) Left-side, translation is (85,0,142) and rotation angle is 6°. (d) Right-side, translation is (85,0,142) and rotation angle is 6°. (e) Left-side, translation is (85,0,142) and rotation angle is 10°. (f) Right-side, translation is (85,0,142) and rotation angle is 10°. 105
6.9 (Continued) (g) Left-side, translation is (85,0,142) and rotation angle is 13°. (h) Right-side, translation is (85,0,142) and rotation angle is 13°. (i) Left-side, translation is (85,0,142) and rotation angle is 16°. (j) Right-side, translation is (85,0,142) and rotation angle is 16°. 106
6.10 Left- and right-side images of test sequence. (a) No translation and rotation with θ = 0° and φ = 2.8°. (b) No translation and rotation with θ = 6° and φ = 5.8°. (c) No translation and rotation with θ = 3° and φ = 10.3°. (d) No translation and rotation with θ = 13° and φ = 13.6°. 108
6.10 (Continued) (e) Translation (85,142,35) and no rotation, (f) Translation (85,142,35) and rotation with θ = 3° and φ = 2.8°. (g) Translation (85,142,35) and rotation with θ = 6° and φ = 5.8°. (h) Translation (85,142,35) and rotation with θ = 13° and φ = 10.3°. 109
6.11 Left- and right-side images of the second test object 110
6.12 Left- and right-side images of some of the second test sequence: (a) frame 3, (b) frame 13, (c) frame 29. 111
6.13 The estimated central point trajectory of the ball 112
6.14 The estimated pose trajectory of the ball 112
6.15 Left- and right-side images of trajectory 1: (a) frame 4, (b) frame 7, (c) frame 10, (d) frame 13. 113
6.15 (Continued) (e) frame 16, (f) frame 19, (g) frame 22, (h) frame 25. 114
6.16 Left- and right-side images of trajectory 2: (a) frame 4, (b) frame 7, (c) frame 10, (d) frame 13. 115
6.16 (Continued) (e) frame 16, (f) frame 19, (g) frame 22, (h) frame 25. 116
6.17 Left- and right-side images of trajectory 3: (a) frame 4, (b) frame 7, (c) frame 10, (d) frame 13. 117
6.17 (Continued) (e) frame 16, (f) frame 19, (g) frame 22, (h) frame 25. 118
6.18 Left- and right-side images of trajectory 4: (a) frame 4, (b) frame 7, (c) frame 10, (d) frame 13. 119
6.18 (Continued) (e) frame 16, (f) frame 19, (g) frame 22, (h) frame 25. 120
6.19 Left- and right-side images of trajectory 5: (a) frame 4, (b) frame 7, (c) frame 10, (d) frame 13. 121
6.19 (Continued) (e) frame 16, (f) frame 19, (g) frame 22, (h) frame 25. 122
6.20 Left- and right-side images of trajectory 6: (a) frame 4, (b) frame 7, (c) frame 10, (d) frame 13. 123
6.20 (Continued) (e) frame 16, (f) frame 19, (g) frame 22, (h) frame 25. 124
6.21 Left- and right-side images of trajectory 7: (a) frame 4, (b) frame 7, (c) frame 10, (d) frame 13. 125
6.21 (Continued) (e) frame 16, (f) frame 19, (g) frame 22, (h) frame 25. 126
6.22 Left- and right-side images of trajectory 8: (a) frame 5, (b) frame 9, (c) frame 13, (d) frame 17. 127
6.22 (Continued) (e) frame 21, (f) frame 25, (g) frame 29. 128
6.23 The estimated central point of trajectory 1 130
6.24 The estimated pose of trajectory 1. 130
6.25 Eight feature points of the box appearing in trajectory 1. 130
6.26 The estimated motion trajectory of trajectory 1. (a) X-axis of a. (b) Y-axis of a. (c) Z-axis of a. (d) X-axis of b. (e) Y-axis of b. (f) Z-axis of b. 131
6.26 (Continued) (g) X-axis of c. (h) Y-axis of c. (i) Z-axis of c. (j) X-axis of d. (k) Y-axis of d. (l) Z-axis of d. 132
6.26 (Continued) (m) X-axis of feature point 5. (n) Y-axis of feature point 5. (o) Z-axis of feature point 5. (p) X-axis of feature point 6. (q) Y-axis of feature point 6. (r) Z-axis of feature point 6. 133
6.26 (Continued) (s) X-axis of feature point 7. (t) Y-axis of feature point 7. (u) Z-axis of feature point 7. (v) X-axis of feature point 8. (w) Y-axis of feature point 8. (x) Z-axis of feature point 8. 134
6.27 The estimated central point of trajectory 2 135
6.28 The estimated pose of trajectory 2. 135
6.29 The estimated central point of trajectory 3 136
6.30 The estimated pose of trajectory 3. 136
6.31 The estimated central point of trajectory 4 137
6.32 The estimated pose of trajectory 4. 137
6.33 The estimated pose of trajectory 5. 137
6.34 The estimated central point of trajectory 6 138
6.35 The estimated pose of trajectory 6. 138
6.36 The estimated central point of trajectory 7 139
6.37 The estimated pose of trajectory 7. 139
6.38 The estimated central point of trajectory 8 139
6.39 The estimated pose of trajectory 8. 140
List of Tables
5.1 The maximum rotation angle in which the performance is good 81
6.1 The maximum rotation angle in which the performance is good for every α2. 94
6.2 The maximum rotation angle in which the performance is good for case with rotation and translation. 95
6.3 The maximum rotation angle in which the performance is good for pure rotation case in real data. 101
6.4 Comparisons of the performance between the modified optical flow technique and the traditional optical flow technique. 103
6.5 The maximum rotation angle in which the performance is good for case with rotation and translation in real data. 106
6.6 Comparisons of the performance between the modified optical flow technique and the traditional optical flow technique for case with rotation and translation. 107
6.7 The maximum rotation angle in which the performance is good for general 3D space motion. 110

