臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Author: 賴志群 (Chih-Chiun Lai)
Title (Chinese): 利用電腦視覺技術以圓形資訊作物體定位之研究及其在運載工具自動化和虛擬實境之應用
Title (English): A Study on Object Pose and Location Estimation by Computer Vision Techniques Using Circular Shape Information for Vehicle Automation and Virtual Reality Applications
Advisor: 蔡文祥 (Wen-Hsiang Tsai)
Degree: Doctoral
Institution: National Chiao Tung University (國立交通大學)
Department: Department of Computer and Information Science (資訊科學系)
Discipline: Engineering
Academic Field: Electrical Engineering and Computer Science
Thesis Type: Academic thesis
Year of Publication: 2002
Graduation Academic Year: 90 (ROC calendar, i.e., 2001-2002)
Language: English
Number of Pages: 137
Keywords (Chinese): 電腦視覺; 圓形資訊; 物體定位; 運載工具自動化; 虛擬實境
Keywords (English): Computer Vision; Circular Shape; Object Pose and Location Estimation; Vehicle Automation; Virtual Reality
Access statistics:
  • Cited by: 1
  • Views: 356
  • Rating: (none)
  • Downloads: 0
  • Saved to bookshelf: 0
Abstract (translated from the Chinese):
  In the field of computer vision, object pose and location estimation is a fundamental and important problem: determining the relative position between a camera and an object from a set of features on the object. Because circles and straight lines appear everywhere in everyday environments and are therefore frequently exploited, this dissertation first proposes four computer-vision methods that combine circle and line information for object localization, covering different numbers of circles: (1) localization by a single circle viewed from a camera with panning only relative to the camera coordinate system; (2) two coplanar circles with identical radii; (3) two coplanar circles with different radii; and (4) two coplanar concentric circles. These four localization methods are then applied to four environments in vehicle automation and virtual reality, summarized as follows.
  For a vehicle driving on a road, detecting whether a car is present in the adjacent lane before changing lanes, and detecting in real time a car cutting into the current lane, are both important tasks for safe-driving assistance, yet previous autonomous-vehicle research has rarely addressed them. This dissertation proposes two new methods for this problem. The first uses the circular shape of a lateral vehicle's rear wheel rim to locate the vehicle, and uses the orientation of the rim's plane to predict the vehicle's heading; a robust image-detection technique for the elliptical rim boundary in the image is also proposed. The second method uses the circular shapes of the lateral vehicle's front and rear wheel rims to detect the wheel-ground contact points in the image, and then computes the 3D positions of those points to locate the lateral vehicle.
  From the lateral-vehicle location information so obtained, two methods for predicting the lateral vehicle's trajectory are derived. The first uses the information detected in two consecutive images; the second uses the spatial information detected in a single image (e.g., the directions and positions of the front and rear wheels). Both can be used for lateral-vehicle detection and collision avoidance.
  In aerial vehicle environments, pilots carry a heavy workload, so automated or assisted flight is undoubtedly a great help. Most studies on flight automation compute the position of a helicopter relative to its landing pad using only a fixed set of landmark features. During landing, however, as the altitude decreases, the landing-pad landmark imaged by the on-board camera grows larger and larger, so the originally chosen features eventually become unusable. To solve this problem, the third method of this dissertation proposes a new hierarchical concept for helicopter localization: as the helicopter's altitude changes, the method automatically adopts finer circle and line features within the landmark image, yielding a more accurate position of the helicopter relative to the landing pad and thus assisting the flight.
  With the growing popularity of virtual reality and Internet video-conferencing applications, the demand for using cameras as input devices is also increasing. The fourth method of this dissertation addresses this need with a fast method for calibrating the relative position of a camera and a computer monitor. In such applications the camera is usually mounted beside the monitor to observe the user's motions, so the monitor is not in the camera's field of view. This dissertation therefore uses a specially designed pair of eye-glasses with a small laser pointer attached as an auxiliary tool. The first step computes the position of the camera relative to the glasses (using two parallel lines and two circles as the geometric features of the glasses); the second step combines the positions of specific points on the monitor relative to the glasses with the result of the first step to obtain those points' positions relative to the camera; the final step then yields the position of the camera relative to the monitor coordinate system. Detailed mathematical derivations are also given for three virtual-reality and human-computer-interaction applications.
  Finally, all the proposed methods have closed-form solutions, and their correctness and feasibility have been verified by experiments.

Abstract (English):
In the field of computer vision, one of the basic and important problems is object pose and location estimation: determining the relative position and orientation between a camera and certain features, such as points, lines, or curves, detected on an object. Among the various kinds of features, circles are very popular in man-made environments. One purpose of this study is to explore feasible and effective object location approaches using combinations of circle and line features. Another purpose is to create new computer vision applications in which line and circle features are utilized to solve the object pose and location estimation problem. Four methods for object pose and location estimation using circles are proposed in this study. Their applications are then discussed, including moving vehicle localization, moving vehicle trajectory prediction, automatic rotorcraft landing, and camera calibration with respect to a computer monitor for virtual reality applications.
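As a rough illustration of the step common to all four circle-based methods: the perspective image of a 3D circle is generically an ellipse, so the first task is to recover the circle's projection by fitting a conic to detected edge points. The sketch below is a generic least-squares conic fit via the SVD null space, not the dissertation's own derivation:

```python
import numpy as np

def fit_conic(points):
    """Least-squares fit of a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0.

    Builds the design matrix of monomials and takes the right singular
    vector of the smallest singular value (i.e., the coefficient vector of
    unit norm that minimizes the algebraic residual).
    """
    x, y = points[:, 0], points[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D)
    return vt[-1]  # (a, b, c, d, e, f)

# Synthetic check: 50 points on the ellipse x^2/16 + y^2/4 = 1.
t = np.linspace(0.0, 2.0 * np.pi, 50)
pts = np.column_stack([4.0 * np.cos(t), 2.0 * np.sin(t)])
a, b, c, d, e, f = fit_conic(pts)
residual = (a * pts[:, 0] ** 2 + b * pts[:, 0] * pts[:, 1]
            + c * pts[:, 1] ** 2 + d * pts[:, 0] + e * pts[:, 1] + f)
```

Once the conic coefficients are known, the circle's 3D pose can be derived from them analytically, which is what the four proposed methods do for their respective circle configurations.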
An approach to estimating the locations of moving lateral vehicles for driving assistance, using one-wheel shape information in single 2D vehicle images and 3D computer vision techniques, is proposed. There are still very few studies on the detection and localization of lateral vehicles. In this study, the wheel shape information of a lateral vehicle, extracted from images captured by a camera mounted on the driver's vehicle, is employed to locate that vehicle. The trajectory of the moving lateral vehicle is then predicted using the temporal information of single wheel shapes in consecutive image pairs.
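The temporal prediction step can be sketched, in heavily simplified form, as constant-velocity extrapolation from two consecutive wheel localizations; the function and its parameters are illustrative assumptions, not the dissertation's formulation:

```python
import numpy as np

def predict_trajectory(p_prev, p_curr, dt, horizon, steps):
    """Extrapolate a lateral vehicle's ground-plane position.

    p_prev, p_curr : (x, y) wheel-derived positions at times t-dt and t.
    Returns `steps` predicted positions equally spaced over `horizon`
    seconds, under a constant-velocity assumption.
    """
    p_prev = np.asarray(p_prev, dtype=float)
    p_curr = np.asarray(p_curr, dtype=float)
    v = (p_curr - p_prev) / dt             # velocity from the image pair
    times = np.linspace(0.0, horizon, steps)
    return p_curr + np.outer(times, v)     # one row per future instant

# Vehicle seen 5 m to the side, drifting toward our lane while advancing.
path = predict_trajectory((0.0, 5.0), (1.0, 4.8), dt=0.1, horizon=1.0, steps=5)
```

A collision warning can then be raised whenever any predicted row enters the driver's own lane corridor.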
In addition, another approach to estimating lateral moving vehicle locations and predicting their trajectories for driving assistance, using two-wheel shape information in single 2D images and 3D computer vision techniques, is proposed. The radius of a wheel need not be known in advance, and no special mark on the vehicle is required. The 3D positions of the wheel-ground contact points relative to the camera are obtained by the back-projection principle and serve as the desired relative location of the lateral vehicle. The trajectory of the lateral vehicle is then predicted from the spatial information obtained in single images.
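The back-projection principle can be sketched as intersecting the viewing ray of an image point with the ground plane; the pinhole model and mounting geometry below are simplifying assumptions for illustration, not the dissertation's exact camera setup:

```python
import numpy as np

def backproject_to_ground(u, v, f, cx, cy, cam_height):
    """Back-project an image point onto a flat ground plane.

    Assumes a pinhole camera with focal length f (pixels) and principal
    point (cx, cy), optical axis horizontal, x right, y down, mounted
    cam_height metres above the ground (so the ground plane is
    y = cam_height in camera coordinates).  A wheel-ground contact point
    lies on that plane, so scaling its viewing ray to reach the plane
    recovers its 3D position.
    """
    d = np.array([(u - cx) / f, (v - cy) / f, 1.0])  # viewing-ray direction
    if d[1] <= 0:
        raise ValueError("point is not below the horizon; ray misses the ground")
    s = cam_height / d[1]                            # scale to reach the plane
    return s * d                                     # 3D point in camera coords

# A contact point imaged below and right of the principal point.
p = backproject_to_ground(u=400, v=300, f=800, cx=320, cy=240, cam_height=1.2)
```

This is why the wheel radius is not needed: the contact point's depth is fixed entirely by the known camera height.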
An automatic landing system can alleviate the heavy burdens on aircraft pilots. In this study, a new hierarchical approach to landmark localization by computer vision techniques is proposed. Most existing vision-based methods for aircraft landing consider only the situation in which the entire landmark is in the camera's field of view. The proposed method instead estimates the aircraft location in four hierarchical stages according to the aircraft's flight altitude; in different stages, features of different parts of the landmark are utilized to estimate the landmark location.
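The hierarchical idea reduces to altitude-dependent feature selection. The stage names follow the chapter outline below, but the altitude thresholds are invented placeholders for illustration, not values from the dissertation:

```python
def select_landmark_stage(altitude_m):
    """Pick which landmark features to use as the rotorcraft descends.

    As altitude drops, the landmark fills more of the image, so
    progressively finer sub-features must be used.  Thresholds here are
    illustrative only.
    """
    if altitude_m > 60.0:
        return "stage 1: outer square shape"
    if altitude_m > 30.0:
        return "stage 2: circle and two sets of parallel lines"
    if altitude_m > 10.0:
        return "stage 3: two concentric circles"
    return "stage 4: single circle"
```

In a real descent this selector would be re-evaluated each frame, so the localizer switches features automatically as the originally chosen ones grow out of view.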
Finally, a new approach to camera calibration with respect to a computer monitor is proposed. The camera location parameters, three for position and three for orientation, obtained from the proposed calibration method are useful for many virtual reality applications in which a camera is mounted on or located near the monitor to "look at" a user in front of it. The calibration problem is solved indirectly by introducing an auxiliary tool: a pair of specially designed eye-glasses with a laser pointer mounted on the nosepiece between the two glass frames. Three applications of the proposed method, namely VR scene display, cursor control by head direction, and face view rectification for preserving eye contact in video conferencing or Internet telephony, are also discussed, with their respective mathematics derived in detail.
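The indirect calibration amounts to chaining coordinate transforms through the eye-glasses frame: camera-from-glasses composed with glasses-from-monitor gives camera-from-monitor, whose inverse is the camera's pose in monitor coordinates. The numeric poses below are made-up values illustrating only the composition, not measured calibration results:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses: camera 0.6 m in front of the glasses, and the
# monitor origin offset from the glasses (identity rotations for clarity).
T_cam_glasses = make_transform(np.eye(3), [0.0, 0.0, 0.6])
T_glasses_monitor = make_transform(np.eye(3), [0.1, -0.2, -0.6])

T_cam_monitor = T_cam_glasses @ T_glasses_monitor    # chain the two stages
T_monitor_cam = np.linalg.inv(T_cam_monitor)         # camera pose w.r.t. monitor
```

With rotations included, the same two-stage composition yields all six camera location parameters (three position, three orientation) relative to the monitor.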
Experiments have also been conducted, with good results that confirm the correctness of the derived equations and the feasibility of all the proposed approaches.

CHAPTER 1 INTRODUCTION
1.1 MOTIVATION
1.2 SURVEY OF RELATED STUDIES
1.2.1 STUDIES ON OBJECT POSE ESTIMATION BY LINE AND CIRCULAR FEATURES
1.2.2 STUDIES ON VISION-BASED VEHICLE LOCALIZATION
1.2.3 STUDIES ON VISION-BASED AIRCRAFT FLYING
1.2.4 STUDIES ON CAMERA CALIBRATION
1.2.5 STUDIES ON HCI AND VR APPLICATIONS
1.3 OVERVIEW OF PROPOSED APPROACHES
1.3.1 MATHEMATICS OF POSE ESTIMATION USING CIRCLE AND LINE FEATURES
1.3.2 ESTIMATION OF MOVING VEHICLE LOCATIONS USING WHEEL SHAPE INFORMATION IN SINGLE 2-D LATERAL VEHICLE IMAGES
1.3.3 TRAJECTORY PREDICTION FOR AVOIDANCE OF COLLISION WITH MOVING LATERAL VEHICLES USING WHEEL SHAPE INFORMATION
1.3.4 A HIERARCHICAL APPROACH TO AUTOMATIC ROTORCRAFT LANDING
1.3.5 CAMERA CALIBRATION WITH RESPECT TO COMPUTER MONITORS
1.4 CONTRIBUTIONS OF DISSERTATION STUDY
1.5 ORGANIZATION OF DISSERTATION
CHAPTER 2 MATHEMATICS OF POSE ESTIMATION USING CIRCLE AND LINE FEATURES
2.1 COMPUTATION OF 3D POSE OF A SINGLE CIRCLE
2.2 COMPUTATION OF 3D POSE OF A CIRCLE VIEWED FROM A CAMERA ONLY WITH PANNING
2.3 COMPUTATION OF 3D POSES OF TWO SEPARATE CIRCLES WITH IDENTICAL RADII
2.4 COMPUTATION OF 3D POSES OF TWO SEPARATE CIRCLES WITH DIFFERENT RADII
2.5 COMPUTATION OF 3D POSES OF TWO CONCENTRIC CIRCLES
CHAPTER 3 ESTIMATION OF MOVING VEHICLE LOCATIONS USING WHEEL SHAPE INFORMATION IN SINGLE 2D LATERAL VEHICLE IMAGES
3.1 PROPOSED ESTIMATION APPROACH USING ONE-WHEEL SHAPE INFORMATION
3.1.1 PROBLEM DEFINITION
3.1.2 ESTIMATION OF 3D ORIENTATION AND POSITION PARAMETERS OF WHEELS
3.1.3 IMAGE PROCESSING FOR WHEEL SHAPE SEGMENTATION
3.1.4 RESULTS OF IMAGE PROCESSING
3.1.5 EXPERIMENTAL RESULTS
3.2 PROPOSED ESTIMATION APPROACH USING TWO-WHEEL SHAPE INFORMATION
3.2.1 PROBLEM DEFINITION
3.2.2 DETERMINATION OF ORIENTATION PARAMETERS OF FRONT AND REAR WHEELS
3.2.3 DETERMINATION OF POSITION PARAMETERS OF CONTACT POINTS OF WHEELS AND GROUND
3.2.4 EXPERIMENTAL RESULTS
3.3 SUMMARY
CHAPTER 4 TRAJECTORY PREDICTION FOR AVOIDANCE OF COLLISION WITH MOVING LATERAL VEHICLES USING WHEEL SHAPE INFORMATION
4.1 PREDICTION OF MOVING LATERAL VEHICLE TRAJECTORIES USING TEMPORAL INFORMATION IN CONSECUTIVE IMAGE PAIRS
4.2 PREDICTION OF MOVING LATERAL VEHICLE TRAJECTORIES USING SPATIAL INFORMATION IN SINGLE IMAGES
4.3 EXPERIMENTAL RESULTS
4.4 SUMMARY
CHAPTER 5 A HIERARCHICAL APPROACH TO AUTOMATIC ROTORCRAFT LANDING IN VERTIPORTS BY LANDMARK LOCALIZATION TECHNIQUES USING MULTIPLE LINE AND CIRCLE INFORMATION
5.1 OVERVIEW OF PROPOSED FOUR-STAGE APPROACH TO LANDMARK LOCALIZATION
5.2 DETAILS OF EACH LANDMARK LOCALIZATION STAGE
5.2.1 COMPUTATION OF 3D DIRECTIONS OF A SET OF PARALLEL LINES BY VANISHING POINTS
5.2.2 COMPUTATION OF 3D LOCATIONS OF LINE SEGMENTS WITH KNOWN DIRECTIONS AND KNOWN LENGTHS
5.2.3 STAGE ONE: COMPUTING LANDMARK LOCATION FROM AN OUTER SQUARE SHAPE
5.2.4 STAGE TWO: COMPUTING LANDMARK LOCATION FROM A CIRCLE AND TWO SETS OF PARALLEL LINES
5.2.5 STAGE THREE: COMPUTING LANDMARK LOCATION FROM TWO CONCENTRIC CIRCLES
5.2.6 STAGE FOUR: COMPUTING LANDMARK LOCATION FROM A CIRCLE
5.3 IMAGE PROCESSING AND EXPERIMENTAL RESULTS
5.3.1 IMAGE PROCESSING
5.3.2 EXPERIMENTAL RESULTS
5.4 SUMMARY
CHAPTER 6 CAMERA CALIBRATION WITH RESPECT TO COMPUTER MONITORS USING EYE-GLASSES WITH A LASER POINTER FOR VIRTUAL REALITY APPLICATIONS
6.1 CONCEPT OF PROPOSED CAMERA CALIBRATION
6.1.1 PROPOSED THREE-STAGE CAMERA CALIBRATION PROCEDURE
6.1.2 COORDINATE SYSTEMS AND TRANSFORMATIONS
6.2 DETAILS OF PROPOSED CAMERA CALIBRATION STAGES
6.2.1 LOCATING EYE-GLASSES SHAPE WITH RESPECT TO CAMERA
6.2.2 COMPUTING COORDINATES (IN CCS) OF THREE CORNER POINTS ON MONITOR
6.2.3 COMPUTING LOCATION PARAMETERS OF CAMERA WITH RESPECT TO MONITOR
6.2.4 ALGORITHM OF CALIBRATION PROCESS
6.3 APPLICATIONS FOR VIRTUAL REALITY BY CALIBRATION INFORMATION
6.3.1 VR SCENE DISPLAY
6.3.2 CURSOR CONTROL BY HEAD DIRECTION
6.3.3 FACE VIEW RECTIFICATION FOR PRESERVING EYE CONTACT IN VIDEO CONFERENCING OR INTERNET TELEPHONY
6.4 EXPERIMENTAL RESULTS
6.5 SUMMARY
CHAPTER 7 CONCLUSIONS AND SUGGESTIONS FOR FURTHER RESEARCH
7.1 LOCATION ESTIMATION AND TRAJECTORY PREDICTION OF MOVING LATERAL VEHICLES
7.2 A HIERARCHICAL APPROACH TO AUTOMATIC ROTORCRAFT LANDING ON VERTIPORTS BY VISION-BASED LANDMARK LOCATION TECHNIQUES
7.3 CAMERA CALIBRATION WITH RESPECT TO COMPUTER MONITORS
REFERENCES
VITA
PUBLICATION LIST

