Author: 褚哲宇
Author (English): Che Yu Chu
Title (Chinese): 環場多攝影機之校正與目標追蹤
Title (English): Research on Calibration and Object Tracking of Multi-Camera System
Advisor: 張耀仁
Advisor (English): Y. Z. Chang
Degree: Master's
Institution: 長庚大學 (Chang Gung University)
Department: 醫療機電工程研究所 (Graduate Institute of Medical Mechatronic Engineering)
Discipline: Engineering
Field of study: Mechanical Engineering
Thesis type: Academic thesis
Year of publication: 2009
Academic year of graduation: 97 (ROC calendar, 2008-2009)
Number of pages: 94
Keywords (Chinese): 多攝影機、校正、追蹤、攝影機切換
Keywords (English): multi-camera, calibration, tracking, camera hand-off
Usage counts:
  • Cited by: 0
  • Views: 655
  • Downloads: 0
  • Bookmarked: 0
In conventional tracking and localization systems that use only two cameras, the limited viewing angles make the target prone to occlusion or to leaving the field of view, which interrupts tracking. The purpose of this research is to develop a surround multi-camera architecture and apply it to target localization and tracking for surgical navigation. A two-dimensional calibration method is used to calibrate the cameras and is extended to register the surround camera system to a common world coordinate system; a locator with LED balls as markers serves as the tracked target. In the system architecture, the surround cameras are mounted as two opposing pairs, so that images from four different viewing angles are captured at the same instant and the surround system covers more information than a two-camera setup. A program for weight calculation and camera hand-off is also designed, so that the system can analyze and decide according to rules, improving computational efficiency and the robustness of continuous target tracking.
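The "two-dimensional calibration" mentioned above refers to calibrating each camera from multiple views of a planar calibration board (the figure list below mentions corner detection, distortion correction, and re-projection error). The following is a minimal sketch of that style of planar calibration for one camera. It is illustrative only: the thesis itself works in MATLAB, whereas this sketch assumes Python with OpenCV, and the pattern size, square size, and file names are invented.

```python
# Minimal sketch of planar ("2D") calibration for a single camera,
# assuming Python + OpenCV; the thesis uses MATLAB, and the pattern
# size, square size, and file names below are illustrative assumptions.
import glob
import cv2
import numpy as np

PATTERN = (9, 6)      # inner-corner count of the checkerboard (assumed)
SQUARE_MM = 25.0      # side length of one square in mm (assumed)

# 3D coordinates of the board corners, defined on the board's Z = 0 plane.
board_pts = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
board_pts[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points, image_size = [], [], None
for path in sorted(glob.glob("cam1_view*.png")):      # board seen from many angles
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]                     # (width, height)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:                                     # skip views where corner detection fails
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(board_pts)
    img_points.append(corners)

# Intrinsics K, lens distortion coefficients, and one (R, t) extrinsic
# pair per board view; rms is the re-projection error in pixels.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("re-projection RMS error (px):", rms)

# When two cameras observe the same board pose, chaining their extrinsics
# registers one camera into the other's frame, which is how all four
# surround cameras can be tied to a single world coordinate system:
#   X_c2 = R2 R1^T (X_c1 - t1) + t2
```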
In a traditional stereo-camera tracking system, the object is easily occluded or moves out of the cameras' field of view. The purpose of this research is to develop a multi-camera system that can be applied to object tracking in surgical navigation. A two-dimensional calibration method is used to calibrate the multi-camera system, and all camera coordinate systems are registered to a single world coordinate system. An LED light ball is used as the marker. The multi-camera system uses four cameras to obtain different fields of view, providing more information than two cameras alone. A program is then designed that calculates a weight for each camera pair and selects the best one to continue tracking. The system improves computational efficiency and the robustness of object tracking.
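The weight calculation and camera hand-off could be structured roughly as sketched below. The record only states that a weight is computed for each camera view and the best pair is chosen by rules; the cues used here (distance of the LED centroid from the image centre and projected blob area, following Figures 2.4.1-2 and 2.4.1-3 in the list of figures below), as well as the coefficients, image size, pair names, and hysteresis margin, are all assumptions made for illustration.

```python
# Minimal sketch of a weight-based camera hand-off, assuming each weight
# combines (a) how close the LED centroid is to the image centre and
# (b) the projected LED blob area; the thesis' exact rules may differ.
import numpy as np

IMG_W, IMG_H = 640, 480          # image size (assumed)

def camera_weight(centroid, blob_area, max_area=2000.0):
    """Higher weight = better view; returns 0 when the LED is not detected."""
    if centroid is None or blob_area <= 0:           # occluded or out of view
        return 0.0
    cx, cy = centroid
    # Normalised distance from the image centre (0 at the centre, ~1 at a corner).
    d = np.hypot(cx - IMG_W / 2, cy - IMG_H / 2) / np.hypot(IMG_W / 2, IMG_H / 2)
    # Normalised blob area (a larger blob means a closer, better-resolved marker).
    a = min(blob_area / max_area, 1.0)
    return 0.5 * (1.0 - d) + 0.5 * a                 # equal weighting (assumed)

def pick_camera_pair(observations, current_pair, hysteresis=0.1):
    """observations maps a pair id (e.g. 'front', 'rear') to the (centroid, area)
    measurements of its two cameras; switch only when another pair is clearly better."""
    scores = {pid: min(camera_weight(c, a) for c, a in obs)   # a pair is limited by its worse camera
              for pid, obs in observations.items()}
    best = max(scores, key=scores.get)
    if best != current_pair and scores[best] > scores.get(current_pair, 0.0) + hysteresis:
        return best                                  # hand off to the better camera pair
    return current_pair                              # keep tracking with the current pair
```

For example, if one camera of the rear pair has lost sight of the LED, that pair's score drops to zero and tracking simply continues with the other pair; the LED centroids of whichever pair is selected are then triangulated with the calibrated camera parameters to recover the marker's 3D position.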
Table of Contents
Advisor's Recommendation
Oral Examination Committee Approval
Chang Gung University Thesis Copyright Authorization  iii
Acknowledgements  iv
Abstract (Chinese)  v
Abstract (English)  vi
Chapter 1  Introduction  1
1.1 Research Background and Motivation  1
1.2 Literature Review  5
1.2.1 Summary  10
1.3 Research Objectives  11
Chapter 2  Research Methods  13
2.1 Experimental Equipment  13
2.1.1 Aluminum Extrusion Frame  13
2.1.2 Cameras  14
2.1.3 Image Capture Card  14
2.1.4 Calibration Board  14
2.1.5 Locator  15
2.1.6 MATLAB  16
2.2 Image Processing  17
2.2.1 Image Coordinate System  17
2.2.2 Grayscale Conversion and Binarization  18
2.2.3 Dilation, Opening, and Connected-Component Labeling  20
2.2.4 Color Models  24
2.2.5 Centroid  26
2.3 Stereo Vision and the Surround Camera System  27
2.3.1 Stereo Vision  27
2.3.2 Intrinsic and Extrinsic Parameters  30
2.3.3 3D Coordinate Reconstruction  32
2.3.4 Surround Camera System and Coordinate Transformation  33
2.4 Camera Hand-off  36
2.4.1 Overview  37
2.4.2 Application Aspects  40
2.5 Locator Coordinate Transformation and the Simplex Method  43
Chapter 3  Experiments and Results  47
3.1 Surround Camera Calibration  47
3.1.1 Camera Intrinsic Parameters  48
3.1.2 Camera Extrinsic Parameters  51
3.1.3 Calibration of the Surround Multi-Camera System  52
3.2 Target Tracking with the Surround Camera System  55
3.2.1 System Workflow  55
3.2.2 Program and Interface  57
3.2.3 Experiments and Results  58
Chapter 4  Conclusions and Future Work  65
4.1 Conclusions  65
4.2 Future Work  66
References  67

List of Figures
Figure 1.2.1  VectorVision  10
Figure 2.1.1  Aluminum extrusion frame  13
Figure 2.1.4  Calibration board  15
Figure 2.1.5  Locator  16
Figure 2.2.2  Binarization illustration  20
Figure 2.2.3-1  Binarized LED ball  22
Figure 2.2.3-2  Erosion  23
Figure 2.2.3-3  Dilation  23
Figure 2.2.3-4  Binary image with three regions  24
Figure 2.2.3-5  Labeling  24
Figure 2.2.5  Centroid computation  26
Figure 2.3.1  Camera model  28
Figure 2.3.2  Relationship between the camera and world coordinate systems  30
Figure 2.3.4  Coordinate transformation in the surround camera system  35
Figure 2.4.1-1  Camera arrangement  37
Figure 2.4.1-2  Distance between the LED centroid and the image center  38
Figure 2.4.1-3  Relationship between projected LED area and distance  39
Figure 2.4.2  Weight computation workflow  43
Figure 2.5  Coordinate transformation between the LED markers and the tip  45
Figure 3.1  Number and angles of calibration-board images versus accuracy  47
Figure 3.1.1-1  Calibration-board images at different angles  48
Figure 3.1.1-2  Corner detection errors  49
Figure 3.1.1-3  Changing the corner-detection parameters  49
Figure 3.1.1-4  Before and after distortion correction  50
Figure 3.1.1-5  Re-projection error (in pixels)  50
Figure 3.1.3-1  Surround camera setup  52
Figure 3.1.3-2  Numbering order of the surround cameras  53
Figure 3.1.3  3D spatial relationship diagram  54
Figure 3.2.1  System flowchart  56
Figure 3.2.2-1  GUI  57
Figure 3.2.2-2  Adjusting the binarization threshold  58
Figure 3.2.3-1  Program switches to the rear camera pair based on the weights  59
Figure 3.2.3-2  Program switches to the front camera pair based on the weights  59
Figure 3.2.3-3  Camera 1 occluded  60
Figure 3.2.3-4  Experiment: tracing a square grid  61
Figure 3.2.3-5  Experiment: tracing a circle  63

List of Tables
Table 3.1.1  Camera intrinsic parameters  51
Table 3.1.2  Camera extrinsic parameters  51
Table 3.2.3-1  Tip coordinates while tracing the square grid  62
Table 3.2.3-2  Distances between tip points  62
Table 3.2.3-3  Tip coordinates  63