National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Detailed Record
Student: Hui-Pin Huang (黃暉斌)
Title: Design and Implementation of a Real-time Visual Tracking System Based on Multiple Cameras (基於多攝影機即時視覺追蹤系統之設計與實現)
Advisor: Ming-Yang Cheng (鄭銘揚)
Degree: Master's
Institution: National Cheng Kung University
Department: Department of Electrical Engineering (Master's and Ph.D. Programs)
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Publication year: 2008
Graduation academic year: 96 (2007–2008)
Language: Chinese
Pages: 231
Keywords: correspondence, object tracking, motion detection, visual surveillance, multiple cameras, occlusion
Cited by: 2
Views: 239
Downloads: 40
Bookmarked: 3
Abstract (translated from the Chinese):
In recent years, intelligent visual surveillance systems have received increasing attention. Their goal is to use computer-vision methods so that a system can automatically detect moving objects and track targets in the images captured by its cameras, and further classify, recognize, or analyze the motion of those objects. This thesis uses multiple cameras to design a real-time visual tracking system, overcoming the limited field of view of a single camera. A modified adaptive background subtraction method first detects the moving objects in each camera's images, and a multi-cue similarity measure tracks the specified target. For object correspondence across cameras, the epipolar constraint and a color-histogram similarity measure are used to locate the target's corresponding position. The thesis also proposes a cooperation strategy for multiple cameras: each camera can play a different role in different situations in real time, and the cameras assist one another through communication. Besides determining the target's exact corresponding position in each camera and improving tracking performance, the system can also estimate the position of an occluded corresponding object. Experimental results show that the proposed methods perform well; the system continuously tracks the specified target and handles occlusion of the target or of its corresponding objects.
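The abstract's detection step, adaptive background subtraction with a selective background update, can be sketched as follows. This is a minimal illustration only; the function name and the parameter values (`alpha`, `thresh`) are assumptions, not the thesis's actual algorithm or constants.

```python
import numpy as np

def background_subtract_step(bg, frame, alpha=0.02, thresh=30.0):
    """One frame of a basic adaptive background-subtraction loop.

    bg, frame: grayscale images as 2-D float arrays.
    alpha: background learning rate (illustrative value).
    thresh: foreground difference threshold (illustrative value).
    Returns (foreground_mask, updated_background).
    """
    diff = np.abs(frame.astype(float) - bg)
    fg_mask = diff > thresh  # pixels that differ strongly from the model
    # Blend the new frame into the background only where no motion was
    # detected, so moving targets are not absorbed into the model.
    bg = np.where(fg_mask, bg, (1.0 - alpha) * bg + alpha * frame)
    return fg_mask, bg
```

Calling this once per frame yields a foreground mask that downstream steps (object labeling, contour extraction, similarity measurement) can consume.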
Abstract (English):
Intelligent visual surveillance has attracted increasing attention in recent years. The aim of this research is to use computer vision so that the system can automatically detect moving objects and track a specific target through a sequence of video frames; the system can also classify the target and analyze its actions. Because the field of view (FOV) of a single camera is limited, this thesis adopts multiple cameras to implement a real-time tracking system. A modified adaptive background subtraction method is used to detect moving objects, and a multi-cue template matching approach is employed to track a moving target. The epipolar constraint and color histogram matching are exploited to deal with the correspondence problem that occurs when multiple cameras are used. In addition, an efficient and robust cooperation strategy is proposed to improve the tracking performance. The resulting visual tracking system can continuously track a moving target and estimate its position in other cameras even if the target is occluded in one camera's FOV. The experimental results show that the proposed approach performs satisfactorily.
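The correspondence step combines the epipolar constraint with color-histogram similarity: candidates in a second camera are filtered by distance to the epipolar line of the target, then ranked by histogram similarity. A minimal sketch of that combination, where the function names, the Bhattacharyya coefficient as the histogram measure, and the `max_dist` threshold are illustrative assumptions rather than the thesis's exact formulation:

```python
import numpy as np

def epipolar_line(F, x):
    """Epipolar line l' = F x in the second view for image point x = (u, v)."""
    return F @ np.array([x[0], x[1], 1.0])

def point_line_distance(l, x):
    """Perpendicular distance from image point x to line l = (a, b, c)."""
    a, b, c = l
    return abs(a * x[0] + b * x[1] + c) / np.hypot(a, b)

def histogram_similarity(h1, h2):
    """Bhattacharyya coefficient between two normalized color histograms
    (1.0 means identical distributions)."""
    return float(np.sum(np.sqrt(h1 * h2)))

def best_correspondence(F, x, candidates, hist_ref, hists, max_dist=5.0):
    """Among candidate points in the second camera, keep those close to the
    epipolar line of x, then pick the one whose color histogram is most
    similar to the target's. Returns the index of the best candidate,
    or None if no candidate satisfies the epipolar constraint."""
    l = epipolar_line(F, x)
    best, best_sim = None, -1.0
    for i, c in enumerate(candidates):
        if point_line_distance(l, c) > max_dist:
            continue  # violates the epipolar constraint
        sim = histogram_similarity(hist_ref, hists[i])
        if sim > best_sim:
            best, best_sim = i, sim
    return best
```

With a third camera, a second epipolar line can be intersected with the first to disambiguate candidates that pass both tests, which is the role of the epipolar-intersection verification described in the table of contents.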
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
  1.1 Preface
  1.2 Motivation and Objectives
  1.3 Literature Review
  1.4 Thesis Organization
Chapter 2  Camera Model
  2.1 Intrinsic Camera Parameters
  2.2 Extrinsic Camera Parameters
Chapter 3  Geometric Correspondence Principles for Multiple Cameras
  3.1 Planar Projective Geometry
    3.1.1 Planar Projective Transformation Theory
    3.1.2 Computing the Homography Matrix
    3.1.3 Evaluating the Homography Matrix
    3.1.4 Simulating the Homography Matrix
    3.1.5 Applications of the Homography Matrix
  3.2 Epipolar Geometry
    3.2.1 Principles of Epipolar Geometry
    3.2.2 Introduction to the Fundamental Matrix
    3.2.3 Computing the Fundamental Matrix: Linear Estimation
    3.2.4 The Singularity Constraint
    3.2.5 The Normalized Eight-Point Algorithm
    3.2.6 Evaluating the Fundamental Matrix
    3.2.7 Simulating the Fundamental Matrix
    3.2.8 Applications of Epipolar Geometry (I)
    3.2.9 Applications of Epipolar Geometry (II)
Chapter 4  Detection and Tracking of Moving Targets
  4.1 Moving-Object Detection
    4.1.1 Adaptive Background Subtraction
    4.1.2 Modified Adaptive Background Subtraction
      4.1.2.1 Object Labeling
      4.1.2.2 Overlap Classification
      4.1.2.3 Contour Extraction
      4.1.2.4 Foreground Similarity Measurement
      4.1.2.5 Background Update for Stationary Objects
  4.2 Target Tracking
    4.2.1 Multi-Cue Image Feature Matching
      4.2.1.1 Color Intensity Matching
      4.2.1.2 Color Histogram Matching
      4.2.1.3 Edge Contour Matching
      4.2.1.4 Position Energy
      4.2.1.5 Feature Fusion
    4.2.2 Template-Matching Search
Chapter 5  Communication and Cooperation among Multiple Cameras
  5.1 Two-Camera System
    5.1.1 Multiplexing Mode of the Two-Camera System
    5.1.2 Target Correspondence in the Two-Camera System
    5.1.3 Communication and Cooperation in the Two-Camera System
  5.2 Three-Camera System
    5.2.1 Finding the Exact Corresponding Object with Three Cameras
    5.2.2 Distinguishing Objects of Identical Color Distribution by Epipolar-Intersection Verification
    5.2.3 Estimating the Position of an Occluded Corresponding Object with Three Cameras
    5.2.4 Communication and Cooperation in the Three-Camera System
  5.3 Multi-Camera System
    5.3.1 Finding the Exact Corresponding Object with Multiple Cameras
    5.3.2 Handling Occluded Objects with Multiple Cameras
    5.3.3 Communication and Cooperation in the Multi-Camera System
Chapter 6  Design and Implementation of the Multi-Camera Visual Tracking System
  6.1 General Hardware Limitations
  6.2 Experimental Hardware
  6.3 System Interface Design and Operating Flow
Chapter 7  Experimental Results
  7.1 Target Tracking and Correspondence Experiments with the Two-Camera System
    7.1.1 Multiplexing Experiments
    7.1.2 Object-Correspondence Experiments
    7.1.3 Communication and Cooperation Experiments
      7.1.3.1 Tracking Failure without Camera Cooperation
      7.1.3.2 Tracking Failure with Camera Cooperation
      7.1.3.3 Target Occlusion without Camera Cooperation
      7.1.3.4 Target Occlusion with Camera Cooperation
  7.2 Target Tracking and Correspondence Experiments with the Three-Camera System
    7.2.1 Communication and Cooperation Experiments
    7.2.2 Distinguishing Similar Corresponding Objects
      7.2.2.1 Incorrect Correspondence of Similar Objects: Two Cameras
      7.2.2.2 Correct Correspondence of Similar Objects: Three Cameras
  7.3 Target Tracking and Correspondence Experiments with the Multi-Camera System
Chapter 8  Conclusions and Suggestions
References
Curriculum Vitae