臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

詳目顯示 (Detailed Record)

Author: 楊松輯
Author (English): Sung-Chi Yang
Title: 具自動化對焦功能之立體視覺影像追蹤
Title (English): Stereo Vision-Based Image Tracking System with Auto-Focus Capabilities
Advisor: 陳冠宇
Advisor (English): Kuan-Yu Chen
Degree: Master's
Institution: 中原大學 (Chung Yuan Christian University)
Department: 機械工程研究所 (Graduate Institute of Mechanical Engineering)
Discipline: Engineering
Field: Mechanical Engineering
Thesis Type: Academic thesis
Year of Publication: 2012
Graduation Academic Year: 100 (ROC calendar)
Language: Chinese
Pages: 82
Chinese Keywords: 被動式對焦 (passive auto-focus); 立體視覺 (stereo vision); 顏色辨識 (color identification); 影像追蹤 (image tracking)
English Keywords: color identification; stereo vision; passive auto-focus; image tracking
Statistics:
  • Cited: 5
  • Views: 303
  • Downloads: 1
  • Bookmarked: 0
摘要 (Abstract)
Most image surveillance systems suffer from insufficient camera resolution, fixed shooting angles, overly distant targets, and inadequate ambient light, so the recorded images are of poor quality, key frames are missed, or objects cannot be clearly identified; although surveillance equipment is installed, it provides no images of reference value. The purpose of this thesis is to develop a stereo vision image tracking system composed of two cameras that improves on these shortcomings. First, the system can detect and track moving targets in the environment without missing key frames. Second, this thesis proposes a hybrid passive focusing method that combines a stereo vision algorithm with sharpness-based focus-point estimation. The stereo vision algorithm computes the distance between the moving target and the cameras, from which the camera's zoom magnification is estimated and the focus-point search range is narrowed. The sharpness-based focus-point estimation, paired with multi-threshold color filtering, simplifies the image sharpness function into a single-peaked curve; cubic spline interpolation then predicts the focus position quickly and accurately, so that sharp and appropriately magnified images of the target are recorded. This effectively overcomes the problems of the image being too small when the target is far away, and of losing focus when zooming in.
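The distance-estimation step described in the abstract can be illustrated with the textbook parallel-axis stereo model. The thesis itself uses a modified stereo vision model (see reference [38] and Figs. 3.13–3.14), which is not reproduced on this record page, so the formula below (Z = f·B/d) and the linear zoom rule are simplifying assumptions for illustration only; all function names and parameter values are invented.

```python
def stereo_depth(focal_px, baseline_m, x_left, x_right):
    """Depth from a parallel-axis stereo pair: Z = f * B / d,
    where d = x_left - x_right is the disparity in pixels,
    f the focal length in pixels, and B the baseline in meters."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("target must project with positive disparity")
    return focal_px * baseline_m / disparity


def zoom_for_target(depth_m, ref_depth_m=1.0):
    """Hypothetical magnification rule: scale zoom linearly with depth so
    the target keeps a constant apparent size. This is an illustrative
    assumption, not the thesis's measured area-ratio table (Fig. 3.19)."""
    return depth_m / ref_depth_m
```

With an 800-pixel focal length, a 0.1 m baseline, and an 8-pixel disparity, the target lies 10 m away; the estimated depth then both sets the zoom and bounds the focus search interval.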


ABSTRACT
Most video surveillance systems are hampered by insufficient camera resolution, fixed shooting angles, distant targets, and inadequate ambient lighting. The recorded images are consequently of poor quality, key frames are missed, or objects cannot be clearly identified, so that the surveillance equipment, although present, yields no footage of reference value. To remedy these shortcomings, this research develops an image tracking system based on dual-camera stereo vision. First, the system detects moving targets and tracks them so that key frames are not missed. Second, this thesis proposes a hybrid passive focusing method that combines a stereo vision algorithm with focus-point estimation based on an image sharpness function. The stereo vision algorithm not only computes the distance between the target and the cameras but also estimates the required zoom magnification and narrows the focus-point search range. Focus-point estimation based on the image sharpness function, combined with a multi-threshold color filtering method, reduces the sharpness function to a single peak, and cubic spline interpolation then predicts the focus position quickly and accurately, so that sharp, moderately magnified images of the target are recorded. This resolves the problems of the image being too small when the target is far away and of defocus when zooming in.
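The final estimation step can be sketched as follows: fit a natural cubic spline through a handful of sharpness samples (the single-peaked curve obtained after multi-threshold color filtering) and take the position of the spline's maximum as the predicted focus point. This is an illustrative reimplementation of standard spline interpolation, not the thesis's MATLAB code; sample data and function names are invented.

```python
def natural_cubic_spline(xs, ys):
    """Build a natural cubic spline through (xs[i], ys[i]) and return it
    as a callable. Standard tridiagonal construction."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    alpha = [0.0] * (n + 1)
    for i in range(1, n):
        alpha[i] = 3 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    l = [1.0] * (n + 1)
    mu = [0.0] * (n + 1)
    z = [0.0] * (n + 1)
    for i in range(1, n):
        l[i] = 2 * (xs[i + 1] - xs[i - 1]) - h[i - 1] * mu[i - 1]
        mu[i] = h[i] / l[i]
        z[i] = (alpha[i] - h[i - 1] * z[i - 1]) / l[i]
    c = [0.0] * (n + 1)
    b = [0.0] * n
    d = [0.0] * n
    for j in range(n - 1, -1, -1):
        c[j] = z[j] - mu[j] * c[j + 1]
        b[j] = (ys[j + 1] - ys[j]) / h[j] - h[j] * (c[j + 1] + 2 * c[j]) / 3
        d[j] = (c[j + 1] - c[j]) / (3 * h[j])

    def spline(x):
        # Locate the interval containing x, then evaluate the local cubic.
        j = 0
        while j < n - 1 and x > xs[j + 1]:
            j += 1
        dx = x - xs[j]
        return ys[j] + b[j] * dx + c[j] * dx * dx + d[j] * dx ** 3

    return spline


def estimate_focus(positions, sharpness, grid=1000):
    """Predict the in-focus lens position: interpolate the sampled
    sharpness values, evaluate the spline densely over the (already
    narrowed) search range, and return the argmax."""
    s = natural_cubic_spline(positions, sharpness)
    lo, hi = positions[0], positions[-1]
    candidates = [lo + i * (hi - lo) / grid for i in range(grid + 1)]
    return max(candidates, key=s)
```

Because the filtered sharpness curve has a single peak, only a few coarse samples are needed before interpolation, which is what makes the prediction fast relative to a full global search.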


目錄 (Table of Contents)
摘要 (Abstract in Chinese) i
ABSTRACT ii
Table of Contents iii
List of Figures vi
List of Tables ix
Chapter 1 Introduction 1
1.1 Research Motivation and Objectives 1
1.2 Literature Review 2
1.3 Research Method 4
1.4 Thesis Organization 5
Chapter 2 Moving-Target Image Detection and Tracking 6
2.1 Grayscale Conversion 7
2.2 Image Subtraction 7
2.3 Image Binarization 9
2.4 Morphological Image Processing 10
2.4.1 Dilation 10
2.4.2 Erosion 11
2.4.3 Opening and Closing 11
2.5 Median Filtering 12
2.6 Image Labeling 13
2.7 Area Filtering 14
2.8 Centroid Calculation 15
2.9 Color Spaces and Conversion 15
2.9.1 Introduction to Color Spaces 16
2.9.2 The RGB Color Space 16
2.9.3 The HSV Color Space 17
2.9.4 Color Space Conversion 18
2.10 Moving-Target Identification 19
2.11 Region of Interest 20
2.11.1 Projection Method 21
2.11.2 Color Filtering 22
Chapter 3 Hybrid Passive Focusing 23
3.1 Development of Focusing Techniques 23
3.2 Principles of Optical Imaging 26
3.3 Imaging Distance 27
3.4 Multi-Threshold Color Region of Interest 31
3.5 Stereo Vision 33
3.6 Image Sharpness Functions 38
3.6.1 Gray-Level Contrast Difference 39
3.6.2 Standard Deviation 39
3.6.3 Modulation Transfer Function 39
3.6.4 Discrete Cosine Transform 40
3.6.5 Discrete Fourier Transform 40
3.7 Evaluation of Focusing Criteria 42
3.7.1 Stereo Vision Depth 42
3.7.2 Sharpness Function Accuracy 44
3.8 Focus-Point Estimation 47
Chapter 4 Experimental Setup 52
4.1 Hardware Architecture 52
4.2 Hardware Specifications 54
4.3 Software Environment 57
Chapter 5 Experimental Results and Discussion 58
5.1 Stereo Vision Lookup-Table Focusing Experiment 58
5.2 Hybrid Passive Focusing Experiment 61
5.3 Hybrid Passive Focusing Under Daylight 63
5.4 Hybrid Passive Focusing Accuracy Experiment 64
5.5 Discussion 66
Chapter 6 Conclusions and Future Work 68
6.1 Conclusions 68
6.2 Future Work 68
References 70
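The detection pipeline outlined in Chapter 2 (frame subtraction, binarization, area filtering, centroid calculation) can be sketched minimally as below. The thesis implements these steps in MATLAB; this Python sketch uses invented function names and a synthetic threshold, and omits the morphological and labeling stages.

```python
def motion_mask(frame_a, frame_b, thresh):
    """Frame differencing + binarization (Secs. 2.2-2.3): mark pixels whose
    gray-level change between consecutive frames exceeds a threshold."""
    return [[1 if abs(a - b) > thresh else 0 for a, b in zip(ra, rb)]
            for ra, rb in zip(frame_a, frame_b)]


def area(mask):
    """Foreground pixel count, the quantity used for area filtering (Sec. 2.7)
    to reject small noise blobs."""
    return sum(sum(row) for row in mask)


def centroid(mask):
    """Centroid of the foreground region (Sec. 2.8), which serves as the
    tracking point for the moving target; None if the mask is empty."""
    pts = [(x, y) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    if not pts:
        return None
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
```

In the full system the binary mask would additionally pass through opening/closing and median filtering (Secs. 2.4-2.5) before the centroid is computed.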

圖目錄 (List of Figures)
Fig. 2.1 Moving-target detection and tracking flow 6
Fig. 2.2 Grayscale image subtraction: (A) grayscale image 1; (B) grayscale image 2; (C) difference image 8
Fig. 2.3 Real-image subtraction: (A) grayscale image 1; (B) grayscale image 2; (C) difference image 8
Fig. 2.4 Binarized images: (A) RGB full-color image; (B) threshold 0.3; (C) threshold 0.6 9
Fig. 2.5 Image dilation: (A) input binary image; (B) dilated image; (C) structuring element 10
Fig. 2.6 Image erosion: (A) input binary image; (B) eroded image; (C) structuring element 11
Fig. 2.7 Dilation and erosion: (A) original binary image; (B) opening; (C) closing; (D) opening followed by closing 12
Fig. 2.8 Median filtering: (A) original image; (B) median replacement; (C) filtered result; (D) filter mask 13
Fig. 2.9 Image labeling: (A) 4-connectivity mask; (B) 8-connectivity mask; (C) original image; (D) 4-connectivity result; (E) 8-connectivity result 14
Fig. 2.10 Area filtering: (A) original labeled image; (B) after area filtering 14
Fig. 2.11 Centroid calculation: (A) original image; (B) binary image with centroid marked 15
Fig. 2.12 RGB color space model 17
Fig. 2.13 HSV color space color distribution 17
Fig. 2.14 Moving-area detection flow 19
Fig. 2.15 Moving-area detection of a target moving right to left 20
Fig. 2.16 Region of interest of a moving target 21
Fig. 2.17 Color-filtered binarization: (A) original RGB image; (B) H-S threshold matrix region; (C) color-filtered binary image 22
Fig. 3.1 Active focusing schematic 24
Fig. 3.2 Basic principle of phase detection 25
Fig. 3.3 Lens imaging schematic 26
Fig. 3.4 Original images: (A)-(C) from blurred to sharp 28
Fig. 3.5 Gray-level maps: (A)-(C) gray-level distributions of Fig. 3.4 (A)-(C) 28
Fig. 3.6 Imaging at different distances: (A) lens imaging schematic; (B) image sharpness function 29
Fig. 3.7 Sharpness curves of two nearby objects: (A) out of focus; (B) front object in focus; (C) rear object in focus; (D) sharpness curves 30
Fig. 3.8 Single-threshold performance under different light sources: (A) originals under different light sources; (B) threshold-filtered binarization 31
Fig. 3.9 Multi-threshold colors built under different light sources: letters A-D denote light sources; digits 1-2 denote blurred to sharp 32
Fig. 3.10 Multi-threshold H-S matrix: (A) color information; (B) single threshold; (C) multiple thresholds 32
Fig. 3.11 Stereo vision 3D schematic 34
Fig. 3.12 Parameter measurement schematic 34
Fig. 3.13 Stereo vision X-Z plane schematic 35
Fig. 3.14 Stereo vision Y-Z plane schematic 36
Fig. 3.15 Global focus search procedure 38
Fig. 3.16 Fourier transform spectra: (A) blurred-image spectrum; (B) sharp-image spectrum 41
Fig. 3.17 Stereo vision image processing flow 42
Fig. 3.18 Images captured at near-to-far stereo vision depths: (A) left-eye image; (B) image A after color filtering; (C) right-eye image; (D) image C after color filtering 43
Fig. 3.19 Target area ratios at each magnification 44
Fig. 3.20 Sharpness curves versus depth of field at different magnifications 45
Fig. 3.21 Sharpness function accuracy test: (A) bright; (B) dim 46
Fig. 3.22 Peaks of each sharpness function under global search: (A) spectral radius 2; (B) spectral radius 5 48
Fig. 3.23 Focus-point estimation at different step sizes: (A1), (B1) spectral radius 2; (A2), (B2) spectral radius 5 50
Fig. 4.1 Tracking platform peripherals 52
Fig. 4.2 Hardware control flowchart 53
Fig. 4.3 Stereo vision motion platform 54
Fig. 4.4 Five-phase stepper motor and driver 55
Fig. 4.5 DFK 21AF04-Z2 55
Fig. 4.6 IEEE 1394 interface card 56
Fig. 4.7 Motion control card 56
Fig. 4.8 Basic architecture of the MATLAB development environment 57
Fig. 5.1 Stereo vision lookup-table focusing and tracking flow 58
Fig. 5.2 Stereo vision depth lookup-table focusing experiment: (A) left camera; (B) right camera 60
Fig. 5.3 Hybrid focusing experiment: (A) left camera; (B) right camera 62
Fig. 5.4 Hybrid focusing under daylight: (A) left camera; (B) right camera 64
Fig. 5.5 Target positions: (A) bright light source; (B) dim light source 64
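Among the sharpness functions catalogued in Section 3.6, the standard-deviation measure (Sec. 3.6.2) is the simplest to illustrate: a sharp image has higher gray-level contrast, hence a larger standard deviation, than a blurred one (compare the spectra of Fig. 3.16). The sketch below is a generic implementation of that idea, not the thesis's code.

```python
def sharpness_std(gray):
    """Standard-deviation sharpness measure over a 2-D gray-level image
    (list of rows): sharper images spread their gray levels more widely
    around the mean, so a larger value means a better-focused image."""
    vals = [v for row in gray for v in row]
    n = len(vals)
    mean = sum(vals) / n
    return (sum((v - mean) ** 2 for v in vals) / n) ** 0.5
```

During focusing, this value is sampled at successive lens positions to trace the sharpness curve whose peak marks the in-focus position.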

表目錄 (List of Tables)
Table 3.1 Stereo vision Z-direction depth errors 43
Table 3.2 Sharpness function focus errors 46
Table 3.3 Sharpness function focus errors 47
Table 3.4 Stereo vision depth interpolation search intervals 49
Table 4.1 PC specifications 54
Table 4.2 Five-phase stepper motor specifications 54
Table 4.3 DFK 21AF04-Z2 specifications 55
Table 4.4 ADLINK PCI-8134 specifications 56
Table 5.1 Focusing accuracy comparison (bright light source) 65
Table 5.2 Focusing accuracy comparison (dim light source) 65


參考文獻 (References)
[1]Feng Li and Hong Jin, “A fast auto focusing method for digital still camera”, Proceedings of the Fourth International Conference on Machine Learning and Cybernetics, China Guangzhou, 2005, Aug. 18-21, pp. 5001-5005.
[2]曾彥博, "A Study of a Stereo Vision Servo Dual-Target Tracking System," M.S. thesis, Dept. of Mechanical Engineering, Chung Yuan Christian University, 2010.
[3]G. L. Foresti, L. Marcenaro, and C. S. Regazzoni, “Automatic detection and indexing of video-event shots for surveillance applications,” IEEE Trans. Multimedia, Vol.4, No. 4, pp. 459-471, 2002.
[4]D. J. Dailey, F. W. Cathey, and S. Pumrin, “An algorithm to estimate mean traffic speed using uncalibrated cameras,” IEEE Trans. Intelligent Transportation Systems, Vol. 1, No. 2, pp. 98-107, 2000.
[5]C. Anderson, P. Burt, G. van der Wal, “Change detection and tracking using pyramid transformation techniques,” In Proceedings of SPIE - Intelligent Robots and Computer Vision, Vol. 579, pp. 72–78, 1985.
[6]C Kim and J-N Hwang, “Fast and automatic video object segmentation and tracking for content-based applications, ” IEEE Transactions on Circuits and Systems for Video Technology , Vol. 12, no. 2, pp. 122-129, Feb 2002
[7]C. Wren, A. Azarbayejani, T. Darrell, and A. Pentland, “Pfinder: real-time tracking of the human body,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 7, pp. 780-785, 1997.
[8]I. Haritaoglu, D. Harwood, and L. S. Davis, “ W4: who? when? where? what? a real-time system for detecting and tracking people,” Third Face and Gesture Recognition Conference, pp. 222-227, Apr. 1998.
[9]J. Barron, D. Fleet, and S. Beauchemin, “Performance of optical flow techniques,” International Journal of Computer Vision, Vol. 12, No. 1, pp. 42-77, 1994.
[10]B. K. P. Horn and B. G. Schunck, “Determining optical flow,” Artificial Intelligence, Vol. 17, No. 1-3, pp. 185-203, 1981.
[11]C. S. Fuh and P. Maragos, "Region-based optical flow estimation," Proc. of 1989 IEEE Conference on Computer Vision and Pattern Recognition, San Diego, CA, pp. 130-133, 1989.
[12]T. Mituyosi, Y. Yagi, M.Tachida, “Real-time human feature acquisition and human tracking by omnidirectional image sensor,” IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, MFI2003., pp. 258- 263, Aug. 2003.
[13]Kai-Tai Song and Chen-Chu Chien, “Visual Tracking of a Moving Person for a Home Robot,” Journal of Systems and Control Engineering, Vol. 219, No. 14, pp. 259-269, 2005.
[14]Kai-Tai Song and Jen-Chao Tai, “Image-Based Traffic Monitoring With Shadow Suppression,” Proceedings of the IEEE , Vol.95, No.2, pp.413-426, Feb. 2007.
[15]C. Veenman, M. Reinders, and E. Backer. “Resolving motion correspondence for densely moving points,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23, No. 1, pp.54-72, Jan 2001.
[16]鍾宜岑, "Detection and Real-Time Tracking of Moving Objects Against a Dynamic Background," M.S. thesis, College of Electrical Engineering, National Chiao Tung University, 2007.
[17]C. M. Kuo, C. H. Hsieh, H.C.Lin, and P. C. Lu, “Motion estimation algorithm with Kalman filter,” IEEE Transactions on Electronics Letter, Vol. 30, No. 15, pp. 1204-1206, 1994.
[18]I. Zuriarrain, F. Lerasle, N. Arana, and M. Devy. “An MCMC-based particle filter for multiple person tracking,” 19th International Conference on Pattern Recognition, Vol.23, No.1, pp.1-4, 8-11 Dec 2008.
[19]林準, "Tracking Illegal Intruders in a Restricted Area Using a Particle Filter," M.S. thesis, Dept. of Computer Science and Information Engineering, National Central University, 2009.
[20]Jie He, Rongzhen Zhou, and Zhiliang Hong, "Modified fast climbing search auto-focus algorithm with adaptive step-size searching technique for digital camera," IEEE Trans. on Consumer Electronics, Vol. 49, pp. 257-262, May 2003.
[21]G. Yang, B.J. Nelson, “Wavelet-based auto-focusing and unsupervised segmentation of microscopic images,” IEEE International Conference on Intelligent Robots and Systems, 2003, pp. 2143-2148.
[22]N. Kehtarnavaz, H.-J. Oh, “Development and real-time implementation of a rule-based auto-focus algorithm,” Real-Time Imaging, 197-203.2003.
[23]Chih-yung Chen, Rey-chue Hwang, Yu-ju Chen. “A passive autofocus camera control system,” Applied Soft Computing, 2010, pp. 296-303.
[24]M. Gamadia, M. Rahman, and N. Kehtarnavaz, "Performance metrics for passive auto-focus search algorithms for digital and smart-phone cameras," Journal of Imaging Science and Technology, Vol. 20, 013007, Jan/Feb 2011.
[25]Baina, J. and J. Dublet, “Automatic focus and iris control for video cameras,” in Proc. of the Fifth Int. Conf. on Image Processing and its Applications, Edinburgh, Jul.4-6, pp.232-235,1995.
[26]J. Jeon, J. Lee, and J. Paik, “Robust Focus Measure for Unsupervised Auto-Focusing Based on Optimum Discrete Cosine Transform Coefficients, ” IEEE Transactions on Consumer Electronics, vol. 57, no. 1,pp. 1-5, Feb. 2011.
[27]A. Santos, C. Solorzano, J. Pena, N. Malpica, and F. Pozo, “Evaluation of autofocus functions in molecular cytogenetic analysis,” Journal of Microscopy, vol. 188, pp. 264-272, June 1997.
[28]Christopher F. Batten, "Autofocusing and Astigmatism Correction in the Scanning Electron Microscope," M.Phil. thesis, University of Cambridge, 2000.
[29]June-Sok Lee, You-Young Jung, Byung-Soo Kim, and Sung-Jea Ko, "An advanced video camera system with robust AF, AE, and AWB control," IEEE Transactions on Consumer Electronics, Vol. 47, No. 3, pp. 694-699, Aug. 2001.
[30]Quan Feng, Ke Han, and Xiu-chang Zhu, "An auto-focusing method for different object distance situation," International Journal of Computer Science and Network Security, Vol. 7, pp. 31-35, 2007.
[31]S. Yousefi, M. Rahman, N. Kehtarnavaz, and M. Gamadia,“A new auto-focus sharpness function for digital and smart-phone cameras,” Proceedings of IEEE International Conference on Consumer Electronics, pp. 478-488, Las Vegas, Jan 2011.
[32]繆紹綱, 數位影像處理 (Digital Image Processing), 普林斯頓國際有限公司, 2007.
[33]Adams, A., The Camera, New York Graphic Society, Boston, 1980.
[34]Lian-jie Liu, Ya-yu Zheng, Jia-qin Feng, Li Yu. “A fast auto-focusing technique for multi-objective situation,” 2010 International Conference on Computer Application and System Modeling, vol.1, pp.V1-607-V1-610, Oct. 2010.
[35]Quan Feng, Ke Han, Xiu-chang Zhu, “A New Auto-focusing Method Based on the Center Blocking DCT,” Fourth International Conference on Image and Graphics, pp.32-38, Aug. 2007.
[36]Tsung-Han Tsai, Chung-Yuan Lin, “A new auto-focus method based on focal window searching and tracking approach for digital camera, ” International Symposium on Communications, Control and Signal Processing, pp.650-653, March 2008.
[37]張文龍, "Design and Implementation of Human-Robot Interaction Control for an Autonomous Mobile Robot," M.S. thesis, Dept. of Mechanical Engineering, Chung Yuan Christian University, 2007.
[38]K. Y. Chen, C. C. Chien, W. L. Chang, and C. C. Hsieh, “Improving the accuracy of depth estimation using a modified stereo vision model in binocular vision,” ISMTII, 2011.
[39]Image Acquisition Toolbox User's Guide, Version 4.2, The MathWorks, Inc., 2011.
[40]Image Processing Toolbox User's Guide, Version 7.3, The MathWorks, Inc., 2011.
