Author: 林家瑜 (Jia Yu)
Title: 以連通區域標記為基礎的自動景深估測方法之研究
Title (English): An Automatic Depth Map Estimation System using Connected-Component Labeling method
Advisor: 黃鎮淇 (Jen-Chi Huang)
Oral Defense Committee: 黃鎮淇 (Jen-Chi Huang)
Oral Defense Date: 2014-07-16
Degree: Master's
Institution: 國立屏東商業技術學院 (National Pingtung Institute of Commerce)
Department: 資訊工程系(所) (Department of Information Engineering)
Discipline: Engineering
Academic Field: Electrical and Computer Engineering
Document Type: Academic thesis
Publication Year: 2014
Graduation Academic Year: 102 (2013-14)
Language: Chinese
Pages: 156
Keywords (Chinese): 色彩衰減、景深估測、深度立體感、連通區域標記、物件分割
Keywords (English): Color Reduce, Depth Estimation, Depth Perception, Connected-Component Labeling, Object Segmentation
Record statistics:
  • Cited by: 2
  • Views: 228
  • Downloads: 5
  • Bookmarks: 0
With the rapid development of technology, 3D stereoscopic TV has become a mainstream consumer electronics product. The viewing quality of a 3D display, however, depends on how the relative positions of the scene and its objects are rendered, that is, on the perceived depth of each object, which makes depth map estimation a key focus of current industry development.
We propose an automatic depth estimation method that converts the left and right images captured by a binocular camera into a depth map. First, a color reduction step is applied to the right image so that similar colors fall into the same region, and connected-component labeling (CCL) is then used to extract the regions of equal intensity. The extracted object regions are stored across 256 gray-level layers, and each layer is used in turn as a mask to segment out its object.
Next, each object region extracted from the right image is subtracted from the correspondingly shaped region in the left image, and the number of pixels whose difference is close to zero is recorded. The left image is then shifted one pixel to the left and the subtraction repeated, continuing until the image boundary is reached, with the count recorded at every shift. The shift that yields the largest number of near-zero pixels gives the horizontal disparity of that object region between the two images, and the disparity is finally converted into a depth value.
Experiments show that the proposed method estimates depth maps automatically for single objects, multiple objects, and both simple and complex backgrounds.
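The segmentation stage described in the abstract can be illustrated with a short sketch. The following Python/OpenCV snippet is a minimal illustration under stated assumptions, not the thesis implementation: uniform gray-level quantization stands in for the color-reduction step, OpenCV's connectedComponents provides the labeling, and the function name segment_regions and the levels parameter are hypothetical choices made for this example.

```python
import cv2
import numpy as np

def segment_regions(right_bgr, levels=8):
    """Color-reduce the right image, then label connected regions.

    A minimal sketch of the segmentation stage: uniform quantization
    stands in for the color-reduction step, and OpenCV's
    connectedComponents provides the labeling.  Returns a list of
    binary masks, one per labeled region.
    """
    gray = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    step = 256 // levels
    reduced = (gray // step) * step              # quantize to a few gray levels
    masks = []
    for value in np.unique(reduced):
        # Pixels sharing the same reduced gray value form candidate regions.
        same_value = (reduced == value).astype(np.uint8)
        num_labels, labels = cv2.connectedComponents(same_value)
        for lbl in range(1, num_labels):         # label 0 is the background
            masks.append((labels == lbl).astype(np.uint8))
    return masks
```

Each returned mask corresponds to one labeled region and can be used directly as the matching window in the shift-and-subtract stage.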
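The matching stage can be sketched in the same spirit: for each mask from the right image, shift the left image horizontally, subtract, count near-zero pixels, and keep the shift with the largest count. Again this is only an illustrative sketch under the same Python/OpenCV assumptions; the tolerance tol, the search range max_shift, the helper names estimate_disparity and build_depth_map, and the linear disparity-to-gray-level mapping at the end are illustrative choices rather than the thesis's exact parameters.

```python
import cv2
import numpy as np

def estimate_disparity(left_gray, right_gray, mask, max_shift=64, tol=2):
    """Find the horizontal shift that best matches a masked region.

    For each candidate shift d, the left image is moved d pixels to the
    left, subtracted from the right image inside the mask, and the number
    of near-zero differences is counted; the d with the highest count is
    taken as the region's disparity.
    """
    w = right_gray.shape[1]
    region = mask > 0
    best_shift, best_count = 0, -1
    for d in range(max_shift):
        shifted = np.zeros_like(left_gray)
        shifted[:, :w - d] = left_gray[:, d:]    # shift left image d pixels to the left
        diff = cv2.absdiff(right_gray, shifted)
        count = int(np.count_nonzero((diff <= tol) & region))
        if count > best_count:
            best_shift, best_count = d, count
    return best_shift

def build_depth_map(left_gray, right_gray, masks, max_shift=64):
    """Paint each region with a gray level proportional to its disparity."""
    depth = np.zeros(right_gray.shape, np.uint8)
    for mask in masks:
        d = estimate_disparity(left_gray, right_gray, mask, max_shift)
        depth[mask > 0] = min(255, d * 255 // max_shift)  # larger disparity = nearer = brighter
    return depth
```

Running build_depth_map over the masks from the previous sketch yields a gray-level depth map in which larger disparities (nearer objects) appear brighter.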
Acknowledgements
Chinese Abstract
Abstract
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Preface
1.2 Research Objectives
Chapter 2 Background
2.1 Depth Estimation for Stereoscopic Images
2.1.1 Binocular Disparity
2.1.2 Passive Stereo Vision
2.1.3 Stereoscopic Imaging Techniques
2.2 Color Reduction
2.3 Connected-Component Labeling
2.3.1 Connected-Component Analysis
2.3.2 Connected-Component Analysis Algorithms
2.4 Thresholding
2.5 Depth Estimation
2.6 Stereoscopic Video Coding Techniques
2.6.1 Stereo Image Pair Coding
2.6.2 2D+Depth Data Representation
2.7 Disparity Matching Methods for Rectified Left and Right Images
2.7.1 Line-Segment Matching Method [2]
2.7.2 Edge-Adaptive Block Matching Algorithm [3]
Chapter 3 Methodology
3.1 OpenCV (Open Source Computer Vision Library)
3.2 Image Color Reduction
3.3 Connected-Component Labeling
3.4 Masking
3.5 Image Subtraction Matching
3.6 Displaying Object Depth
Chapter 4 Results
4.1 Results Using Connected-Component Labeling
4.1.1 Geometric Shapes
4.1.2 Standard Binocular Stereo Image Pairs
4.2 Comparison with Other Matching Methods
4.2.1 Comparison with the Line-Segment Matching Method [2]
4.2.2 Comparison with Edge-Adaptive Block Matching [3]
Chapter 5 Conclusions and Future Work
5.1 Conclusions
5.2 Future Work
References
Appendices
Appendix 1 Geometric Shapes
Appendix 1.1 Geometric Shapes: Segmentation
Appendix 1.2 Geometric Shapes: Depth Maps
Appendix 1.3 Geometric Shapes: Matched Horizontal Offset Tables
Appendix 2 Tsukuba
Appendix 2.1 Tsukuba: Segmentation
Appendix 2.2 Tsukuba: Depth Maps
Appendix 2.3 Tsukuba: Matched Horizontal Offset Tables
Appendix 3 Teddy
Appendix 3.1 Teddy: Segmentation
Appendix 3.2 Teddy: Depth Maps
Appendix 3.3 Teddy: Matched Horizontal Offset Tables
Appendix 4 Plastic
Appendix 4.1 Plastic: Segmentation
Appendix 4.2 Plastic: Depth Maps
Appendix 4.3 Plastic: Matched Horizontal Offset Tables
[1] 黃怡菁、黃乙白、謝漢萍, "3D立體顯示技術," 科學發展, no. 451, July 2010.
[2] 郭子豪, "基於線段比對之快速深度估測法," Master's thesis, retrieved from the National Digital Library of Theses and Dissertations in Taiwan, August 2012.
[3] 陳正豪, "具邊緣適應性區塊比對與不可靠區域深度修復之視差估計演算法," Master's thesis, retrieved from the National Digital Library of Theses and Dissertations in Taiwan, July 2011.
[4] 賴文能、陳韋志, "淺談 2D 至 3D 視訊轉換技術," 影像與識別, vol. 16, no. 2, pp. 61-75, 2010.
[5] C. Fehn, "A 3D-TV Approach Using Depth-Image-Based Rendering (DIBR)," Proceedings of Visualization, Imaging and Image Processing (VIIP), pp. 482-487, 2003.
[6] C. Fehn, "Depth-Image-Based Rendering (DIBR), Compression and Transmission for a New Approach on 3D-TV," Stereoscopic Displays and Virtual Reality Systems XI, pp. 93-104, 2004.
[7] D. Bradley and G. Roth, "Adaptive thresholding using the integral image," Journal of Graphics Tools, vol. 12, no. 2, pp. 13-21, 2007.
[8] F. Blais, M. Picard, and G. Godin, "Accurate 3D acquisition of freely moving objects," Proceedings of the 2nd International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT), pp. 422-429, 2004.
[9] G. Bradski and A. Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library, O'Reilly, 2008.
[10] H. Yamanoue, M. Okui, and F. Okano, "Geometrical Analysis of Puppet-Theater and Cardboard Effects in Stereoscopic HDTV Images," IEEE Transactions on Circuits and Systems for Video Technology, vol. 16, no. 6, pp. 744-752, 2006.
[11] L. Zhang and W. J. Tam, "Stereoscopic Image Generation Based on Depth Images for 3D TV," IEEE Transactions on Broadcasting, vol. 51, no. 2, pp. 191-199, 2005.
[12] L. He, Y. Chao, and K. Suzuki, "A run-based two-scan labeling algorithm," IEEE Transactions on Image Processing, vol. 17, no. 5, pp. 749-756, 2008.
[13] Middlebury Stereo Datasets, [online]. Available: http://vision.middlebury.edu/stereo/data/
[14] N. R. Pal and S. K. Pal, "A review on image segmentation techniques," Pattern Recognition, vol. 26, no. 9, pp. 1277-1294, 1993.
[15] N. Qian, "Binocular Disparity and the Perception of Depth," Neuron, vol. 18, pp. 359-368, 1997.
[16] Q. Wei, "Converting 2D to 3D: A Survey," Research Assignment, Information and Communication Theory Group (ICT), Delft University of Technology, December 2005.
[17] R. Laganière, OpenCV 2 Computer Vision Application Programming Cookbook, Packt Publishing, pp. 41-48, 2011.
[18] S.-Y. Kim, E.-K. Lee, and Y.-S. Ho, "Generation of ROI Enhanced Depth Maps Using Stereoscopic Cameras and a Depth Camera," IEEE Transactions on Broadcasting, vol. 54, no. 4, pp. 732-740, Dec. 2008.