Author: 黃舒苑
Author (English): Su-Yuan Huang
Title (Chinese): 基於分析邊緣視差之影像反光移除
Title (English): Reflection Removal for Binocular Images by Analyzing Edge Disparity Values
Advisor: 莊永裕
Advisor (English): Yung-Yu Chuang
Oral defense committee: 葉正聖, 吳賦哲
Oral defense committee (English): Jeng-Sheng Yeh, Fu-Che Wu
Date of oral defense: 2014-07-28
Degree: Master's
Institution: 國立臺灣大學 (National Taiwan University)
Department: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering)
Discipline: Engineering
Field: Electrical Engineering and Computer Science
Document type: Academic thesis
Year of publication: 2014
Academic year of graduation: 102 (2013-2014)
Language: English
Pages: 28
Keywords (Chinese): 反光, 反光移除, 影像深度差距
Keywords (English): reflection, reflection removal, image disparity
Abstract (Chinese, translated):
In everyday photography, when a scene contains glass or another glossy surface, the reflections it produces are easily captured in the photograph as well. Although this is an ordinary physical phenomenon, such reflections can spoil the image the photographer originally intended to capture. Removing reflections from images has therefore long been a source of frustration for photographers.
In this thesis, we attempt to solve this problem with a pair of binocular photographs. Just as with the human left and right eyes, objects at different depths appear with different positional offsets in the left and right images captured by a stereo camera, and these offsets (disparities) are precisely what produces our sense of depth. Because a reflection is equivalent to a virtual layer on the far side of the reflecting surface, at the same distance from that surface as the reflected object, the reflection also carries its own disparity in the images. We first apply stereo matching to the image gradients of the left and right images separately, then classify image edges into a reflection layer and a background layer according to the disparity characteristics of reflections. After a layer-separation algorithm splits each image into a reflection layer and a background layer, we align the two images with SIFT flow and average them to obtain the final result.
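
The following is a minimal Python/OpenCV sketch of the first two steps described above (stereo matching on image gradients, then labelling edges by disparity). The file names, the StereoSGBM parameters, and the simple median-plus-threshold labelling rule are assumptions made for illustration; the thesis derives the labels from a more detailed analysis of edge disparity values.

# Hypothetical sketch: gradient-based stereo matching and disparity-based
# edge labelling, under the simplifying assumptions stated above.
import cv2
import numpy as np

def gradient_magnitude_u8(gray):
    """Sobel gradient magnitude rescaled to 8 bits for the stereo matcher."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    return cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Left/right views from the stereo camera (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Stereo matching on image gradients rather than raw intensities.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
disp = sgbm.compute(gradient_magnitude_u8(left),
                    gradient_magnitude_u8(right)).astype(np.float32) / 16.0

# Edge pixels of the left view with a valid disparity estimate.
edges = cv2.Canny(left, 50, 150) > 0
valid = edges & (disp > 0)

# Crude two-way split of edge pixels by disparity: edges whose disparity
# deviates from the dominant edge disparity are treated as candidate
# reflection edges. The threshold of 3 pixels is a placeholder.
dominant_disp = np.median(disp[valid])
reflection_edges = valid & (np.abs(disp - dominant_disp) > 3.0)
background_edges = valid & ~reflection_edges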

Abstract (English):
When taking photos, if there is glass or another reflective surface in the scene, reflections may appear in the photos. For photographers, these reflections can spoil the image they intended to capture. Thus, how to remove reflections has been a common problem in image processing.
In this thesis, we try to solve this problem using a pair of binocular images. As with human eyes, images taken by a stereo camera show different disparity values for objects at different depths. Since a reflection layer can be regarded as a virtual layer behind the reflective surface, it also has its own depth and disparity values. Using this stereo information, we can tell where the reflections are. We first apply stereo matching to the image gradients of both the left and right images, and then label image edges by analyzing their disparity values.
With the help of these labels, we perform layer separation to obtain the reflection layer and the background layer for both images. Finally, we use SIFT flow to align the two images and combine them to obtain the final result.
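
Below is a minimal sketch of the final alignment-and-averaging step. It assumes the background layers of the two views have already been separated (the placeholder files bg_left.png and bg_right.png), and it uses OpenCV's Farneback optical flow as a stand-in for the SIFT flow used in the thesis.

# Hypothetical sketch of the final combination step: align the separated
# background layers of the two views and average them.
import cv2
import numpy as np

bg_left = cv2.imread("bg_left.png", cv2.IMREAD_GRAYSCALE)
bg_right = cv2.imread("bg_right.png", cv2.IMREAD_GRAYSCALE)

# Dense flow from the left background layer to the right one
# (Farneback here; the thesis uses SIFT flow for this alignment).
flow = cv2.calcOpticalFlowFarneback(bg_left, bg_right, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Each left-view pixel samples the right layer at its flow-displaced
# position, which warps the right layer into the left view.
h, w = bg_left.shape
xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                     np.arange(h, dtype=np.float32))
aligned_right = cv2.remap(bg_right, xs + flow[..., 0], ys + flow[..., 1],
                          cv2.INTER_LINEAR)

# Average the aligned background layers to obtain the final result.
result = (bg_left.astype(np.float32) + aligned_right.astype(np.float32)) / 2.0
cv2.imwrite("result.png", result.astype(np.uint8))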

Chapter 1: Introduction (1)
Chapter 2: Background and Related Work (4)
Chapter 3: Method (8)
3.1 Stereo Matching (9)
3.2 Analyzing Disparity (12)
3.3 Removing Reflection (16)
3.4 Combination (17)
Chapter 4: Results (19)
4.1 Binocular images (19)
4.2 Images from reference dataset (23)
Chapter 5: Conclusion and Discussion (25)
Bibliography (27)

