臺灣博碩士論文加值系統

Researcher: 盧泓志
Researcher (English): HUNG-CHIH LU
Thesis Title: 深度相機的模糊偵測與去模糊
Thesis Title (English): Structured Light Depth Camera Motion Blur Detection and Deblurring
Advisor: 王傑智
Advisor (English): Chieh-Chih Wang
Oral Defense Committee: 胡竹生, 林文杰, 林惠勇
Oral Defense Date: 2014-12-30
Degree: Master's
University: 國立臺灣大學 (National Taiwan University)
Department: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering)
Discipline: Engineering
Academic Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Year of Publication: 2015
Graduation Academic Year: 103 (2014-2015)
Language: English
Number of Pages: 28
Keywords (Chinese): 深度相機, 結構光, 去模糊
Keywords (English): Depth Camera, Structured Light, Deblurring
Abstract (Chinese): Deblurring of 3D scenes captured by depth cameras is a novel topic in computer vision. Motion blur occurs in many 3D cameras based on structured light. We analyze the causes of motion blur in structured-light-based 3D cameras and design a novel method for deblurring 3D scenes. The idea is to replace the motion-blurred portion of the 3D scene with a model of the object. Because we process consecutive 3D frames, the object model can be built before the object becomes blurred. Our deblurring algorithm consists of two parts: motion blur detection and motion blur removal. In the detection part, we determine whether motion blur has occurred from the object's speed. In the removal part, we first judge the type of motion blur and then apply the iterative closest point (ICP) algorithm in different ways for each type. We run experiments on three sets of real data and successfully obtain deblurred results.

Abstract (English): Deblurring of 3D scenes captured by 3D sensors is a novel topic in computer vision. Motion blur occurs in a number of 3D sensors based on structured light techniques. We analyze the causes of motion blur captured by structured light depth cameras and design a novel algorithm that uses the speed cue and object models to deblur a 3D scene. The main idea is to use the 3D model of an object to replace the blurry object in the scene. Because we aim to deal with consecutive 3D frame sequences, i.e., 3D videos, an object model can be built from a frame in which the object is not yet blurred. Our deblurring method is divided into two parts: motion blur detection and motion blur removal. For the motion blur detection part, we use the speed cue to detect where the motion blur is. For the motion blur removal part, we first judge the type of the motion blur and then apply the iterative closest point (ICP) algorithm in different ways according to the motion blur type. The proposed method is evaluated on real-world cases and successfully accomplishes motion blur detection and blur removal.
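The abstracts above describe a two-stage pipeline: speed-based blur detection, followed by replacing the blurred object with a pre-built model whose pose is recovered by ICP. Below is a minimal sketch of that idea, assuming a centroid-based speed estimate, a basic hand-rolled point-to-point ICP, and NumPy/SciPy point-cloud arrays; the function names, threshold value, and substitution step are illustrative assumptions, not the thesis's actual implementation (which additionally distinguishes between motion blur types).

```python
# Illustrative sketch only: speed-cue blur detection + model substitution via ICP.
import numpy as np
from scipy.spatial import cKDTree


def detect_motion_blur(prev_points, curr_points, dt, speed_threshold=0.5):
    """Flag motion blur when the object's centroid moves faster than a threshold.

    prev_points, curr_points: (N, 3) object point clouds from consecutive depth
    frames; dt: frame interval in seconds. The threshold is an assumed value.
    """
    speed = np.linalg.norm(curr_points.mean(axis=0) - prev_points.mean(axis=0)) / dt
    return speed > speed_threshold, speed


def icp(source, target, iterations=30, tol=1e-6):
    """Basic point-to-point ICP; returns a 4x4 rigid transform (source -> target)."""
    src = source.copy()
    T = np.eye(4)
    prev_err = np.inf
    tree = cKDTree(target)
    for _ in range(iterations):
        dist, idx = tree.query(src)                 # nearest-neighbour correspondences
        matched = target[idx]
        # Best rigid transform via SVD (Kabsch), no scaling.
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                    # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T                                # accumulate the transform
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return T


def deblur_frame(blurred_points, object_model, scene_without_object, prev_points, dt):
    """Replace a blurred object with its pre-built model, posed by ICP."""
    is_blurred, _ = detect_motion_blur(prev_points, blurred_points, dt)
    if not is_blurred:
        return np.vstack([scene_without_object, blurred_points])
    T = icp(object_model, blurred_points)           # align the model to the blurred data
    aligned = object_model @ T[:3, :3].T + T[:3, 3]
    return np.vstack([scene_without_object, aligned])
```

The nearest-neighbour search plus SVD alignment inside the loop is the standard point-to-point ICP step; in practice a library implementation (e.g., Open3D's registration module) would likely be used instead of this hand-rolled version.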

CHAPTER 1. Introduction 1
CHAPTER 2. Related Work 3
CHAPTER 3. Motion Blur Detection 5
3.1. The Foundation of Structured Light 5
3.2. Causes of Motion Blur of Structured Light Depth Cameras 7
3.3. The Difference between Motion Blur in 2D Images and 3D Point Clouds 7
3.4. Our Blur Detection Method 12
CHAPTER 4. Deblurring 14
4.1. Building Object Model 14
4.2. Judge the Type of Motion Blur 14
4.3. Find the Correct Object Model Pose 17
CHAPTER 5. Experiment and Discussion 19
5.1. Experiment Setup 19
5.2. Experiment Results and Discussion 19
CHAPTER 6. Conclusion and Future Work 25
BIBLIOGRAPHY 27


