
臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)


Detailed Record

Author: 管雲平
Author (English): Yun-Ping Kuan
Title: 基於SIFT特徵點引導修補方向之視訊修補演算法
Title (English): Video Inpainting Based on SIFT Feature Point to Modify Repairing Direction
Advisor: 郭天穎
Oral defense committee: 陳煥, 蘇柏齊
Defense date: 2012-07-23
Degree: Master's
Institution: National Taipei University of Technology
Department: Graduate Institute of Electrical Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Publication year: 2012
Graduation academic year: 100 (ROC calendar)
Language: Chinese
Pages: 71
Keywords (Chinese): Haar wavelet transform, image inpainting, scale-invariant feature transform, motion vector
Keywords (English): Haar wavelet transform, image inpainting, scale-invariant feature transform, motion vector, entropy
Statistics:
  • Cited: 0
  • Views: 158
  • Rating:
  • Downloads: 0
  • Bookmarked: 0
Video inpainting repairs damaged parts of video frames and removes unwanted artificial effects. After this process, the video looks complete again, as if it had never been edited. A video inpainting algorithm can basically be viewed as extending a single-frame image inpainting algorithm to the many frames of a video. Image inpainting divides into two kinds of technique, the traditional kind that repairs the frame directly and the kind that first computes a repair priority, and video inpainting shows the same division. Traditional methods that directly take information from neighboring frames usually preserve temporal continuity at the cost of spatial consistency, which blurs the frame. Methods that first classify the frame content and then repair the different content types in order can preserve spatial consistency within the repaired frame, but the classification mechanism is hard to adapt to all kinds of video, so a classification error severely degrades the repair performance.
This thesis proposes a simple classification mechanism that divides frame content into different types, but unlike the related literature, our repair order is not determined by the content type; instead, a scheduling mechanism decides the repair priority of each type. In addition, the entropy of the motion field serves as a temporal-domain reference in the priority decision, and feature information from different scale spaces guides the repair direction, so our results maintain both good temporal continuity and spatial consistency and adapt to many kinds of frames. Tests on actual television programs and on the video clips used in earlier literature confirm that the proposed video inpainting algorithm outperforms the traditional methods.


Video inpainting is a technique for repairing damaged parts of video clips or removing unwanted artificial post-production effects. After inpainting, the video should look intact, as if it had never been altered. Video inpainting is basically the extension of image inpainting from a single frame to multiple frames. Image inpainting techniques fall into two categories, direct repair and priority-based repair, and video inpainting inherits the same division. Direct-repair video inpainting takes information from adjacent frames as the source to maintain the temporal continuity of the repaired frame, but it can introduce spatial inconsistencies. The priority-based approach classifies the contents of video frames into types and assigns each type a different repair priority. Although this approach can maintain spatial consistency in most frames, its classification mechanism is difficult to adapt to different kinds of video, and a classification error degrades the repair performance.
In this paper, we propose a video inpainting technique with a simple mechanism that divides the contents of a video frame into different types. Unlike previous works, which process each type in a fixed order, we do not tie the content type strictly to the repair order; instead, we implement a scheduling mechanism for the priority. Furthermore, we use the entropy of the motion field as a reference for the priority mechanism, bringing temporal-domain information in to improve the consistency of the repaired frame, and we use robust SIFT feature points to guide the repair direction in the spatial domain. As a result, our repairs maintain good continuity in both the temporal and spatial domains and adapt to a variety of videos. Experiments on video sequences from television broadcasts and from existing works show that the proposed method outperforms prior methods.
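The abstract's idea of using motion-field entropy as a temporal-domain priority cue can be sketched roughly as follows. This is a minimal illustration, not the thesis implementation: it assumes a block-based motion field stored as an (H, W, 2) array of (dx, dy) vectors, and the 16-bin orientation histogram is an arbitrary illustrative choice. A coherent field (e.g. a camera pan) yields low entropy, while disordered motion yields high entropy, which could then steer the repair scheduling.

```python
import numpy as np

def motion_field_entropy(motion_field, bins=16):
    """Shannon entropy of motion-vector orientations in a frame region.

    motion_field: (H, W, 2) array of per-block motion vectors (dx, dy).
    Low entropy suggests coherent motion; high entropy suggests
    disordered motion.
    """
    dx = motion_field[..., 0].ravel()
    dy = motion_field[..., 1].ravel()
    angles = np.arctan2(dy, dx)               # orientation of each vector
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    p = hist / hist.sum()                     # normalize to a distribution
    p = p[p > 0]                              # drop empty bins (0*log 0 = 0)
    return float(-(p * np.log2(p)).sum())

# Coherent field: every block moves the same way, so entropy is 0.
coherent = np.tile(np.array([1.0, 0.0]), (8, 8, 1))
print(motion_field_entropy(coherent))

# Disordered field: entropy approaches log2(bins) = 4 for 16 bins.
rng = np.random.default_rng(0)
random_field = rng.normal(size=(8, 8, 2))
print(motion_field_entropy(random_field))
```

A priority scheduler along the lines described could then rank candidate repair regions by this value, repairing low-entropy (predictable-motion) regions from temporal neighbors first.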

Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Tables
List of Figures
Chapter 1 Introduction
1.1 Motivation and Objectives
1.2 Methodology
1.3 Contributions
1.4 Thesis Organization
Chapter 2 Background
2.1 Literature Review of Image Inpainting
2.1.1 Traditional Image Inpainting Methods
2.1.2 Exemplar-Based Image Inpainting
2.2 Literature Review of Video Inpainting
2.2.1 Spatio-Temporal Patch Search
2.2.2 Inpainting Based on Object/Background Segmentation
2.2.3 Inpainting Based on Structure/Texture Segmentation
2.2.4 Summary of Related Work
Chapter 3 SIFT Feature Extraction
3.1 Scale-Space Extrema Detection
3.2 Keypoint Localization
3.2.1 Eliminating Low-Contrast Keypoints
3.2.2 Removing Edge Keypoints
3.3 Orientation Assignment
3.4 Keypoint Descriptor
Chapter 4 Proposed Method
4.1 Improved Image Inpainting Algorithm
4.1.1 Repair-Point Assignment
4.1.2 Repair-Point Selection
4.1.3 Similar-Region Search
4.1.4 Inpainting with Non-Fixed Window Shapes
4.1.5 Parameter Update
4.2 Temporal Extension of the Improved Image Inpainting
4.2.1 Motion Estimation
4.2.2 Repair-Point Assignment
4.2.3 Repair-Point Selection
4.2.4 Similar-Region Search
4.2.5 Patching Based on Luminance Blending
4.2.6 Parameter Update
4.3 Video Inpainting Guided by SIFT Feature Points
4.3.1 Feature-Point Labeling
4.3.2 SIFT Video Inpainting Algorithm
Chapter 5 Experimental Results and Discussion
5.1 Experimental Environment
5.2 Comparison of Results
5.2.1 Image Inpainting Results
5.2.2 Video Inpainting Results (the method of Section 4.3)
5.2.3 Comparison of Our Proposed Methods
Chapter 6 Conclusion
References



