臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)
Detailed Record

Author: 翁浚瑞
Author (English): Jyun-Ruei Wong
Title: 使用三維 N3Net 和細節水平圖作影片去噪
Title (English): Video Denoising using 3D N3Net and Detail-Level Map
Advisor: 莊永裕
Advisor (English): Yung-Yu Chuang
Oral examination committee: 葉正聖、吳賦哲
Oral examination committee (English): Jeng-Sheng Yeh, Fu-Che Wu
Oral defense date: 2021-01-21
Degree: Master's
University: 國立臺灣大學 (National Taiwan University)
Department: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia)
Discipline: Computer Science
Academic field: Networking
Thesis type: Academic thesis
Year of publication: 2021
Graduation academic year: 109 (2020-21)
Language: English
Number of pages: 25
Keywords (Chinese): 深度學習、影像去雜訊、影片去雜訊
Keywords (English): Deep Learning, Image Denoising, Video Denoising
DOI: 10.6342/NTU202100123
Statistics:
  • Cited by: 0
  • Views: 94
  • Rating: (none)
  • Downloads: 0
  • Bookmarked: 0
Abstract (translated from the Chinese): Noise is an unavoidable problem in photography due to hardware limitations, and many denoising methods have been developed to deal with it. One of them, video denoising, uses the other frames of a video to help denoise each frame. This thesis proposes a model that uses N3Net as its backbone and extends that concept to the multi-image denoising problem. In addition, we train a sub-model to learn a so-called detail-level map: the detail-level map is to the image what the noise-level map is to the noise. The full model then uses the detail-level map together with the original frames to predict the final result. With the 3D N3Net we obtain visual quality comparable to previous work, and with a near-ground-truth detail-level map we obtain further improvements.
Noise is an inevitable problem in photography due to hardware limitations. To tackle it, researchers have developed various kinds of denoising methods. One of these methods, so-called video denoising, uses neighboring frames of a video to help denoise each frame. In this thesis, we use N3Net, which leverages neighboring patches to aid denoising, as our backbone, and extend its concept to the multi-image denoising problem. Furthermore, we train another sub-model to learn a so-called detail-level map of the image, analogous to the noise-level map of the noise in photography terms. Finally, we use both the detail-level map and the original frames to predict the denoised result. We show that with 3D N3Net we achieve visual quality similar to state-of-the-art methods, and that with a close-to-ground-truth detail-level map we can further improve the results.
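The neighbor-patch matching that N3Net is built on can be illustrated with a short NumPy sketch of its core idea: a continuous (softmax) relaxation of k-nearest-neighbor selection over patch features. This is an illustrative sketch under stated assumptions, not the thesis's implementation: the function name `soft_knn_aggregate`, the `temperature` parameter, and the flat feature vectors are all invented for illustration, and in the actual N3Net the distances are computed on learned patch embeddings with a network-predicted temperature.

```python
import numpy as np

def soft_knn_aggregate(query, database, k=3, temperature=1.0):
    """Soft k-nearest-neighbour aggregation over patch features.

    query:    (d,) feature vector of the patch being denoised
    database: (n, d) candidate patch features (e.g. patches drawn from
              neighbouring frames of the video)
    Returns a (k, d) array of k weighted averages of the database patches,
    where each weight vector is a softmax over negative squared distances,
    a continuous relaxation of picking the k hard nearest neighbours.
    """
    d2 = np.sum((database - query) ** 2, axis=1)   # squared L2 distances
    logits = -d2 / temperature
    neighbours = []
    for _ in range(k):
        w = np.exp(logits - logits.max())          # numerically stable softmax
        w /= w.sum()
        neighbours.append(w @ database)            # soft nearest neighbour
        logits[np.argmax(logits)] = -np.inf        # exclude it from next round
    return np.stack(neighbours)
```

As the temperature approaches zero, the soft neighbors collapse to the hard nearest neighbors; keeping it finite makes the selection differentiable, which is what allows the matching step to be trained end-to-end inside a denoising network.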
Verification Letter from the Oral Examination Committee i
Acknowledgements ii
摘要 (Chinese Abstract) iii
Abstract iv
Contents v
List of Figures vii
List of Tables viii
Chapter 1 Introduction 1
Chapter 2 Related Work 4
2.1 Denoising Models of Synthetic Noise 5
2.2 Denoising Models of Real Noise 5
2.3 Burst Denoising Models 6
2.4 Video Denoising Models 6
2.5 Synthetic Noise Datasets 6
2.6 Real Noise Datasets 7
Chapter 3 Method 8
3.1 Overall Architecture 8
3.2 Preliminary of N3Net 8
3.3 3D N3Net 10
3.4 Detail-Level Map 12
Chapter 4 Experiments and Discussion 14
4.1 Dataset 14
4.2 Implementation Details 14
4.3 Result 15
4.4 Ablation Study 16
Chapter 5 Conclusion 19
References 20