Author: 王思傑 (Szu-Chieh Wang)
Title: 基於生成對抗網路之極低光源影像品質增強
Title (English): Extreme Low Light Image Enhancement with Generative Adversarial Networks
Advisor: 莊永裕
Committee members: 吳賦哲, 葉正聖
Oral defense date: 2019-07-26
Degree: Master's
Institution: 國立臺灣大學 (National Taiwan University)
Department: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering)
Discipline: Engineering
Field: Electrical engineering and computer science
Document type: Academic thesis
Year of publication: 2019
Academic year of graduation: 107
Language: English
Pages: 25
Keywords (Chinese): 低光源影像 (low light image), 低光源影像品質增進 (low light image enhancement), 生成對抗網路 (generative adversarial network)
DOI: 10.6342/NTU201903501
Photos taken by today's cameras in low-light environments are often degraded by strong noise, so improving the quality of low-light images is an important and pressing problem. Meanwhile, deep learning methods have achieved great success in many fields in recent years, but the large number of parameters in deep models requires large, high-quality datasets for training and tuning, and collecting such datasets is time-consuming. For the low-light image enhancement task, the usual approach requires data in paired form: a short-exposure and a long-exposure photo captured of the same scene, used for supervised learning. However, capturing photos at two exposure times in the same scene requires constraining the scene content and keeping the scene under control for a long period, limitations that make suitable scenes scarce. In addition, existing deep learning methods replace the entire camera processing pipeline with a single model, so extra effort is needed to combine the strengths of both. Our method therefore aims to achieve low-light image enhancement from a dataset of burst-captured short-exposure photos together with long-exposure photos of different scenes, using two-stage training and a generative adversarial network to improve the quality of the output. Moreover, our method is designed to fit into the existing camera pipeline as a pre-processing stage before the original processing, so the parameters and flexibility of the existing camera pipeline are preserved. Under the same settings, our method achieves greater quality improvement than other methods.
Taking photos under low light environments is always a challenge for current imaging pipelines. Image noise and artifacts corrupt the image. Taking the recent great success of deep learning into consideration, it may be straightforward to train a deep convolutional network to perform enhancement on such images to restore the underlying clean image. However, the large number of parameters in deep models may require a large amount of data to train. For the low light image enhancement task, paired data requires a short exposure image and a long exposure image to be taken with perfect alignment, which may not be achievable in every scene, thus limiting the choice of possible scenes to capture paired data and increasing the effort to collect training data. Also, data-driven solutions tend to replace the entire camera pipeline and cannot be easily integrated into existing pipelines. Therefore, we propose to handle the task with our 2-stage pipeline, consisting of an imperfect denoise network and a bias correction net BC-UNet. Our method only requires noisy bursts of short exposure images and unpaired long exposure images, relaxing the effort of collecting training data. Also, our method works in the raw domain and is capable of being easily integrated into the existing camera pipeline. Our method achieves comparable improvements to other methods under the same settings.
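The abstract's claim that noisy bursts alone suffice for training rests on the Noise2Noise observation: with zero-mean noise, an L2 loss against a second noisy observation of the same scene has the same minimizer as a loss against the clean image. A minimal NumPy sketch of that idea, assuming a simple signal-dependent Gaussian raw-noise model (as in the thesis's synthetic comparisons); `shoot`, `gain`, and `read` are illustrative names, and the "denoiser" here is plain burst averaging rather than the thesis's trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clean raw patch; values in [0.1, 0.9] stand in for scene radiance.
clean = rng.uniform(0.1, 0.9, size=(64, 64))

def shoot(img, gain=0.02, read=0.01):
    """Simulate one short-exposure capture: signal-dependent (shot) noise
    with variance proportional to the signal, plus constant read noise."""
    var = gain * img + read ** 2
    return img + rng.normal(0.0, np.sqrt(var))

# Two frames from the same burst: identical clean signal, independent noise,
# i.e. a valid (input, target) pair for Noise2Noise-style training.
x1, x2 = shoot(clean), shoot(clean)

# Because the noise is zero-mean and independent across frames, the
# expected-L2-optimal prediction is the clean signal itself. Averaging a
# burst approaches it without any clean ground truth ever being observed.
burst = np.stack([shoot(clean) for _ in range(64)])
denoised = burst.mean(axis=0)

print(np.abs(x1 - clean).mean())        # error of a single noisy frame
print(np.abs(denoised - clean).mean())  # much smaller after burst averaging
```

A learned denoiser generalizes this to single-frame inference, but the averaging example shows why pairs of noisy frames carry enough supervision; the thesis's second stage (BC-UNet with unpaired adversarial training) then corrects the residual bias that a zero-mean noise assumption cannot capture.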
口試委員會審定書 i
誌謝 ii
摘要 iii
Abstract iv
1 Introduction 1
2 Related Work 3
2.1 Image Denoising 3
2.1.1 Supervised Image Denoising with Neural Networks 3
2.1.2 Training with Real-world Low Light Images 4
2.1.3 Training with only Noisy Observations 4
2.2 Generative Adversarial Networks 5
3 Methodology 6
3.1 Whole Pipeline 6
3.2 Stage 1 - Denoiser Trained with Noise2Noise 7
3.3 Stage 2 - Bias Correction with Unpaired Adversarial Training (BC-UNet) 8
3.4 Training Details 10
4 Experiment 11
4.1 SID Dataset 11
4.2 Comparison with Synthetic-based Methods 12
4.2.1 Signal Dependent Gaussian Noise 12
4.2.2 Generate Noise according to Prior Knowledge on Cameras 13
4.3 Comparison with Noise2Noise 13
4.4 Experiment Settings 14
4.5 PSNR Results 15
4.5.1 Raw PSNR Results 16
4.5.2 JPG PSNR Results 16
4.5.3 Results of Noise2Truth 17
4.5.4 Ablation Study on Global Information 18
4.6 Qualitative Results 18
5 Conclusion 22
Bibliography 23
[1] T. Brooks, B. Mildenhall, T. Xue, J. Chen, D. Sharlet, and J. T. Barron. Unprocessing images for learned raw denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 11036–11045, 2019.
[2] A. Buades, B. Coll, and J.-M. Morel. A non-local algorithm for image denoising. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 2, pages 60–65. IEEE, 2005.
[3] C. Chen, Q. Chen, J. Xu, and V. Koltun. Learning to see in the dark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3291–3300, 2018.
[4] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. Image denoising with block-matching and 3D filtering. In Image Processing: Algorithms and Systems, Neural Networks, and Machine Learning, volume 6064, page 606414. International Society for Optics and Photonics, 2006.
[5] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[6] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville. Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems, pages 5767–5777, 2017.
[7] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[8] G. E. Healey and R. Kondepudy. Radiometric CCD camera calibration and noise estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(3):267–276, 1994.
[9] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pages 6626–6637, 2017.
[10] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1125–1134, 2017.
[11] A. Jolicoeur-Martineau. The relativistic discriminator: a key element missing from standard GAN. arXiv preprint arXiv:1807.00734, 2018.
[12] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[13] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4681–4690, 2017.
[14] J. Lehtinen, J. Munkberg, J. Hasselgren, S. Laine, T. Karras, M. Aittala, and T. Aila. Noise2Noise: Learning image restoration without clean data. arXiv preprint arXiv:1803.04189, 2018.
[15] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. Paul Smolley. Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 2794–2802, 2017.
[16] B. Mildenhall, J. T. Barron, J. Chen, D. Sharlet, R. Ng, and R. Carroll. Burst denoising with kernel prediction networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2502–2510, 2018.
[17] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957, 2018.
[18] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[19] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing, 26(7):3142–3155, 2017.