Researcher: 邱郁閔
Researcher (English): Chiu, Yu-Min
Thesis Title: 不對稱輕量指紋恢復網絡加入多階段審查多層特徵與相關性的知識蒸餾
Thesis Title (English): Asymmetric Lightweight Network for Fingerprint Restoration with Multi-Stage Multi-layer Relation Knowledge Distillation
Advisor: 邱瀞德
Advisor (English): Chiu, Ching-Te
Committee Members: 劉權毅, 林嘉文
Oral Defense Date: 2023-08-30
Degree: Master's
Institution: 國立清華大學 (National Tsing Hua University)
Department: 通訊工程研究所 (Institute of Communications Engineering)
Discipline: Engineering
Field: Electrical and Information Engineering
Document Type: Academic thesis
Year of Publication: 2023
Graduation Academic Year: 112
Language: English
Number of Pages: 42
Keywords (Chinese): 指紋驗證、指紋辨識、影像強化、模型壓縮、知識蒸餾
Keywords (English): Fingerprint authentication, Fingerprint recognition, Image enhancement, Convolutional Neural Network, Model Compression, Knowledge Distillation
Fingerprint recognition systems are widely used for identity authentication. To ensure the accuracy of a fingerprint recognition system, high-quality fingerprint images are crucial. However, in many cases the quality of fingerprints may degrade; for example, low-temperature environments can produce blurry or fragmented fingerprints. We therefore employ deep learning models to restore the quality of fingerprints. Additionally, to provide a good user experience, our goal is to keep the model's inference time below 0.1 seconds.
To achieve this goal, we design an asymmetric lightweight network for fingerprint restoration. In deep learning, however, performance is often closely tied to model size, with larger models typically achieving better performance. Therefore, to improve the performance of our compact model, we apply multi-layer feature relation knowledge distillation. Because this method transfers relations computed from multi-layer features, it allows the student model to learn the teacher model's information more effectively, and it is therefore well suited to cases where there are significant structural differences between the student and teacher models.
The results show that our model achieves an inference time of less than 0.1 seconds and outperforms current state-of-the-art models, with a 50% relative improvement in Equal Error Rate (EER) on the FVC2002 and FVC2004 databases.
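
The relation-transfer idea described above can be illustrated with a short sketch: pairwise similarity matrices are computed from intermediate feature maps of the teacher and the student and matched with an L2 loss at several layers, so that what is transferred are the relations between samples rather than the raw features, which need not have matching shapes. This is a minimal, hypothetical PyTorch sketch in the spirit of similarity-preserving relation distillation; the function names, layer pairing, and loss weight are assumptions and do not reproduce the exact loss functions defined in Chapter 3 of the thesis.

```python
# Hypothetical sketch of multi-layer relation knowledge distillation:
# relation (similarity) matrices from intermediate features of a teacher
# and a student are matched layer by layer with an MSE loss.
import torch
import torch.nn.functional as F

def relation_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine-similarity matrix over the batch for one feature map.

    feat: (B, C, H, W) activations from one layer. Flattening each sample
    yields a (B, B) matrix that depends only on relations between samples,
    so teacher and student layers may have different channel counts.
    """
    b = feat.size(0)
    flat = F.normalize(feat.reshape(b, -1), dim=1)  # unit-norm rows
    return flat @ flat.t()                          # (B, B) similarities

def multilayer_relation_kd_loss(teacher_feats, student_feats, weight=1.0):
    """MSE between teacher and student relation matrices, summed over layers."""
    loss = torch.zeros((), device=student_feats[0].device)
    for t_f, s_f in zip(teacher_feats, student_feats):
        loss = loss + F.mse_loss(relation_matrix(s_f),
                                 relation_matrix(t_f).detach())
    return weight * loss

# Usage sketch: collect intermediate activations from the frozen teacher and
# the lightweight student, then add the relation term to the restoration loss.
# kd_loss = multilayer_relation_kd_loss([t1, t2, t3], [s1, s2, s3], weight=0.1)
# total_loss = restoration_loss + kd_loss
```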
Acknowledgements
摘要 (Chinese Abstract)
Abstract
1 Introduction
  1.1 Background and Motivation
  1.2 Goal
  1.3 Contribution
2 Related Work
  2.1 Fingerprint Image Enhancement
  2.2 Knowledge Distillation
3 Method
  3.1 Multi-stage Multi-layer Relation Knowledge Distillation on Asymmetrical Lightweight Architecture
    3.1.1 Relation Knowledge Distillation in the 1st Training Stage
    3.1.2 Multi-layer Relation Knowledge Distillation in the 2nd Training Stage
  3.2 Teacher Model: Fingerprint Restoration Using M-Net and Attention Module with Ridge Map Assistance (FMANet)
    3.2.1 Ridge Prediction Extractor
    3.2.2 FMANet Ridge Prediction Extractor Loss Function
    3.2.3 FMANet Ridge Restorer
    3.2.4 FMANet Ridge Restorer Loss Function
  3.3 Student Model: Asymmetrical Lightweight Architecture for Fingerprint Restoration (ALNet)
    3.3.1 ALNet Ridge Prediction Extractor
    3.3.2 ALNet Ridge Prediction Extractor Loss Function
    3.3.3 ALNet Ridge Restorer
    3.3.4 ALNet Ridge Restorer Loss Function
4 Experiment
  4.1 Dataset
    4.1.1 Normal Synthesized (NS) Dataset
    4.1.2 Normal Low (NL) Dataset
  4.2 Implementation Details
  4.3 Ablation Study
    4.3.1 Effect of the FMANet ACE Module
    4.3.2 Effect of the FMANet Orientation Branch
    4.3.3 Layer Selection of L_{Relation-s1}
    4.3.4 Effect of ALNet L_{relation-s1}
    4.3.5 Effect of ALNet Relation Knowledge Distillation
    4.3.6 Comparison with the State-of-the-Art (SOTA)
5 Conclusion
References