Taiwan National Digital Library of Theses and Dissertations (臺灣博碩士論文加值系統)
Detailed Thesis Record
Author: 黃依凡
Title: 基於深度學習之低解析度文字辨識
Title (English): Recognition of low resolution text using deep learning approach
Advisor: 廖文宏 (Liao, Wen-Hung)
Committee members: 廖峻鋒、江佩穎
Degree: Master's
Institution: 國立政治大學 (National Chengchi University)
Department: 資訊科學學系 (Computer Science)
Discipline: Engineering
Field: Electrical and Information Engineering
Document type: Academic thesis
Academic year of graduation: 105 (2016–17)
Language: Chinese
Pages: 66
Keywords (Chinese): 文字辨識、低解析度、卷積神經網路
Keywords (English): Text recognition; Convolutional neural networks; Low resolution
Usage statistics:
  • Cited by: 1
  • Views: 712
  • Downloads: 0
  • Bookmarked: 1
Abstract (translated from Chinese):
This thesis addresses a well-studied problem in computer vision: optical character recognition. Our focus, however, is a very particular class of images: printed Chinese characters with very low resolution and a large amount of distortion and interference. Although convolutional neural networks can already recognize high-resolution printed or handwritten text reliably, very low-quality printed Chinese text still presents several challenges that require further study. Specifically, our dataset consists of 31,570 character images produced by dot-matrix printers, including blurred characters, characters with missing strokes, and characters overlapping with other characters or graphics. To address these difficulties effectively, we experimented with different deep neural network architectures and hyperparameters, arriving at the configuration with the best recognition performance. On 1,530 classes of images with an average resolution of 16x18 pixels, the top-1 and top-5 accuracies are 71% and 87%, respectively.
Abstract (English):
Recent advances in deep neural networks have changed the landscape of computer vision and pattern recognition research significantly. Convolutional neural networks (CNN), for example, have demonstrated outstanding capabilities in image classification, in many cases exceeding human performance. Many tasks that did not get satisfactory results using conventional machine learning approaches are now being actively re-examined using deep learning techniques.

This thesis is concerned with a well-investigated topic in computer vision, namely, optical character recognition (OCR). Our main focus, however, is a very specific class of input: printed Chinese texts with very low resolution and a significant amount of distortion/interference. Whereas the recognition of high-resolution texts, either printed or handwritten, has been successfully tackled using convolutional neural networks, the analysis of very low-quality printed Chinese texts poses several challenges that require further study. Specifically, our dataset consists of 31,570 text images generated with dot-matrix printers, including blurred texts, texts with missing strokes, and texts overlapping with other texts or graphs. To effectively address these difficulties, we have experimented with different deep neural networks with various combinations of network architectures and hyperparameters. The results are reported and discussed in order to obtain an optimal setting for the recognition task. The top-1 and top-5 accuracies are 71% and 87%, respectively, for input images with an average resolution of 16x18 pixels belonging to 1,530 classes.
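The top-1 and top-5 accuracies reported above can be computed as in this minimal NumPy sketch. The function name and the toy data are illustrative assumptions, not the thesis's actual evaluation code (the thesis itself works with Caffe):

```python
import numpy as np

def top_k_accuracy(logits, labels, k=1):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    # argsort ascending along the class axis, then take the last k columns:
    # these are the indices of the k largest scores per sample.
    top_k = np.argsort(logits, axis=1)[:, -k:]
    hits = np.any(top_k == labels[:, None], axis=1)
    return float(np.mean(hits))

# Toy check: 3 samples, 4 classes.
logits = np.array([
    [0.1, 0.7, 0.1, 0.1],  # predicted class 1
    [0.5, 0.1, 0.3, 0.1],  # predicted class 0, runner-up class 2
    [0.2, 0.3, 0.4, 0.1],  # predicted class 2
])
labels = np.array([1, 2, 2])
print(top_k_accuracy(logits, labels, k=1))  # 0.666... (2 of 3 correct)
print(top_k_accuracy(logits, labels, k=2))  # 1.0 (sample 2's label is its runner-up)
```

Top-5 accuracy is the natural second metric for a 1,530-class problem, since visually confusable characters often land in the top few predictions even when top-1 misses.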
Table of Contents:
Chapter 1  Introduction
  1.1 Background and Motivation
  1.2 Objectives
  1.3 Thesis Organization
Chapter 2  Technical Background and Related Work
  2.1 Background and Breakthroughs of Deep Learning
  2.2 Overview of CNNs
  2.3 Related Work
Chapter 3  Datasets
  3.1 Invoice Test Set
  3.2 CASIA-HWDB1.1
  3.3 Tesseract Dataset
Chapter 4  Methodology and Architecture
  4.1 Deep Learning Tools and Environment
  4.2 CNN Architecture
  4.3 Caffe Solver
  4.4 Experimental Procedure
Chapter 5  Experiments and Analysis
  5.1 Experiment 1: CASIA Dataset
  5.2 Experiment 2: Adding Variations
  5.3 Experiment 3: Padding with 4 Pixels
  5.4 Experiment 4: Random Padding
  5.5 Experiment 5: Modifying Character Brightness
  5.6 Experimental Results
Chapter 6  Conclusions and Future Work
References
Appendix
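The augmentation experiments outlined in Chapter 5 (fixed 4-pixel padding, random padding, brightness modification) can be sketched roughly as follows. The function names, the white fill value, and the parameter ranges are illustrative assumptions, not the thesis's actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def pad_fixed(img, pad=4, fill=255):
    """Pad a grayscale glyph with `pad` background pixels on every side (Experiment 3)."""
    return np.pad(img, pad, mode="constant", constant_values=fill)

def pad_random(img, max_pad=4, fill=255):
    """Pad each side by an independent random amount in [0, max_pad] (Experiment 4)."""
    t, b, l, r = rng.integers(0, max_pad + 1, size=4)
    return np.pad(img, ((t, b), (l, r)), mode="constant", constant_values=fill)

def adjust_brightness(img, factor):
    """Scale pixel intensities, clipping to the valid 8-bit range (Experiment 5)."""
    return np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)

# A random 16x18-pixel "character" matching the average resolution in the abstract.
glyph = rng.integers(0, 256, size=(18, 16), dtype=np.uint8)
print(pad_fixed(glyph).shape)               # (26, 24)
print(adjust_brightness(glyph, 0.8).dtype)  # uint8
```

Such perturbations expand the effective training set and make the classifier less sensitive to the exact placement and contrast of the low-resolution character within its crop.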