National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 施彥安
Author (English): Yen-An Shih
Title (Chinese): 使用具有空洞卷積的殘差網路於刑案現場鞋印分類
Title (English): Crime Scene Shoeprint Classification Using Residual Network with Atrous Convolution
Advisors: 郭淑美, 連震杰
Advisors (English): Shu-Mei Guo, Jenn-Jier Lien
Degree: Master's
Institution: National Cheng Kung University
Department: Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2019
Graduation Academic Year: 107 (2018-2019)
Language: English
Pages: 75
Keywords (Chinese): 司法科學、深度殘差網路、鞋印辨識、影像分類、空洞卷積、增量學習
Keywords (English): Forensic Science, Deep Residual Network, Shoeprint Recognition, Image Classification, Atrous Convolution, Incremental Learning
Usage statistics:
  • Cited by: 0
  • Views: 203
  • Downloads: 0
  • Bookmarked: 0
Shoeprints have always been important evidence at crime scenes, especially in burglary and theft cases. To help the police automatically find similar shoeprints in a large database and discover correlations between cases, this thesis proposes a convolutional neural network architecture for shoeprint classification. Because shoeprint images may suffer from occlusion, similar patterns, and unclear edges that make classification difficult, the architecture combines a residual network with atrous convolution, which enlarges the receptive field of the filters and captures more global feature information, thereby improving classification accuracy. In addition, the proposed bounding box-based classification network outperforms a general image-based network because dividing each image into many bounding boxes increases the amount of training data, and it trains faster than a patch-based network because it spends less time on feature extraction.
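For illustration, the following is a minimal sketch of a residual block whose 3×3 convolutions use atrous (dilated) convolution, written in PyTorch. The class name AtrousResidualBlock, the channel count, and the dilation rate of 2 are assumptions of this sketch and do not reproduce the exact architecture in the thesis.

# Minimal sketch: residual block with atrous (dilated) 3x3 convolutions.
# Channel sizes and dilation rate are illustrative, not the thesis's values.
import torch
import torch.nn as nn


class AtrousResidualBlock(nn.Module):
    """Residual block whose 3x3 convolutions are dilated, enlarging the
    receptive field without extra parameters or loss of resolution."""

    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        # padding=dilation keeps the spatial size unchanged, so the
        # identity skip connection can be added directly.
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3,
                               padding=dilation, dilation=dilation, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3,
                               padding=dilation, dilation=dilation, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)  # ResNet skip connection


if __name__ == "__main__":
    block = AtrousResidualBlock(channels=64, dilation=2)
    feature_map = torch.randn(1, 64, 56, 56)  # dummy shoeprint feature map
    print(block(feature_map).shape)           # torch.Size([1, 64, 56, 56])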
After completing the classification network, we also study incremental learning and apply it to the shoeprint classification model. If the shoeprint database is later extended with more classes, the question is how to make the model recognize the new classes without retraining the network from scratch. Because new shoe models appear continually, the cost of maintaining the database and the classification model must be considered. The incremental learning algorithm in this thesis can train a new recognition network even when the data of the original classes are unavailable, so that the network recognizes shoeprints of both the old and the new classes. We conduct experiments on this algorithm and discuss its feasibility.
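As a rough illustration of how distillation lets a network keep old classes without their data, here is a minimal sketch of a combined cross-entropy and distillation loss in PyTorch: ground-truth labels supervise all classes, while the frozen original network's soft outputs supervise the old-class logits. The function name incremental_loss, the temperature, and the loss weight are assumptions of this sketch and do not reproduce the biased distillation formulation described in the thesis.

# Minimal sketch: cross-entropy on all classes plus distillation on old classes.
import torch
import torch.nn.functional as F


def incremental_loss(new_logits: torch.Tensor,
                     old_logits: torch.Tensor,
                     labels: torch.Tensor,
                     num_old_classes: int,
                     temperature: float = 2.0,
                     distill_weight: float = 1.0) -> torch.Tensor:
    """new_logits: (N, old+new) outputs of the network being trained.
    old_logits: (N, old) outputs of the frozen original network.
    labels:     (N,) ground-truth indices over all old+new classes."""
    # Ordinary cross-entropy teaches the new classes (and any old-class labels).
    ce = F.cross_entropy(new_logits, labels)

    # Distillation: the new network's old-class slots should match the
    # soft targets produced by the old network.
    soft_targets = F.softmax(old_logits / temperature, dim=1)
    log_probs = F.log_softmax(new_logits[:, :num_old_classes] / temperature, dim=1)
    distill = F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2

    return ce + distill_weight * distill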
摘要 (Chinese Abstract)
Abstract
誌謝 (Acknowledgements)
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Motivation
1.2 Related Works
1.3 Contribution
Chapter 2 Shoeprint Classification Using Residual Network with Atrous Convolution
2.1 Framework of Shoeprint Classification Network
2.2 Bounding Box Creation
2.3 Residual Network with Atrous Convolution
2.3.1 ResNet
2.3.2 Atrous Convolution
2.4 Bounding Box Dimensionality Reduction
2.4.1 Crop and Resize
2.4.2 1×1 Convolution
2.5 Shoeprint Classification
Chapter 3 Shoeprint Recognition Using Incremental Learning
3.1 Shoeprint Recognition Using Fast R-CNN: Training
3.1.1 Region Proposal Generation
3.1.2 Fast R-CNN Network A
3.1.3 Training Process of Fast R-CNN
3.2 Shoeprint Recognition Using Incremental Learning: Training
3.2.1 Fast R-CNN Network B
3.2.2 Biased Distillation
3.2.3 Distillation Loss
3.3 Framework of Shoeprint Recognition Using Fast R-CNN: Inference
Chapter 4 Experimental Results
4.1 Data Collection
4.2 Experimental Results of Shoeprint Classification
4.2.1 Experimental Results
4.2.2 Analysis
4.2.3 Unseen Data
4.3 Experimental Results of Incremental Learning
Chapter 5 Conclusion, Discussion and Future Work
5.1 Conclusion
5.2 Discussion
5.3 Future Work
References