
National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 林亭彣
Author (English): Ting-Wen Lin
Title: 以雙重卷積神經網路實現容易更換前導者的跟隨自走車
Title (English): A double convolutional neural network for an automatic following navigation vehicle with easily changing guider
Advisor: 曾定章
Advisor (English): Din-Chang Tseng
Degree: Master
Institution: National Central University
Department: Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Publication year: 2017
Graduation academic year: 105 (2016-2017)
Language: Chinese
Pages: 64
Keywords: convolutional neural network, deep learning, pedestrian detection, guider identification
Usage statistics:
  • Cited by: 1
  • Views: 476
  • Downloads: 133
Self-propelled vehicles have become a popular research topic in recent years; autonomous driving is expected to reduce the burden on users and the number of traffic accidents. Small, slow-moving self-propelled vehicles have a wide range of applications and retain space for carrying goods, so we combine such a vehicle with computer vision techniques to let it automatically detect and follow a specific pedestrian. In this study, we develop a self-propelled vehicle that follows a guider, to assist with goods delivery, regional shopping, sightseeing tours, and similar tasks. Because these applications may require the guider to be replaced frequently, the system must support fast guider replacement to be convenient in practice.
The core of the system consists of two parts. The first is a pedestrian detection system that finds all pedestrians who could be the guider; the second is a guider identification system that compares each detected pedestrian with the guider and picks out the true guider. Pedestrian detection is difficult because it is affected by body pose, environmental changes, and other variations, and traditional machine learning methods have performed poorly on it. We therefore implement pedestrian detection with deep learning, using a convolutional neural network to extract pedestrian features that tolerate these variations and thus improve detection accuracy. The guider identification system also uses a convolutional neural network; after offline training, it can compare detected pedestrians online against a guider it never saw during training.
In the experiments, we tested on videos captured on campus and in the laboratory building. For pedestrian detection, the detection rate reaches 94% with a false positive rate of 4 × 10⁻⁷; for guider identification, the recognition accuracy on 2,200 test images reaches 94%.
Self-propelled vehicles have been a popular research topic in recent years; research on them aims to reduce human effort and traffic accidents. A small, slow self-propelled vehicle has a wide range of uses and retains space for carrying goods. We therefore integrate such a vehicle with computer vision so that it can automatically detect and follow a specific pedestrian. In this paper, we develop an automatic guider-following vehicle that can be used for delivery services, regional shopping, sightseeing tours, and similar applications. Furthermore, because these applications may require the guider to be switched frequently, we propose a convenient, fast, and robust mechanism for guider replacement.
The proposed system consists of two parts: a pedestrian detection system that finds the image coordinates of pedestrians, and a guider identification system that compares the detected pedestrians with the pre-defined guider. Detecting pedestrians in varied environments is difficult, so we use a deep learning technique for pedestrian detection: a convolutional neural network extracts pedestrian features that adapt to these variations and improves the detection rate. The guider identification system uses a second convolutional neural network to compare each detected pedestrian with the pre-defined guider and identify the unique target pedestrian.
In the experiments, we test on several videos captured on campus streets and in a building lobby. The pedestrian detection system reaches a detection rate of 94% with a false positive rate of only 4 × 10⁻⁷. With the trained guider identification network, the recognition accuracy on 2,200 test images also reaches 94%.
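
As an illustration of the two-part design described in the abstract, the following Python sketch composes the two stages at inference time: a detector proposes pedestrian boxes, one branch of a Siamese-style network embeds each cropped pedestrian, and the crop whose embedding lies closest to the stored guider embedding is taken as the guider. This is an illustrative sketch only; the names detect_pedestrians, embed, crop, and find_guider are hypothetical stand-ins, not interfaces from the thesis.

# Minimal sketch of the two-stage following pipeline (illustrative only).
# detect_pedestrians and embed are hypothetical stand-ins for the thesis's
# SSD-based detector and Siamese identification network.

import numpy as np

def detect_pedestrians(frame):
    """Return a list of (x, y, w, h, score) pedestrian boxes (detector stand-in)."""
    raise NotImplementedError  # e.g. an SSD-style CNN detector

def embed(patch):
    """Return a feature vector for an image patch (Siamese-branch stand-in)."""
    raise NotImplementedError  # one branch of a Siamese CNN, trained offline

def crop(frame, box):
    """Cut the pedestrian patch out of the frame."""
    x, y, w, h, _ = box
    return frame[y:y + h, x:x + w]

def find_guider(frame, guider_embedding, max_dist=1.0):
    """Pick the detected pedestrian whose embedding is closest to the guider's.

    Returns None when no detection is similar enough, i.e. the guider
    is not visible in this frame.
    """
    best_box, best_dist = None, max_dist
    for box in detect_pedestrians(frame):
        d = np.linalg.norm(embed(crop(frame, box)) - guider_embedding)
        if d < best_dist:
            best_box, best_dist = box, d
    return best_box

# Guider replacement then amounts to re-computing guider_embedding from a
# single snapshot of the new guider, with no network retraining:
#   guider_embedding = embed(crop(first_frame, chosen_box))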
Abstract (Chinese) ii
Abstract (English) iii
Table of Contents iv
List of Figures vi
List of Tables ix
Chapter 1 Introduction 1
1.1 Research Motivation 1
1.2 System Overview 2
1.3 Thesis Organization 3
Chapter 2 Related Work 4
2.1 Pedestrian Detection 4
2.2 Guider Identification 10
Chapter 3 Pedestrian Detection 13
3.1 SSD Network Architecture 13
3.2 Default Box Generation 15
3.3 Predicting Pedestrian Probabilities and Locations 17
3.4 Network Training 18
3.5 Detection Results 23
Chapter 4 Guider Identification 24
4.1 Principle of the Siamese Model 24
4.2 Guider Identification System 26
4.3 Training Dataset 28
4.4 Experimental Results 31
Chapter 5 Vehicle Control System 33
5.1 Guider Tracking 33
5.2 Vehicle Control 34
Chapter 6 Experimental Results and Discussion 38
6.1 Experimental Equipment 38
6.2 Pedestrian Detection Results and Demonstration 38
6.3 Guider Identification Results and Demonstration 43
Chapter 7 Conclusions and Future Work 50
References 51
