National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)
Detailed Record
Author: 唐雪玲
Author (English): Hsueh-Ling Tang
Title: 基於3D點雲的多特徵行人偵測
Title (English): Multi-cue Pedestrian detection from 3D point cloud data
Advisor: 花凱龍
Advisor (English): Kai-Lung Hua
Committee members: 花凱龍、鄭文皇、陳永耀、郭景明、鍾國亮
Committee members (English): Kai-Lung Hua, Wen-Huang Cheng, Yung-Yao Chen, Jing-Ming Guo, Kuo-Liang Chung
Oral defense date: 2017-06-26
Degree: Master's
Institution: National Taiwan University of Science and Technology
Department: Department of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Year of publication: 2017
Academic year of graduation: 105 (ROC calendar)
Language: English
Number of pages: 35
Keywords (Chinese): 激光雷達、行人偵測
Keywords (English): Lidar, Pedestrian detection
Usage statistics:
  • Cited by: 0
  • Views: 212
  • Downloads: 0
  • Bookmarked: 0
Abstract (Chinese):
Pedestrian detection is a critically important problem in today's autonomous driving technology. To prevent traffic collisions, pedestrians must be detected accurately both day and night. Because visual images captured at night are not sufficiently clear, this thesis proposes a method that performs pedestrian detection with a high-definition LIDAR and does not require visual images. To cope with the problem that LIDAR point clouds contain too few points at long range, a new approach is proposed to improve accuracy: the three-dimensional point cloud is mapped onto a two-dimensional plane through a distance-aware expansion, and the corresponding 2D contour and its associated 2D features are extracted. In addition to hand-crafted features, deep learned features are also considered; by classifying on the combination of multiple features, the proposed method achieves results 23% better than existing techniques in terms of F1-measure.
Abstract (English):
Pedestrian detection is one of the key technologies of driver assistance systems. In order to prevent potential collisions, pedestrians should always be accurately identified, whether during the day or at night. Since visual images captured at night are not clear, this thesis proposes a method for recognizing pedestrians using a high-definition LIDAR without visual images. To handle the problem of sparse points at long distances, a novel solution is introduced to improve performance: the proposed method maps the three-dimensional point cloud onto a two-dimensional plane through a distance-aware expansion, and the corresponding 2D contour and its associated 2D features are then extracted. In addition to hand-crafted features, deep learned features are also considered in this thesis. Based on these multiple cues, the proposed method outperforms state-of-the-art approaches by 23% in terms of F1-measure.
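As an illustration of the projection step described in the abstract, the sketch below shows one plausible form of a distance-aware expansion in Python: each 3D point of a candidate cluster is projected onto the ground plane and stamped into an occupancy grid as a patch whose size grows with the point's range, so that sparsely sampled distant objects still produce a connected 2D silhouette. The function name, grid resolution, and expansion formula are assumptions made for this sketch, not the thesis's actual implementation.

```python
import numpy as np

def project_with_distance_aware_expansion(points, cell=0.05, base_r=0.05, gain=0.01):
    """Project a 3D point cluster (N x 3, sensor at the origin) onto the ground plane.

    Each point is stamped into an occupancy grid as a square patch whose size
    grows with the point's range, so that sparsely sampled distant objects still
    yield a connected 2D silhouette. All parameters are illustrative guesses.
    """
    xy = points[:, :2]                           # keep the two ground-plane coordinates
    rng = np.linalg.norm(points, axis=1)         # range of each point from the sensor
    radius = base_r + gain * rng                 # distance-aware expansion radius (metres)

    pad = radius.max()
    mins = xy.min(axis=0) - pad
    size = np.ceil((xy.max(axis=0) + pad - mins) / cell).astype(int) + 1
    grid = np.zeros(size[::-1], dtype=np.uint8)  # rows index y, columns index x

    for (x, y), r in zip(xy, radius):
        half = int(np.ceil(r / cell))            # patch half-width in grid cells
        cx, cy = ((np.array([x, y]) - mins) / cell).astype(int)
        grid[max(cy - half, 0):cy + half + 1,
             max(cx - half, 0):cx + half + 1] = 1
    return grid                                  # binary silhouette; contour/feature extraction follows

# Toy usage: a synthetic cluster standing roughly 12 m in front of the sensor.
cluster = np.random.randn(200, 3) * 0.3 + np.array([12.0, 1.0, 0.9])
mask = project_with_distance_aware_expansion(cluster)
print(mask.shape, mask.sum())
```

The multi-cue classification mentioned in the abstract can likewise be sketched, under the assumption that hand-crafted contour descriptors and deep learned features are simply concatenated and fed to a standard classifier; the feature dimensions, placeholder data, and the choice of a linear SVM here are illustrative only, not the thesis's exact pipeline.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder features: dimensions and the linear SVM are illustrative assumptions.
hand_crafted = np.random.rand(500, 64)     # e.g. 2D contour/shape descriptors per candidate
deep = np.random.rand(500, 4096)           # e.g. CNN activations for the same candidates
labels = np.random.randint(0, 2, 500)      # 1 = pedestrian, 0 = background

features = np.hstack([hand_crafted, deep])            # concatenate the multiple cues
classifier = SVC(kernel="linear").fit(features, labels)
print("training accuracy:", classifier.score(features, labels))
```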
Abstract (Chinese)
Abstract
Acknowledgement
Contents
List of Figures
List of Tables
1 Introduction
2 Related Work
3 Method
3.0.1 Overview
3.0.2 Hand-crafted Feature Extraction
3.0.3 Deep learned Feature Extraction
4 Experimental Design
4.0.1 Experimental condition
5 Experimental Result
5.0.1 Examples of failures and future work
6 Conclusions
6.1 Future Work
References
Letter of Authority