
臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)


Detailed Record

Author: 蔡曜隆 (Yao-Long Cai)
Title: 使用數位相機及影像模糊語意表達法執行自主式助行器路徑偵測
Title (English): Path Detection Using Digital Camera and Semantics-based Vague Image Representation for Autonomous Walking Aids Robot
Advisor: 李祖添 (Tsu-Tien Li)
Degree: Master's
Institution: 中原大學 (Chung Yuan Christian University)
Department: Graduate Institute of Electrical Engineering
Discipline: Engineering; Electrical and Computer Engineering
Document type: Academic thesis
Year published: 2013
Graduation academic year: 101 (2012-2013)
Language: Chinese
Pages: 77
Keywords: FPGA; semantics-based vague image representation (SVIR); pattern recognition; autonomous walking aid robot; path detection
Citations: 0; Views: 224; Downloads: 0; Bookmarked: 0
In recent years, the growing shortage of medical-care manpower has drawn increasing research attention to autonomous assistive devices for home care. Designing an autonomous assistive device that can substitute for human labor is therefore a real necessity. This study focuses on the design of an autonomous walking aid for elderly rehabilitation, with the goal of producing a path-detection system that is at once portable, fast, low-cost, and highly reliable.
In this study, a digital camera detects the baseboard along indoor walls, which serves as the walking aid's travel route. To achieve this, the system must provide color detection, edge detection, pattern recognition, and deviation (steering-angle) detection. To reduce the system's payload, it must be realized on a low-power, small-footprint single chip while retaining high-speed image-processing capability. Beyond these requirements, the system adopts a Field Programmable Gate Array (FPGA) chip together with a low-cost, high-resolution digital camera to keep development costs down.
Finally, to meet the above requirements, this study adopts the Semantics-based Vague Image Representation (SVIR) for baseboard feature extraction. Compared with conventional PC-based image-processing algorithms, SVIR has a low computational load, which makes it especially suitable for low-power, small-footprint chip-level designs. The experimental results of this thesis show a steering-correction resolution of 0.5° and an angle-measurement error of about 1°. This not only satisfies the practical operating requirements of an active walking aid for the elderly; the design concept can also serve as a template for future robotic path tracking.

In recent years, due to the shortage of medical-care manpower, research on autonomous assistive devices has become a mainstream topic in home-care systems. Given this, designing an autonomous walking-aid robot to relieve the manpower burden is urgent. This research therefore targets walking-aid robots for elderly rehabilitation and designs a portable, real-time, low-cost, and reliable path-detection system.
The detection mechanism of this project is based on recognizing the baseboard in an indoor environment with a digital camera. To this end, the entire system is implemented on a single chip with real-time color detection, edge detection, pattern recognition, and steering-angle determination. Moreover, a lower development cost is achieved by prototyping on a Field-Programmable Gate Array (FPGA) paired with a low-cost, high-resolution camera.
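The color-detection and binarization step described above can be illustrated in software. This is not the thesis's FPGA implementation — it is a minimal NumPy sketch in which the baseboard is assumed to occupy a fixed per-channel RGB range; the bounds below are made up for illustration (the thesis selects its own thresholds in a chosen color space, see Section 2.2).

```python
import numpy as np

# Hypothetical per-channel RGB bounds for the baseboard color.
LO = np.array([90, 40, 20], dtype=np.uint8)
HI = np.array([150, 90, 60], dtype=np.uint8)

def binarize_baseboard(rgb):
    """Return a 0/1 mask marking pixels whose RGB values fall inside
    the assumed baseboard color range [LO, HI] on every channel."""
    return np.all((rgb >= LO) & (rgb <= HI), axis=-1).astype(np.uint8)

# Toy 2x2 frame: two baseboard-colored pixels, two background pixels.
frame = np.array([[[120, 60, 40], [255, 255, 255]],
                  [[10, 10, 10],  [120, 60, 40]]], dtype=np.uint8)
mask = binarize_baseboard(frame)  # -> [[1, 0], [0, 1]]
```

The resulting binary mask is what the later edge-detection and pattern-recognition stages would consume; on the FPGA the same thresholding is a per-pixel comparator rather than an array operation.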
Finally, based on these design criteria, this research adopts the Semantics-based Vague Image Representation (SVIR) algorithm for baseboard feature extraction. Compared with traditional PC-based image processing, SVIR imposes a low computing load on the chip and is especially suitable for low-power, small-footprint designs. The experimental results of this thesis show a steering-angle resolution of 0.5° with a calibration tolerance of 1°. The design not only satisfies the criteria for an elderly autonomous walking-aid robot but also offers a paradigm for future robotic path tracking.
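The steering-angle determination can be sketched as a least-squares line fit over detected baseboard-edge pixels. The actual camera geometry and calibration are in Chapter 3 and are not reproduced here; `edge_points` and the quantization step are illustrative assumptions, with the 0.5-degree step chosen only to match the resolution quoted in the abstract.

```python
import math

def steering_angle(edge_points, step=0.5):
    """Fit a line y = a*x + b through baseboard edge pixels by least
    squares, then quantize its inclination (in degrees, relative to
    horizontal) to `step`-degree increments."""
    n = len(edge_points)
    sx = sum(x for x, _ in edge_points)
    sy = sum(y for _, y in edge_points)
    sxx = sum(x * x for x, _ in edge_points)
    sxy = sum(x * y for x, y in edge_points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope of the fit
    angle = math.degrees(math.atan(a))
    return round(angle / step) * step

# Edge pixels lying exactly on y = 0.1*x: atan(0.1) is about 5.71 deg,
# which quantizes to 5.5 deg at a 0.5-deg step.
pts = [(0, 0.0), (10, 1.0), (20, 2.0), (30, 3.0)]
```

The sign of the returned angle would then drive the walking aid's steering correction toward the baseboard-parallel heading.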

Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
List of Abbreviations
Chapter 1  Introduction
1.1  Motivation
1.2  Objectives
1.3  Methodology
1.4  Literature Review
1.4.1  Image Sensing
1.4.2  Laser Sensing
1.4.3  Other Sensing Methods
1.5  Thesis Organization
Chapter 2  Image Feature Extraction Based on the Semantics-based Vague Image Representation
2.1  Introduction
2.2  Color Detection
2.2.1  Digital Image Capture
2.2.2  Color Spaces and Binarization
2.3  Semantics-based Vague Image Representation (Yu et al., 2013)
2.3.1  Bipolar Encoding
2.3.2  Vertical Overlap Operation
2.3.3  Horizontal Merge Operation
2.4  Experimental Results and Discussion
2.5  Conclusion
Chapter 3  Baseboard Detection Design
3.1  Introduction
3.2  Pattern Classification Rules
3.2.1  Bipolar Code Evolution
3.2.2  Pattern Start-Code Classification
3.2.3  Main Pattern Contour-Code Classification
3.2.4  Pattern Assembly and Recognition
3.3  Walking-Aid Heading Computation
3.4  Experimental Results
3.5  Conclusion
Chapter 4  Conclusions and Future Work
4.1  Introduction
4.2  Chip Resource Usage and Discussion
4.3  Review of Contributions
4.3.1  Portability and Low Power Consumption
4.3.2  Real-Time Computation
4.3.3  Reliable Path Navigation
4.4  Future Work
4.5  Conclusion
References
List of Figures
Fig. 1.1  CCD structure; the signal is amplified for the first time only at the output circuit.
Fig. 1.2  CMOS structure; each element's signal is amplified before being transferred to the output circuit.
Fig. 1.3  Mapping the environment with a laser rangefinder.
Fig. 1.4  Using a LIDAR sensor to detect the road boundaries ahead on both sides and obstacles on the ground.
Fig. 1.5  Traditional barcode.
Fig. 1.6  QR code.
Fig. 1.7  RFID tag.
Fig. 2.1  Finding the best traversable path from the color features of different terrains.
Fig. 2.2  Lane-sensing system built from grayscale images and histograms.
Fig. 2.3  Filtering the light source with color filters so that each pixel carries a single color.
Fig. 2.4  Bayer pattern.
Fig. 2.5  Pattern classification for linear interpolation.
Fig. 2.6  RGB color space.
Fig. 2.7  HSV color space.
Fig. 2.8  YUV color space.
Fig. 2.9  Color-detection schematic.
Fig. 2.10  Binarized baseboard image after color detection.
Fig. 2.11  Sub-window.
Fig. 2.12  Complementary pattern-encoding result.
Fig. 2.13  Example of SVIR code carrying.
Fig. 2.14  Vertical-overlap encoding evolution of a single trapezoid and multiple bars.
Fig. 2.15  Trapezoidal patterns superimposed into an approximate bar pattern.
Fig. 2.16  Example of horizontal merging of trapezoids.
Fig. 2.17  Bipolar encoding of a sub-window with SVIR.
Fig. 2.18  Approximate pattern of Fig. 2.17.
Fig. 2.19  Trapezoid feature-operation output and its approximation.
Fig. 2.20  Approximate pattern after the pattern feature operation.
Fig. 2.21  SVIR timing diagram.
Fig. 3.1  Start-contour classification of the original image.
Fig. 3.2  Bipolar encoding of a triangle and its upside-down counterpart.
Fig. 3.3  SVIR approximation of straight-line contours as parallelograms.
Fig. 3.4  Contour-encoding example.
Fig. 3.5  Pattern combinations of different straight-line segments.
Fig. 3.6  Spatial relationship of the camera mounting.
Fig. 3.7  Extracting boundary coordinates from the baseboard image.
Fig. 3.8  Experimental platform and simulated environment.
Fig. 3.9  Baseboard sampling ranges.
Fig. 3.10  Baseboard image patterns.
Fig. 3.11  Pattern recognition for sampling range (2,3).
Fig. 3.12  Classification-rule outputs for three typical baseboard patterns.

List of Tables

Table 2.1  Linear interpolation.
Table 2.2  Special case of S′[5].
Table 2.3  Numeric codes of bipolar encoding and their pattern meanings.
Table 2.4  Resource-usage statistics of the SVIR design.
Table 3.1  Pattern-class codes and the original patterns they represent.
Table 3.2  Pattern-classification rule table.
Table 3.3  Possible combinations of different straight-line segments.
Table 4.1  Chip resource usage.
Table 4.2  Chip power-consumption statistics.
References

行政院內政部 (Ministry of the Interior, Executive Yuan), Population Policy White Paper: Low Fertility, Aging, and Immigration, Taiwan (R.O.C.): Ministry of the Interior, Executive Yuan, 2008.

Altera. (2013, May). Cyclone IV FPGA Device Family Overview [Online].
Available: http://www.altera.com/literature/lit-cyclone-iv.jsp

Assidiq, A.A.M., Khalifa, O.O., Islam, R., and Khan, S., “Real time lane detection for autonomous vehicles,” IEEE Int. Conf. on Computer and Communication Engineering (ICCCE), Kuala Lumpur, Malaysia, pp. 82-88, 2008.

Batchelor, B.G., Machine Vision Handbook, USA: Springer Press, 2012.

Davies, E.R., Computer & Machine Vision: Theory Algorithms Practicalities, Fourth Edition, USA: Academic Press, 2012.

Evans, J.M., Chang, T., Hong, T.H., Bostelman, R., and Bunch, W.R., Three Dimensional Data Capture in Indoor Environments for Autonomous Navigation, USA: US Department of Commerce, Technology Administration, National Institute of Standards and Technology, 2002.

Fontanelli, D., Palopoli, L., and Rizano, T., “High speed robotics with low cost
hardware,” IEEE 17th Conf. on Emerging Technologies & Factory Automation (ETFA), Kraków, Poland, pp. 1-8, 2012.

Gallo, O., Manduchi, R., and Rafii, A., “Robust curb and ramp detection for safe parking using the Canesta TOF camera,” IEEE Conf. on Computer Vision and Pattern Recognition Workshops, AK, USA, pp. 1-8, 2008.

Gaspar, J., Winters, N., and Santos-Victor, J., “Vision-based navigation and
environmental representations with an omnidirectional camera,” IEEE Trans. on Robotics and Automation, Vol. 16, issue 6, pp. 890-898, 2002.

Guizzo, E., “Three Engineers, Hundreds of Robots, One Warehouse,” IEEE Spectrum, Vol. 45, issue 7, pp. 26-37, 2008.

Guo, C., Mita, S., and McAllester, D., “Stereovision-based road boundary detection for intelligent vehicles in challenging scenarios,” IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, MO, USA, pp. 1723-1728, 2009.

Guo, Y., Song, A., Bao, J., and Zhang, H., “Optimal path planning in field based on traversability prediction for mobile robot,” IEEE Int. Conf. on Electric Information and Control Engineering (ICEICE), Wuhan, China, pp. 563-566, 2011.

Hata, A.Y., and Wolf, D.F., “Outdoor Mapping Using Mobile Robots and Laser Range Finders,” IEEE Electronics, Robotics and Automotive Mechanics Conference, Morelos, Mexico, pp. 209-214, 2009.

Hautiere, N., Labayrade, R., Perrollaz, M., and Aubert, D., “Road Scene Analysis by Stereovision: a Robust and Quasi-Dense Approach,” IEEE Int. Conf. on Control, Automation, Robotics and Vision, Singapore, pp. 1-6, 2006.

Intel, (2007) Color Models [Online]. Available:
http://software.intel.com/sites/products/documentation/hpc/ipp/ippi/ippi_ch6/ch6_color_models.html

Intel, Mobile 4th Generation Intel® Core™ Processor Family, USA: Intel, June 2013.

Jeong, P., and Nedevschi, S., “Efficient and robust classification method using combined feature vector for lane detection,” IEEE Trans. on Circuits and Systems for Video Technology, pp. 528-537, 2005.

Kobayashi, H., “A new proposal for self-localization of mobile robot by self-contained 2D barcode landmark,” Proc. SICE Annual Conference (SICE), Akita, Japan, pp. 2080-2083, 2012.

Kuyatt, B.L., Weaver, R., and Merlo, P., “Image capture methods,” Advanced Materials & Processes, pp. 25-28, 2005.

Lee, J-W., and Cho, J-S., “Effective Lane Detection and Tracking Method Using Statistical Modeling of Color and Lane Edge-Orientation,” IEEE Int. Conf. on Computer Sciences and Convergence Information Technology (ICCIT), Seoul, Korea, pp. 1586-1591, 2009.

Li, Q., Zheng, N., and Cheng, H., “Springrobot: a prototype autonomous vehicle and its algorithms for lane detection,” IEEE Trans. on Intelligent Transportation Systems, Vol. 5, Issue. 4, pp. 300-308, 2004.

Malvar, H.S., He, L-W., and Cutler, R., “High-quality linear interpolation for
demosaicing of Bayer-patterned color images,” IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP '04), Quebec, Canada, Vol. 3, pp. 485-488 , 2004.

Microsoft, (2013) About YUV Video [Online]. Available:
http://msdn.microsoft.com/en-us/library/windows/desktop/bb530104(v=vs.85).aspx

Microsoft, (2013) Color [Online]. Available:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa511283.aspx

Park, S., and Hashimoto, S., “Autonomous Mobile Robot Navigation Using Passive RFID in Indoor Environment,” IEEE Trans. on Industrial Electronics, Vol. 56, issue 7, pp. 2366-2373, 2009.

Pomerleau, D., “RALPH: rapidly adapting lateral position handler,” Proc. Intelligent Vehicles '95 Symposium, Detroit, USA, pp. 506-511, 1995.

Ramanath, R., Snyder, W.E., and Bilbro, G.L., “Demosaicking methods for Bayer color arrays,” Journal of Electronic Imaging, pp. 306-315, 2002.

Saudi, A., Teo, J., Hanafi, M., Hijazi, A., and Sulaiman, J., “Fast lane detection with Randomized Hough Transform,” IEEE Int. Symp. on Information Technology, Vol. 4, Kuala Lumpur, Malaysia, pp. 1-5, 2008.

Schlenoff, C., Madhavan, R., Albus, J., Messina, E., Barbera, T., and Balakirsky, S., “Fusing disparate information within the 4D/RCS architecture,” IEEE Int. Conf. on Information Fusion, Pennsylvania, USA, 2005.

Smith, A.R., “Color Gamut Transform Pairs,” ACM Siggraph Computer Graphics. Vol.
12. No. 3., pp. 12-19, 1978.

Snyder, W.E., and Qi, H., Machine Vision, England: Cambridge University Press, 2010.

Sonka, M., Hlavac, V., and Boyle, R., Image Processing, Analysis, and Machine Vision, USA: PWS Press, 1999.

Teledyne DALSA, (2013) CCD vs. CMOS [Online]. Available:
http://www.teledynedalsa.com/imaging/knowledge-center/appnotes/ccd-vs-cmos/

Togashi, H., and Yamada, S., “Preliminary study on vehicle-to-roadside system using RFIDs for detecting road shoulders,” IEEE Intelligent Vehicles Symposium, Xi’an, China, pp. 1148-1154, 2009.

Verschoor, C.R., and Visser, A., Integrating disparity and edge detection algorithms to autonomously follow linear-shaped structures at low altitude, Netherlands: University of Amsterdam, 2013.

Von Reyher, A., Joos, A., and Winner, H., “A lidar-based approach for near range lane detection,” IEEE Proc. on Intelligent Vehicles Symposium, Nevada, USA, pp. 147-152, 2005.

Wang, C-C., Huang, S-S., and Fu, L-C., “Driver assistance system for lane detection and vehicle recognition with night vision,” IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS 2005), Alberta, Canada, pp. 3530-3535, 2005.

Weis, S.A., RFID (Radio Frequency Identification): Principles and Applications, USA: University of Technology, 2011.

Wray, B.R., Bar Code Data Collection in the Blood Bank, USA: Computype, 2002.

Xu, Z., “Laser rangefinder based road following,” IEEE Int. Conf. on Mechatronics and Automation, Vol. 2, Ontario, Canada, pp. 713-717, 2005.

Yu, Y-H., Ha, Q.P., Kou, K-Y., and Lee, T-T., “Feature Extraction Using Vague Semantics Approach to Pattern Recognition,” IEEE Int. Conf. on Control, Automation and Information Sciences (ICCAIS), Ho Chi Minh City, Vietnam, pp. 126-131, 2012.

Yu, Y-H., FPGA-Based Formation Control of Multiple Ubiquitous Indoor Robots,
Australia: University of Technology, 2011.

Zhang, H-B., Yuan, K., Qimel-S., and Zhou, Q-R., “Visual navigation of an automated guided vehicle based on path recognition,” IEEE Proc. Int. Conf. on Machine Learning and Cybernetics, Vol. 6 , Shanghai, China, pp. 3877-3881, 2004.
Electronic full text (access restricted to the on-campus systems and IP range of the author's university).