Author: Hao Hsu (徐顥)
Title: Robot Vision-Based Guidance for the Visually Impaired (以機器視覺為基之智能導盲輔助系統開發)
Advisor: Du-Ming Tsai (蔡篤銘)
Committee members: Tien-Lung Sun, Chi-Jie Lu
Oral defense date: 2013-11-22
Degree: Master's
Institution: Yuan Ze University
Department: Department of Industrial Engineering and Management
Discipline: Engineering
Field: Industrial Engineering
Document type: Academic thesis
Year of publication: 2013
Graduation academic year: 102 (2013-2014)
Language: Chinese
Number of pages: 149
Keywords (Chinese): machine vision; depth image; guidance for the visually impaired; terrain recognition
Keywords (English): robot guide dog; visually impaired; depth image; guidance
The eyes are among the most important of the human sense organs, which is why they are often called the windows of the soul: more than 80% of the information processed in human cognition is received through them. A person with impaired or lost vision therefore faces great inconvenience in getting around. According to disability statistics from the Ministry of the Interior, the visually impaired population of Taiwan had reached 56,000 by 2012 (ROC year 101). Most visually impaired people currently rely on a white cane, or in a few cases a guide dog, as a mobility aid when going out, but the environmental information these aids provide is still insufficient. This study therefore investigates the use of image processing techniques to supply visually impaired users with information about their surroundings, so as to improve their mobility and independence.

This study analyzes the depth-image features of various terrain types under different environmental conditions, uses these features to assess the feasibility of terrain recognition, and designs decision rules for terrain classes such as flat ground and obstacles. An Xtion sensor, whose hardware specification is similar to the Kinect's, is adopted as the source of depth-image information; being smaller and lighter than the Kinect, it is better suited to being worn by a visually impaired user without adding to their load. The basic system prototype designed in this study is intended to protect visually impaired users from the dangers and injuries that unknown terrain can cause while walking, and the terrain information the system provides is expected to increase their safety and independence.
Blind or visually impaired people lack the independence to get around freely, especially in a new environment. The mobility aids of the blind are mainly long canes and guide dogs. Long canes are commonly used by the blind as a simple tool to detect objects within a very limited area of the path. Guide dogs give the blind more flexibility to travel in unfamiliar environments. The main functions of a guide dog are to maneuver its owner around obstacles, indicate the locations of curbs and stairs, and steer clear of hazardous areas. Despite these advantages, training a guide dog takes a very long time and is also very costly.

Due to recent advances in computer vision and mobile robotics, the robot guide dog is an attractive alternative mobility aid for blind people. A robot guide dog should be equipped with the capability to detect the pathway and obstacles in front of its owner. This research is a feasibility study of vision-assisted guidance for the visually impaired using depth images and image processing techniques. The proposed robot vision system is implemented and tested with the Xtion depth sensor, which is essentially the same as the Kinect, originally developed by PrimeSense. The Xtion device provides both a depth image and a color image; it is more compact and lighter and is thus more suitable for blind people to wear. Basic scene entities encountered in the environment have been evaluated with a simple prototype of the vision-assisted guidance system.
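The classification idea sketched in the abstracts, labelling transitions along a depth scanline as flat ground, a step, an obstacle, or a hazardous drop, can be illustrated with a minimal sketch. The function, the height-difference heuristic, and all threshold values below are illustrative assumptions for exposition only; the thesis derives its actual decision rules in Chapter 3.

```python
def classify_scanline(heights, flat_tol=0.03, step_tol=0.25):
    """Label the transitions along one vertical scanline of a depth image.

    `heights` are ground-plane heights (in metres) recovered from the
    depth values of one image column, ordered from far to near.  The
    thresholds are illustrative placeholders, not the thesis's rules.
    """
    labels = []
    for prev, cur in zip(heights, heights[1:]):
        diff = cur - prev
        if abs(diff) <= flat_tol:
            labels.append("flat")       # surface continues level
        elif abs(diff) <= step_tol:
            labels.append("step")       # kerb- or stair-sized rise/fall
        elif diff > 0:
            labels.append("obstacle")   # surface rises sharply
        else:
            labels.append("drop-off")   # hazardous drop, e.g. a ledge
    return labels
```

In the thesis, per-scanline labels of this kind are further consolidated (for example by a local-mode vote over neighbouring regions, Sections 3.7 and 3.8) before the dominant terrain is reported to the user.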
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Tables
List of Figures
Chapter 1 Introduction
1.1 Research Background and Motivation
1.2 Research Scope and Limitations
1.3 Overview of the Research Method
1.4 Thesis Organization
Chapter 2 Literature Review
2.1 Applications of Vision Techniques to Mobile Robots
2.2 Applications of the Kinect in Other Fields
Chapter 3 Research Method
3.1 Light Coding 3D Measurement Technology
3.1.1 The Kinect Sensor
3.1.2 The Xtion Sensor
3.2 The "Eyes" of the Visually Impaired
3.3 Overview of the Research Method
3.3.1 Depth Range and Accuracy
3.3.2 Stability of Depth Measurement
3.3.3 System Limitations
3.3.4 Sensor Wearing Position
3.4 Architecture of the Guidance System
3.5 Terrain-Image Transformation
3.5.1 Angular Variation of Depth Values
3.5.2 Terrain Characteristics
3.5.3 Terrain Run Lengths and Ordering
3.6 Decision Rules for Terrain Classes and Terrain-Image Construction
3.6.1 Flat Ground and Obstacles
3.6.2 Steps
3.6.3 Stairs
3.6.4 Hazardous Areas (Drop-offs and Potholes)
3.6.5 Overhanging Objects
3.6.6 Unclassifiable Terrain (Insufficient Information)
3.6.7 Terrain-Image Construction
3.7 Dominant Terrain Along One-Dimensional Scanlines
3.8 Local-Mode Results
3.9 Advanced Terrain Classification: Corners
3.10 Door Detection
3.10.1 Detection of Obstacle (Wall) Regions
3.10.2 Door-Frame Line Detection
3.10.3 Depth-Discontinuity Detection Along Door-Frame Lines
3.10.4 Door-Width Computation
3.11 Flowchart Overview
3.12 Summary
Chapter 4 Experimental Results and Analysis
4.1 System Architecture and Experimental Environment
4.2 Effect of Depth-Contour Angle Variation on the Terrain Image
4.3 Classification Results for Each Terrain Class
4.3.1 Step Up
4.3.2 Step Down
4.3.3 Stairs Up
4.3.4 Stairs Down
4.3.5 Obstacles and Unclassifiable Terrain
4.3.6 Hazardous Areas
4.3.7 Ditches and Overhanging Objects
4.3.8 Corners
4.3.9 Doors
4.4 User Feedback
4.5 Summary
Chapter 5 Conclusions and Future Work
References
Appendix: Program Description