National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Author: 陳宗緻
Author (English): Tsung-Chih Chen
Title: 以電腦視覺技術做居家及病房內活動之防護與監測
Title (English): Protection and Monitoring of Home and Clinical Ward Activities Using Computer Vision
Advisor: 陳士農
Advisor (English): Shih-Nung Chen
Degree: Master's
Institution: Asia University (亞洲大學)
Department: Master's Program, Department of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Year of publication: 2005
Graduation academic year: 94 (ROC calendar)
Language: Chinese
Number of pages: 59
Keywords (Chinese): 電腦視覺 (computer vision), 跌倒 (fall), 影像處理 (image processing)
Keywords (English): computer vision, falls, image processing
Record statistics:
  • Cited by: 4
  • Views: 314
  • Downloads: 0
  • In bookmark lists: 3
Abstract (translated from Chinese):
This thesis proposes a protection and monitoring system for homes and hospital wards. When the system detects an abnormal event such as a fall, it can immediately raise an alarm on the local machine, notify a remote computer over the network, or send an SMS message or multimedia picture to a mobile phone. This reduces the burden on caregivers and avoids the delays caused by understaffing or oversight, which can cost the golden window for rescue.
The system analyzes frames captured by a webcam. The video is first split into individual frames, and a person-free frame is used to construct the background of the protected environment. The background is subtracted from the foreground and the difference is binarized; because this step produces considerable noise, a median filter is applied to remove it. Pixel counts are then taken along the horizontal and vertical directions; this step filters candidate objects by size for human-shape analysis and yields each object's boundary coordinates, after which human-shape detection can be performed. Finally, the detected human-shape boundary is used to judge the behavioral state, and if an abnormal event such as a fall occurs, an alarm is raised to notify the caregiver.
The distinguishing feature of the monitoring stage is that it uses the number of consecutive non-zero pixels to filter out objects that should not be monitored. This value can be adjusted to the monitored environment: a smaller value gives higher detection sensitivity, a larger value lower sensitivity. This approach quickly isolates the target of interest and classifies the detection result as normal or abnormal behavior, so the system recognizes abnormal events almost in real time. In the experiments, many different actions were simulated, including standing, lying prone, squatting, and falls from various angles and directions, and none of them was misclassified. We believe this work is helpful for home and ward care and monitoring.
Abstract (English):
This thesis presents a fall-monitoring system for home-care and ward-care applications. Integrated with alarm devices and SMS (or MMS) services, the system can reduce the burden on caregivers and avoid the possible loss of life caused by delayed treatment.
The system first analyzes the individual frames extracted from the video captured by a webcam, then applies background subtraction against a background constructed for the protected environment and binarizes the difference. Because the subtraction step introduces noise, a median filter is applied for noise removal. Image projection is then used to estimate the size of the candidate object and its boundary coordinates for human-shape detection. Finally, behaviors such as falls are judged from the detected human shape, and the alarm system is triggered immediately.
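As a concrete illustration of the frame-processing pipeline described above, the following is a minimal sketch using OpenCV and NumPy; it is not the thesis's original implementation, and the threshold and kernel values are illustrative assumptions.

# Minimal sketch of the pipeline: background subtraction, binarization,
# median filtering, and row/column pixel projections. Assumes OpenCV and NumPy;
# threshold and kernel sizes are illustrative, not the thesis's values.
import cv2
import numpy as np

def foreground_mask(frame_bgr, background_bgr, thresh=30, median_ksize=5):
    """Binary foreground mask for one frame against a static, person-free background."""
    frame = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    background = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(frame, background)                          # background subtraction
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)  # binarization
    return cv2.medianBlur(mask, median_ksize)                      # median filter removes noise

def bounding_box(mask):
    """Boundary coordinates of the foreground object from horizontal/vertical projections."""
    rows = np.count_nonzero(mask, axis=1)      # non-zero pixels per row
    cols = np.count_nonzero(mask, axis=0)      # non-zero pixels per column
    ys, xs = np.flatnonzero(rows), np.flatnonzero(cols)
    if ys.size == 0 or xs.size == 0:
        return None                            # nothing detected in this frame
    return xs[0], ys[0], xs[-1], ys[-1]        # left, top, right, bottom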
The key point of the fall-monitoring stage is to use the number of consecutive non-zero pixels as the reference for filtering out unwanted objects. This value is tunable to the monitored environment: the smaller the value, the higher the sensitivity, and vice versa. This method quickly isolates the objects of interest and judges unusual behavior almost in real time. Different human behaviors were simulated, including standing, lying prone, squatting, and falling at various angles. The experimental results show no misjudgments and confirm that this fall-monitoring system is effective for home and ward care of people in need.
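The abstract does not spell out the exact run-length filter or the fall-decision rule. The sketch below, under the same assumptions as above, shows one plausible reading: an object is kept only if it contains a sufficiently long run of consecutive non-zero pixels, and a fall is flagged with a simple width-to-height heuristic on the detected bounding box. The heuristic and parameter values are illustrative, not the thesis's criteria.

# Consecutive non-zero run filter and an illustrative fall heuristic;
# `min_run` plays the role of the tunable sensitivity value described above.
import numpy as np

def longest_run(line):
    """Length of the longest run of consecutive non-zero values in a 1-D array."""
    best = run = 0
    for v in line:
        run = run + 1 if v else 0
        best = max(best, run)
    return best

def passes_run_filter(mask, min_run):
    """Keep the object only if some column or row contains at least `min_run`
    consecutive non-zero pixels; a smaller `min_run` means higher sensitivity."""
    return any(longest_run(col) >= min_run for col in mask.T) or \
           any(longest_run(row) >= min_run for row in mask)

def looks_like_fall(box, ratio=1.2):
    """Illustrative heuristic only: a bounding box much wider than it is tall
    is treated as a possible fall and would trigger the alarm."""
    left, top, right, bottom = box
    width, height = right - left + 1, bottom - top + 1
    return width > ratio * height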
Table of contents:
Abstract (Chinese) ... i
Abstract (English) ... ii
Table of Contents ... iii
List of Figures ... v
List of Tables ... vii
Chapter 1  Introduction ... 1
  1.1 Research Background ... 1
  1.2 Research Motivation ... 3
  1.3 Research Objectives ... 3
  1.4 Research Methods ... 4
  1.5 Thesis Organization ... 5
Chapter 2  Related Work ... 6
  2.1 Related Research on Computer Vision Techniques ... 6
    2.1.1 Application Areas ... 7
    2.1.2 Behavior Detection ... 8
  2.2 Related Research on Behavior Protection and Monitoring ... 11
    2.2.1 Wearable Digital Micro-Monitoring Devices for Stroke or Falls ... 11
    2.2.2 Analysis and Detection of Human Fall Behavior ... 14
    2.2.3 Comparison of Related Fall-Detection Methods ... 17
Chapter 3  Human-Shape Detection and Abnormal-Behavior Judgment ... 18
  3.1 System Architecture ... 18
  3.2 Software Environment ... 18
  3.3 System Flow ... 19
  3.4 Video Frame Extraction ... 21
  3.5 Background Subtraction ... 21
  3.6 Binarization ... 23
  3.7 Noise Removal ... 25
  3.8 Pixel-Count Filtering ... 27
  3.9 Human-Shape Detection ... 28
  3.10 Abnormal-Behavior Judgment ... 29
  3.11 Alarm Generation ... 32
Chapter 4  Experimental Results ... 34
  4.1 Experimental Environment ... 34
  4.2 Human-Shape Detection Results ... 34
  4.3 Abnormal-Behavior Judgment Results ... 37
Chapter 5  Conclusions and Future Research Directions ... 44
  5.1 Conclusions ... 44
  5.2 Future Research Directions ... 45
References ... 46
Acknowledgments ... 48
Curriculum Vitae ... 49