Graduate Student: 林子怡 (ZI-YI LIM)
Title: 以視覺為基礎之師生互動行為分析系統
Title (English): Vision-based Analysis System of Interactions between Teacher and Students
Advisor: 廖珗洲 (HSIEN-CHOU LIAO)
Committee Members: 林春宏 (CHUEN-HORNG LIN), 廖珗洲 (HSIEN-CHOU LIAO), 鄭文昌 (WEN-CHANG CHENG)
Oral Defense Date: 2016-07-04
Degree: Master's
Institution: 朝陽科技大學 (Chaoyang University of Technology)
Department: 資訊工程系 (Computer Science and Information Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2016
Graduation Academic Year: 104 (2015-2016)
Language: Chinese
Pages: 59
Keywords (Chinese): 視覺處理, 教室錄影, 教師行為分析, 學生行為分析
Keywords (English): Visual processing, Classroom recording, Teacher's behavior analysis, Students' behavior analysis
Lecture recording systems mainly produce audiovisual materials for students to review after class. This study was motivated by an academic-industrial cooperation project: the partner company wished to record the classroom interactions between the teacher and students in a multi-camera environment and automatically merge the multiple recordings into a single video. In this study, one wide-angle camera is used on the teacher side, while two or more cameras are used on the student side, depending on the classroom size. Events generated by analyzing teacher-student interactions are recorded as tags on the corresponding camera's video, so that the company's existing lecture recording system can perform automatic editing based on these tags after the class ends.
For the teacher-side analysis, a set of areas is pre-defined within the camera's FOV (Field-of-View); the main goal is to determine which area the teacher is in. A GMM (Gaussian Mixture Model) is used to build the background model, and background subtraction is applied to the live image to extract foreground objects, the largest of which is regarded as the teacher. A Kalman filter is applied to the teacher's coordinates to suppress jitter, and finally a set of filtering rules confirms the area where the teacher is located.
The main goal of the student-side analysis is to detect student stand-up events. The same GMM method is used to extract foreground objects. The geometric features of each object are then filtered to confirm that it is a student, and the movement direction of the blob is further analyzed to determine whether the student has stood up.
The teacher and student events produced by the above analyses are transmitted to the lecture recording system via HTTP. After installation and testing by the company, the system developed in this study has been commercialized; the company named it "iTrace" and has added it to its service offerings.

Lecture recording is an important aid for students reviewing a lesson after class. In an academic-industrial cooperation project, the interactions between the teacher and students were to be recorded in a multi-camera environment. A wide-angle camera is installed for the teacher, and two or more cameras are installed for the students according to the size of the classroom. The interactions are analyzed to generate events, which place tags in the corresponding video's description. Based on these tags, all the videos recorded by the different cameras can then be blended automatically into a single video after the class.
A set of areas is pre-defined in the FOV (Field-of-View) of the teacher's camera. The purpose of the teacher-side video analysis is to determine the area in which the teacher is located. A GMM (Gaussian Mixture Model) is used to construct the background model and segment foreground objects, and the largest blob is regarded as the teacher. A Kalman filter keeps the teacher's coordinates stable. Finally, a set of rules is applied to confirm the detected area.
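The abstract does not include the implementation, so the following is only a minimal pure-Python sketch of the two downstream steps described above: taking the largest foreground blob as the teacher, and smoothing its coordinate with a Kalman filter (here a 1-D constant-position filter; all names and noise parameters are illustrative, not the thesis's values):

```python
def largest_blob(blobs):
    """Pick the largest foreground blob, which is regarded as the teacher.
    Each blob is a dict with an 'area' and a centroid x-coordinate 'cx'."""
    return max(blobs, key=lambda b: b["area"])

class Kalman1D:
    """Minimal 1-D constant-position Kalman filter to suppress jitter
    in the teacher's coordinate (cf. Welch & Bishop [12])."""
    def __init__(self, q=1e-3, r=0.5):
        self.x = None   # state estimate
        self.p = 1.0    # estimate variance
        self.q = q      # process noise
        self.r = r      # measurement noise
    def update(self, z):
        if self.x is None:          # first measurement initializes the state
            self.x = z
            return self.x
        self.p += self.q                 # predict step
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # correct with measurement z
        self.p *= (1.0 - k)
        return self.x

# Jittery per-frame centroid measurements of the detected blobs:
frames = [
    [{"area": 1200, "cx": 100.0}, {"area": 90, "cx": 40.0}],  # small noise blob ignored
    [{"area": 1210, "cx": 108.0}],
    [{"area": 1190, "cx": 101.0}],
]
kf = Kalman1D()
smoothed = [kf.update(largest_blob(f)["cx"]) for f in frames]
print([round(x, 1) for x in smoothed])  # smoothed trajectory stays between raw jumps
```

In practice the foreground blobs would come from a GMM background subtractor (e.g. the Stauffer-Grimson model cited as [11]); the sketch starts after that stage.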
The purpose of the student-side video analysis is to detect stand-up events. The same foreground segmentation method as in the teacher-side analysis is used. Geometric features are filtered to confirm that a blob is a student, and the blob's moving direction is then checked to determine whether the student has stood up.
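As a sketch of the student-side logic described above — a geometric filter on candidate blobs followed by a check on the centroid's vertical movement — the following illustrative snippet uses assumed thresholds and feature choices (the thesis's actual values are not given in the abstract):

```python
def is_student_blob(w, h, area, min_area=400, max_aspect=1.2):
    """Geometric filter: a candidate student blob must be large enough
    and not excessively wide (width/height aspect-ratio check).
    Thresholds are illustrative, not the thesis values."""
    return area >= min_area and (w / h) <= max_aspect

def stood_up(cy_history, rise_px=30):
    """Confirm a stand-up event when the blob centroid has moved upward
    (image y decreases) by more than rise_px over the observation window."""
    return (cy_history[0] - cy_history[-1]) > rise_px

print(is_student_blob(w=50, h=120, area=6000))  # tall, large blob -> True
print(stood_up([220, 205, 190, 175]))           # rose 45 px -> True
print(stood_up([220, 221, 219, 220]))           # sitting still -> False
```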
The teacher and student events described above are transmitted to the lecture recording system via HTTP. The system has been commercialized under the name "iTrace" and is now one of the company's service offerings.
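The abstract does not document the wire format of these events; the sketch below shows one plausible shape for a tag event POSTed to the recording system over HTTP (the endpoint URL and JSON fields are assumptions, not the company's actual protocol):

```python
import json
from urllib import request

RECORDER_URL = "http://recorder.local/api/tags"  # hypothetical endpoint

def make_tag_event(camera_id, event, timestamp):
    """Build a tag event for the lecture recording system. The JSON
    shape here is an assumption for illustration only."""
    return json.dumps({"camera": camera_id,
                       "event": event,
                       "time": timestamp}).encode("utf-8")

def send_tag(body):
    """POST the event body to the recorder (not executed in this sketch)."""
    req = request.Request(RECORDER_URL, data=body,
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)  # fires the actual HTTP call

body = make_tag_event("student-cam-2", "stand_up", "2016-07-04T10:15:30")
print(body.decode())
```

After the class, the recording system would look up these tags to decide which camera's footage to splice into the final single video.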

Table of Contents
Chinese Abstract
Abstract
Acknowledgments
Table of Contents
List of Tables
List of Figures
Chapter 1: Introduction
Chapter 2: Literature Review
Chapter 3: Teacher and Student Analysis Methods
  3.1 Environment Design
  3.2 Teacher Side
  3.3 Student Side
  3.4 Ambient Brightness Change Detection
Chapter 4: Experimental Analysis
  4.1 System Functions
    4.1.1 Analysis System Configuration Tool
    4.1.2 Message Dispatch
  4.2 Experimental Hardware and Software Environment
  4.3 User Interface Description
  4.4 System Testing and Troubleshooting
Chapter 5: Conclusion
References
Appendix A: Other Operation Interface Descriptions
Appendix B: Reasons for the Failure of the Optical Flow Method on the Student Side

List of Tables
Table 1: Summary of related research on automatic lecture recording systems
Table 2: Teacher-side camera parameter settings
Table 3: Teacher-side video profile parameter settings
Table 4: Student-side camera parameter settings
Table 5: Student-side video profile parameter settings
Table 6: Field descriptions of the system configuration main screen
Table 7: Camera status descriptions
Table 8: SES status descriptions
Table 9: Status bar symbol descriptions

List of Figures
Figure 1: Image captured by a camera in an actual classroom
Figure 2: Traditional lecture recording method [18]
Figure 3: Schematic of the classroom environment and hardware architecture
Figure 4: Teacher lecturing scenario
Figure 5: Teacher-side analysis flowchart
Figure 6: Rule 1: teacher moves significantly (LR: -1.000)
Figure 7: Rule 2: teacher has just entered the area and moves slightly (LR: -1.000)
Figure 8: Rule 3: teacher moves slightly (LR: -0.005)
Figure 9: Rule 4: teacher moves and multiple objects are on the podium (LR: 0.000)
Figure 10: Rule 5: teacher moves slightly and multiple objects are on the podium (LR: -1.000)
Figure 11: Rule 6: only the teacher is on the podium (LR: 0.000)
Figure 12: Teacher-side foreground detection
Figure 13: Detection of the object's centroid (green point)
Figure 14: Kalman filter sequence for stabilizing coordinates (blue points)
Figure 15: Confirmation process for the teacher entering an area
Figure 16: Student stand-up scenario
Figure 17: Actual student-side image (yellow line is the center line)
Figure 18: Student-side analysis flowchart
Figure 19: Student-side raw input image
Figure 20: Student-side image after ROI masking
Figure 21: Student-side foreground detection
Figure 22: Student-side filtering and extraction of the largest blob
Figure 23: Student-side stand-up behavior confirmation
Figure 24: Student-side candidate region detection and object intersection ratio check (standing)
Figure 25: Student-side candidate region detection and object intersection ratio check (moving)
Figure 26: Student-side confirmation that an object is standing
Figure 27: Student stand-up confirmation process
Figure 28: Image under normal brightness
Figure 29: ROI region for brightness detection
Figure 30: Value changes after the lights are turned off
Figure 31: ROI region for brightness detection (lights off)
Figure 32: Foreground detected after the lights are turned off
Figure 33: IP Cam video settings
Figure 34: IP Cam video profile settings
Figure 35: Student-side experimental classroom environment
Figure 36: Launching the iTrace shortcut (must be run as administrator)
Figure 37: Opening the system requires a password; the default is blank
Figure 38: System main screen
Figure 39: Recording server configuration screen
Figure 40: Camera configuration screen
Figure 41: ROI configuration screen
Figure 42: Deleting and editing an ROI
Figure 43: Steps for editing an ROI
Figure 44: Live View main screen
Figure 45: Misjudgment caused by the ROI configuration
Figure 46: Teacher leaving the ROI range or camera problems
Figure 47: Tracking error caused by a student coming to the podium during the lecture
Figure 48: Screenshot of the partner company's product web page [19]
Figure 49: Record viewing screen
Figure 50: System configuration main screen
Figure 51: Opening the status window (click the iTrace icon and select "Show Status")
Figure 52: SES and camera analysis status window
Figure 53: Viewing the iTrace version via the About dialog
Figure 54: Student-side behavior analysis results (actual classroom)


[1]M. Bianchi, "Automatic Video Production of Lectures Using an Intelligent and Aware Environment," Proceedings of the 3rd international conference on Mobile and ubiquitous multimedia, pp. 117-123, 2004.
[2]F. Lampi, S. Kopf, M. Benz, and W. Effelsberg, "A Virtual Camera Team for Lecture Recording," IEEE MultiMedia, Vol. 15, pp. 58-62, 2008.
[3]H. P. Chou, J. M. Wang, C. S. Fuh, S. C. Lin, and S. W. Chen, "Automated Lecture Recording System," International Conference on System Science and Engineering, pp. 167-172, 2010.
[4]A. Ranjan, R. Henrikson, J. Birnholtz, R. Balakrishnan, and D. Lee, "Automatic Camera Control Using Unobtrusive Vision and Audio Tracking," Proceedings of Graphics Interface, pp. 47-54, 2010.
[5]A. L. Ronzhin, "An Audiovisual System of Monitoring of Participants in the Smart Meeting Room," Proceeding of the 9th Conference of Open Innovations Community Fruct, pp. 127-132, 2011.
[6]M. B. Winkler, K. M. Hover, A. Hadjakos, and M. Muhlhauser, "Automatic Camera Control for Tracking a Presenter during a Talk," IEEE International Symposium on Multimedia, pp. 471-476, 2012.
[7]H. C. Liao, and M. H. Pan, "An Automatic Lecture Recording System Using Pan-Tilt-Zoom Camera to Track Lecturer and Handwritten Data," International Journal of Applied Science and Engineering, Vol. 13, No. 1, pp. 1-18, 2015.
[8]J. L. Guo, C. Y. Fang, Y. C. Li, and S. W. Chen, "A Video-Based Face Detection Method Using Graph Cut Algorithm in Classrooms," Dept. of Computer Science and Information Engineering, National Taiwan Normal University, 2014.
[9]D. Hulens, T. Goedeme, and T. Rumes, "Autonomous Lecture Recording with a PTZ Camera While Complying with Cinematographic Rules," Canadian Conference on Computer and Robot Vision (CRV), pp. 371-377, 2014.
[10]S. K. D’Mello, A. M. Olney, N. Blanchard, X. Sun, and B. Ward, "Multimodal Capture of Teacher-Student Interactions for Automated Dialogic Analysis in Live Classrooms," Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, pp. 557-566, 2015.
[11]C. Stauffer, and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 2, pp. 247-252, 1999.
[12]G. Welch, and G. Bishop, “An Introduction to the Kalman Filter,” University of North Carolina, Chapel Hill, Technical Report, TR95-041, 2004, 16 pages.
[13]R. M. Haralick, S. R. Sternberg, and X. Zhuang, "Image Analysis Using Mathematical Morphology," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-9, No. 4, 1987.
[14]N. Otsu, "A Threshold Selection Method from Gray-Level Histograms," IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-9, No. 1, 1979.
[15]B. D. Lucas, and T. Kanade, “An Iterative Image Registration Technique with an Application to Stereo Vision,” Proc. DARPA Image Understanding Workshop, pp. 121-130, 1981.
[16]A. Bruhn, J. Weickert, and C. Schnörr, “Lucas/Kanade Meets Horn/Schunck: Combining Local and Global Optic Flow Methods,” International Journal of Computer Vision, Vol. 61, pp. 211-231, 2005.
[17]B.D. Lucas, “Generalized Image Matching by the Method of Differences,” Ph.D. dissertation, Dept. of Computer Science, Carnegie-Mellon University, 1984.
[18]Chaoyang University of Technology, Office of Library and Information Services: http://system.cyut.edu.tw/tela/t2415.html
[19]BlueEyes Technology, iTrace: http://blueeyes.com.tw/iLearning_iTrace.php

Electronic full text (public Internet release date: 2021-07-04)