National Digital Library of Theses and Dissertations in Taiwan
Detailed Record
Graduate Student: 徐懿志
Graduate Student (English): Yi-Jhih Syu
Title: 人臉偵測應用於遠距課程學生專注度分析之研究
Title (English): A Study on Facial Detection for the Concentration Analysis of Students in Remote Curriculum
Advisors: 洪盟峯, 謝欽旭, 郭書瑋
Advisors (English): Mong-Fong Horng, Chin-Shiuh Shieh, Shu-Wei Guo
Committee Members: 程毓明, 洪盟峯, 謝欽旭, 郭書瑋
Committee Members (English): Yuh-Ming Cheng, Mong-Fong Horng, Chin-Shiuh Shieh, Shu-Wei Guo
Oral Defense Date: 2024-01-29
Degree: Master's
Institution: 國立高雄科技大學 (National Kaohsiung University of Science and Technology)
Department: 電子工程系 (Department of Electronic Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Year of Publication: 2024
Graduation Academic Year: 112 (2023–2024)
Language: Chinese
Number of Pages: 72
Keywords (Chinese): 人臉偵測, 深度學習, 專注度分析, 遠距課程, 人臉網格
Keywords (English): facial detection, deep learning, concentration analysis, remote curriculum, face mesh
Record statistics:
  • Cited by: 0
  • Views: 57
  • Downloads: 0
  • Bookmarks: 0
In recent years, the spread of COVID-19 has significantly affected the operations of businesses and schools worldwide. Because of the virus's high contagiousness, many schools adopted online learning to reduce physical contact and slow the pandemic's spread. During online learning, however, teachers find it difficult to assess students' engagement as class proceeds, making it hard to tell whether students are paying attention and leading to suboptimal teaching effectiveness. A technology that could assess students' focus during online learning would let teachers monitor students' concentration in real time, helping them understand students' learning conditions and adjust their teaching methods to improve the effectiveness of online learning.
Therefore, this study employs two deep learning techniques for facial detection and data collection. Facial data analysis is then conducted to evaluate students' concentration during class. The first deep learning technique is YOLOv5, which involves training a facial detection model using students' online learning images. The trained model is then applied to detect faces in students' class videos. The presence or absence of detected faces and eyes is used to determine whether students are attentive. If no faces or eyes are detected, it is considered a lack of focus; otherwise, if both are detected, it is deemed focused attention. The second technique involves using MediaPipe's facial mesh model, which calculates changes in landmark points to determine students' behaviors in front of the screen, such as closing eyes, turning heads, or leaving seats. In the concentration assessment, the frequency and duration of inattentive behaviors are calculated to produce a concentration reference value as an indicator of students' focus level.
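The two decision steps described above can be sketched as follows: a frame counts as attentive only when both a face and eyes are detected, and the concentration reference value combines the frequency and total duration of inattentive runs. The function names, penalty weights, and frame rate below are illustrative assumptions, not the thesis's actual parameters.

```python
def frame_attentive(face_detected: bool, eyes_detected: bool) -> bool:
    """A frame is attentive only when both a face and eyes are detected;
    missing either (or both) marks the frame as inattentive."""
    return face_detected and eyes_detected

def concentration_score(frame_flags, fps=30,
                        penalty_per_event=0.05, penalty_per_second=0.01):
    """Combine the frequency (number of inattentive runs) and duration
    (total inattentive time) into a reference score in [0, 1]."""
    events = 0              # number of distinct inattentive runs
    inattentive_frames = 0  # total inattentive frames
    prev_attentive = True
    for attentive in frame_flags:
        if not attentive:
            inattentive_frames += 1
            if prev_attentive:  # a new inattentive run starts here
                events += 1
        prev_attentive = attentive
    seconds = inattentive_frames / fps
    return max(0.0, 1.0 - events * penalty_per_event
                        - seconds * penalty_per_second)
```

With these illustrative weights, a single one-second lapse at 30 fps costs 0.05 for the event plus 0.01 for the duration, so the score drops from 1.0 to 0.94; many short lapses are penalized more heavily than one long one of equal total duration, matching the idea of weighting both frequency and duration.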
According to the experimental results, the YOLOv5 model achieves an average accuracy of 99.8% in detecting faces, with a recall rate of 97.8% and a confidence level of 98.8%. In detecting eyes, the average accuracy is 99.8%, with a recall rate of 94.4% and a confidence level of 96.8%. The MediaPipe-based system can detect three actions: lowering the head, closing the eyes, and turning the head. Using these actions, students' concentration during online learning is analyzed. The average accuracy of facial detection is 99.9%, with a recall rate of 98% and a confidence level of 98.9%; of eye-closure detection, 98.2%, with a recall rate of 93.5% and a confidence level of 94.9%; of head-lowering detection, 99.2%, with a recall rate of 95.3% and a confidence level of 96.9%; and of head-turning detection, 95.9%, with a recall rate of 97.2% and a confidence level of 96.5%. The concentration assessment mechanism then provides corresponding evaluations. These experimental results demonstrate the feasibility and effectiveness of the proposed concentration analysis method in distance learning.
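The reported accuracy and recall figures follow the standard object-detection metrics computed from true positives (TP), false positives (FP), and false negatives (FN). A minimal sketch, with illustrative counts only (not the thesis's actual data):

```python
def precision(tp: int, fp: int) -> float:
    """Fraction of detections that are correct: TP / (TP + FP)."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of ground-truth objects that are found: TP / (TP + FN)."""
    return tp / (tp + fn)

# Hypothetical counts: 978 faces detected correctly,
# 2 false alarms, 22 faces missed.
print(round(precision(978, 2), 3))   # → 0.998
print(round(recall(978, 22), 3))     # → 0.978
```

Precision rewards avoiding false alarms, while recall rewards finding every face; a detector can trade one for the other, which is why the thesis reports both alongside a confidence level.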

Table of Contents
Thesis Oral Examination Approval Form (Chinese) II
Thesis Copyright Agreement III
Abstract (Chinese) IV
Abstract VI
Acknowledgments IX
Table of Contents X
List of Figures XIII
List of Tables XV
Chapter 1 Introduction 1
1.1 Research Motivation 1
1.2 Research Objectives 2
1.3 Thesis Organization 3
Chapter 2 Related Work 4
2.1 Comparison of Traditional Teaching, E-Learning, and Distance Learning 4
2.2 Facial Detection Methods 6
2.2.1 Object Detection 6
2.2.2 YOLOv5 7
2.2.3 MediaPipe 9
2.3 Concentration Analysis and Assessment 11
2.3.1 EEG Signal Detection Techniques 11
2.3.2 Skin Temperature Detection Techniques 12
2.3.3 Facial Expression, Gaze Direction, and Classroom Gesture and Behavior Detection 12
2.3.4 Face and Eye-Blink Detection 13
Chapter 3 Concentration Analysis and Assessment Method 14
3.1 Application Scenario and Technical Architecture 14
3.1.1 Process Scenario Description 14
3.1.2 Environment Architecture 14
3.1.3 Software Workflow 16
3.2 YOLOv5-Based Facial Detection Method 17
3.2.1 Face and Eye Feature Annotation 19
3.2.2 Training the Facial Detection Model 20
3.2.3 Facial Detection Model Testing 21
3.2.4 Inattentive Behavior Judgment 23
3.3 MediaPipe-Based Facial Detection Method 23
3.3.1 Facial Detection 24
3.3.2 Inattentive Behavior Judgment 25
3.4 In-Class Concentration Analysis 29
Chapter 4 Experiments 32
4.1 Experimental Method Description 32
4.1.1 Model Evaluation Metrics 32
4.1.2 Experimental Data and Environment 33
4.1.3 Experimental Design 34
4.2 YOLOv5-Based Concentration Assessment 34
4.2.1 Face and Eye Detection Accuracy 35
4.2.2 YOLOv5 Concentration Analysis and Assessment for Remote Courses 37
4.2.3 YOLOv5 Concentration Analysis for E-Learning 39
4.3 MediaPipe-Based Concentration Analysis and Assessment 41
4.3.1 Facial Detection Accuracy Results 43
4.3.2 Inattentive Behavior Detection Results 44
4.3.3 MediaPipe Concentration Analysis and Assessment for Remote Courses 45
4.3.4 MediaPipe Concentration Analysis and Assessment for E-Learning 47
4.4 Analysis of Experimental Results 49
4.4.1 Facial Detection Accuracy 49
4.4.2 Concentration Assessment Results 49
Chapter 5 Conclusion and Future Work 51
References 53

Electronic Full Text (internet release date: 2025-02-16)