
臺灣博碩士論文加值系統


詳目顯示

研究生:薛承祐
研究生(外文):Cheng Yu Hsueh
論文名稱:透過視訊攝影機及MTCNN建立眼球追蹤用於評估槍戰遊戲攻防意識之研究
論文名稱(外文):A Study on Evaluating the Offensive and Defensive Awareness in Shooter Games through the Web-CAM Eye-Tracking Technology by Using MTCNN
指導教授:洪啟舜、曾建維
指導教授(外文):Jason C. Hung, Jian-Wei Tzeng
口試委員:王俊嘉
口試委員(外文):Chun-Chia Wang
口試日期:2024-06-13
學位類別:碩士
校院名稱:國立臺中科技大學
系所名稱:資訊工程系碩士班
學門:工程學門
學類:電資工程學類
論文種類:學術論文
論文出版年:2024
畢業學年度:112
語文別:中文
論文頁數:73
中文關鍵詞:Web-CAM眼球資訊蒐集、攻防意識、槍戰遊戲、ROI自動框取、超參數調整
外文關鍵詞:Web-CAM eye collection; Offensive and defensive awareness; Shooter games; Automatic ROI extraction; Hyperparameter tuning
相關次數:
  • 被引用:0
  • 點閱:9
  • 評分:
  • 下載:1
  • 書目收藏:0
近年來槍戰遊戲在運動科技領域百花齊放,玩家透過攻擊敵人或躲避攻擊取得勝利,而不同玩家有著不同的操作行為,攻與防因此成為勝敗的重要關鍵;然而既有文獻尚未深入探討攻防意識之概念。本論文以攻防意識為主軸,整合眼球追蹤與槍戰遊戲物件偵測來分析玩家操作行為,並採用自行編製的攻防意識問卷進行評估。本研究以新穎的方式進行眼球追蹤:使用未經改裝的視訊攝影機,整合MTCNN (Multi-task Cascaded Convolutional Networks) 模型蒐集臉部與眼球資訊,透過自動化校正將RMSE降至51.62;接著以CNN模型訓練眼球注視方向模型,訓練集準確率達96%、測試集準確率達92%;最終以無監督式眼球追蹤方式蒐集眼球座標,建置眼動熱區圖與眼動掃視路徑,以了解玩家的注視狀況與眼球動向,並同時蒐集眼球大小與臉部移動等資訊,使結果更貼近真實情境。
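下面附上一段極簡的示意程式 (非本論文之原始程式碼;以 OpenCV 讀取 Web-CAM 影像、以 mtcnn 套件偵測臉部與雙眼座標,皆為假設性的實作選擇),示範上述以 MTCNN 蒐集臉部與眼球資訊的基本流程。

```python
# 示意用途:以 Web-CAM 擷取單張影像,並以 MTCNN 取得臉部框與左右眼座標。
# (假設性範例,套件選擇與參數皆非論文原始設定)
import cv2
from mtcnn import MTCNN

detector = MTCNN()
cap = cv2.VideoCapture(0)          # 開啟預設的視訊攝影機

ret, frame = cap.read()            # 讀取一張畫面
if ret:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # MTCNN 以 RGB 影像為輸入
    for face in detector.detect_faces(rgb):
        x, y, w, h = face["box"]                   # 臉部邊界框 (左上角座標與寬高)
        left_eye = face["keypoints"]["left_eye"]   # 左眼座標 (x, y)
        right_eye = face["keypoints"]["right_eye"] # 右眼座標 (x, y)
        print(face["confidence"], left_eye, right_eye)

cap.release()
```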

本研究採用自編的攻防意識問卷,對攻防意識進行量化分析,並透過分群瞭解高資歷玩家與普通玩家之差異;問卷結果顯示,高資歷玩家的攻擊意識明顯高於普通與資歷較低的玩家。同時,本研究建立一個涵蓋十一種物件類別的PUBG槍戰遊戲資料集,透過YOLOv7模型進行興趣區域 (ROI) 的自動框取,模型準確率為87%;再透過OPTUNA尋找最佳超參數,將單一物件的辨識準確率自39%提升至60%。最後,本研究以玩家死亡前一分鐘的遊玩影片,結合YOLOv7物件偵測與眼球追蹤技術,量化實際遊玩時的攻防意識並分析玩家的操作行為。結果顯示,生存較久的玩家對敵人的專注度較高,且會關注攻擊武器與防禦物品;另一類以遠距攻擊為主、較常注視地圖與瞄準鏡的玩家,存活時間亦較長;反之,若同時注視多個物件或未對單一物件保持一定專注,則不易在遊戲中存活較久。
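以下為以 OPTUNA 搜尋超參數的極簡示意 (假設性範例,非論文原始設定;其中 train_and_evaluate() 為佔位的假設函式,實際應換成 YOLOv7 的訓練與驗證流程,並回傳 mAP 作為最佳化目標)。

```python
# 示意用途:以 Optuna 對學習率、動量與批次大小進行搜尋,目標為最大化驗證 mAP。
import optuna

def train_and_evaluate(params):
    # 佔位的假設函式:實際應在此呼叫 YOLOv7 訓練流程並回傳驗證集 mAP
    return 0.0

def objective(trial):
    params = {
        "lr": trial.suggest_float("lr", 1e-4, 1e-1, log=True),        # 學習率
        "momentum": trial.suggest_float("momentum", 0.6, 0.98),       # 動量
        "batch_size": trial.suggest_categorical("batch_size", [8, 16, 32]),
    }
    return train_and_evaluate(params)

study = optuna.create_study(direction="maximize")  # mAP 越高越好
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```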
In recent years, shooting games have flourished in sports technology. Players achieve victory by attacking enemies or evading attacks, and the operational behaviors of different players significantly affect the outcome of a match. However, the existing literature has yet to delve deeply into offensive and defensive awareness. This study investigates offensive and defensive awareness, integrating eye-tracking technology and shooting-game object detection to analyze player behaviors, and employs a self-developed offensive and defensive awareness questionnaire for evaluation. The research takes an innovative approach to eye tracking, using an unmodified webcam in conjunction with a multi-task cascaded convolutional networks (MTCNN) model to collect facial and eye information. Automated calibration reduced the root mean square error (RMSE) to 51.62. Subsequently, a convolutional neural network (CNN) model was trained to classify eye-gaze direction, achieving 96% accuracy on the training set and 92% on the test set. Finally, an unsupervised eye-tracking method was used to collect eye coordinates and to construct eye-tracking heatmaps and scan paths, revealing players' gaze patterns and eye movements. Eye size and facial movement data were also collected to better reflect real-world conditions.
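As a rough illustration of the eye-direction model described above, the sketch below builds a small Keras CNN that classifies cropped eye images into gaze-direction classes; the input size, layer widths, and the nine-class output are assumptions, not the thesis's original architecture.

```python
# Illustrative sketch only: a small CNN for gaze-direction classification.
from tensorflow.keras import layers, models

NUM_DIRECTIONS = 9  # assumed number of gaze-direction classes

model = models.Sequential([
    layers.Input(shape=(36, 60, 1)),             # grayscale eye crop (assumed size)
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_DIRECTIONS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_x, train_y, validation_data=(test_x, test_y), epochs=20)
```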

This study designed an offensive and defensive awareness questionnaire for quantitative analysis, using clustering to understand the differences between expert and ordinary players. The questionnaire results indicate that expert players have significantly higher offensive awareness than ordinary and less experienced players. Additionally, this study established a PUBG shooting-game dataset containing eleven object categories and employed the YOLOv7 model for automatic region of interest (ROI) extraction. The YOLOv7 model achieved an accuracy of 87%, and by using OPTUNA to search for the optimal hyperparameters, the accuracy of single-object recognition was improved from 39% to 60%. Furthermore, this study used videos of the minute preceding each player's death, integrating YOLOv7 object detection with eye-tracking technology to quantitatively analyze offensive and defensive awareness during actual play. The results show that players who survived longer focused more on enemies, weapons, and defensive items, or, in the case of long-range attacks, paid more attention to the map and scope. Conversely, players who focused on multiple objects simultaneously or did not maintain focus on a single object tended to have shorter survival times.
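As a minimal sketch of how the eye-tracking and object-detection outputs can be combined, the snippet below counts how many frames a player's gaze point falls inside each detected object's bounding box; the data layout, class names, and coordinates are illustrative assumptions rather than the thesis's actual pipeline.

```python
# Illustrative sketch: count gaze-on-ROI frames from per-frame gaze points and boxes.
from collections import Counter

def gaze_fixation_frames(gaze_points, detections):
    """gaze_points: one (x, y) gaze coordinate per frame.
    detections: per-frame lists of (class_name, x1, y1, x2, y2) bounding boxes."""
    counts = Counter()
    for (gx, gy), boxes in zip(gaze_points, detections):
        for cls, x1, y1, x2, y2 in boxes:
            if x1 <= gx <= x2 and y1 <= gy <= y2:
                counts[cls] += 1   # the gaze lies inside this object's box
    return counts

# Toy example: a single frame in which the gaze lands on the "enemy" box.
print(gaze_fixation_frames(
    [(640, 360)],
    [[("enemy", 600, 300, 700, 420), ("map", 0, 0, 200, 200)]],
))
```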
目次
中文摘要 i
英文摘要 ii
目次 iii
表目錄 v
圖目次 vii
一、緒論 1
1.1研究背景 1
1.2研究動機 4
1.3研究問題 5
二、文獻探討 6
2.1槍戰遊戲行為 6
2.2槍戰遊戲及槍戰遊戲ROI 7
2.3眼球追蹤設備 10
2.4眼球追蹤現況功能 13
2.5類神經網路 15
三、實驗方法 19
3.1人臉辨識與眼球捕捉 20
3.2校正方法 21
3.3眼球及臉部特徵蒐集 22
3.4眼球方向偵測 24
3.5 攻防意識問卷 26
3.6 槍戰遊戲物件偵測 29
3.7槍戰遊戲與攻防意識評估 31
四、實驗結果與討論 34
4.1問卷分析 34
4.1.1性別交叉分析 34
4.1.2教育程度交叉分析 35
4.1.3遊戲時長與遊戲經歷交叉分析 37
4.1.4教育程度與遊玩經歷、時間交叉分析 37
4.1.5性別與遊玩經歷、時間交叉分析 39
4.2問卷攻防意識評估 40
4.2.1攻防意識評估-攻擊 40
4.2.2攻防意識評估-防禦 46
4.3資歷成績分布 52
4.4環境設置 53
4.4.1 即時捕捉臉部與螢幕錄影 53
4.5實驗模型分析與驗證 54
4.5.1眼球模型訓練與驗證 54
4.5.2 YOLOv7模型建置 55
4.5.3 OPTUNA超參數優化之參數最佳化 56
4.5.4 YOLOv7模型框取狀況 59
4.5.5槍戰遊戲物件出現序列 60
4.5.6眼動熱區圖 60
4.5.7眼動掃視路徑 61
4.5.8 YOLOv7合併眼球掃視路徑 61
4.5.9槍戰遊戲凝視物件總幀數 61
4.6攻防意識評估 62
4.6.1 不同玩家遊玩槍戰遊戲凝視ROI之分析 62
五、結論及建議 63
5.1研究結論 63
5.2研究限制 63
5.3研究建議 64
5.4未來展望 65
六、參考文獻 66
附件問卷 69

表目錄
表1知名槍戰遊戲對比 7
表2 眼動追蹤方法 10
表3眼球追蹤裝置優點 13
表4眼球追蹤裝置缺點 13
表5 左眼球與眼睛區域X與Y軸的平均誤差 21
表6校正點的眼球座標蒐集 22
表7眼球注視方位資料集 25
表8 CNN模型參數設定 26
表9填卷人數 27
表10教育程度 27
表11槍戰遊戲資歷 28
表12槍戰遊戲時長 28
表13槍戰ROI資料集 29
表14 YOLOv7超參數 30
表15性別與遊戲經歷交叉分析 34
表16 性別與遊戲經歷交叉分析 35
表17 教育程度與遊玩經歷交叉分析 35
表18 教育程度與遊玩時間交叉分析 36
表19遊玩經歷與遊玩時間交叉分析 37
表20教育程度與遊玩經歷、時間交叉分析 38
表21性別與遊玩經歷、時間交叉分析 39
表22攻擊意識一到五題 40
表23攻擊意識六到十題 41
表24攻擊意識十一到十五題 42
表25攻擊意識十六到十八題 43
表26攻擊意識第十九題至第二十二題 43
表27攻擊意識第二十三題至第二十五題 44
表28不同攻擊意識的槍種喜好 45
表29攻擊意識問卷中玩家建議新增題目方向 45
表30防禦意識強第一題至第五題 46
表31防禦意識強第六題至第十題 47
表32防禦意識第十一題至第十四題 48
表33 防禦意識第十五題至第十七題 49
表34防禦意識第十八題至第二十一題 50
表35防禦意識第二十二題至第二十四題 51
表36攻防意識高 52
表37攻防意識低 52
表38攻防意識總和分數 52
表39模型訓練規格表 53
表40眼球追蹤紀錄特徵 54
表41眼球追蹤紀錄特徵 54
表42 YOLOv7混淆矩陣解釋 55
表43 YOLOv7評估方法 55
表44最佳超參數 59
表45驗證槍戰物件準確率及框取狀況 59
表46玩家遊玩槍戰遊戲凝視狀況 62


圖目錄
圖1槍戰遊戲場景及流程 8
圖2武器分類 9
圖3防禦武器與其他裝備 9
圖4交通載具 9
圖5眼球角度 10
圖6下巴架式眼動儀設備 11
圖7穿戴式眼動儀設備 11
圖8 Web-CAM眼球追蹤 12
圖9五點校正 13
圖10眼動熱區圖 14
圖11眼動掃視路徑 14
圖12 CNN卷積神經網路 15
圖13卷積特徵萃取 15
圖14 ReLU激活函數 16
圖15最大池化萃取的特徵 16
圖16全連接層 17
圖17 YOLOv7架構 18
圖18 DenseNet與VoVNet差異 18
圖19槍戰遊戲攻防意識架構 19
圖20 MTCNN臉部特徵截取 20
圖21 MTCNN眼球區域捕捉 20
圖22 MTCNN加入霍夫圓形框取效果 20
圖23校正點9點轉變為13點 21
圖24自動化校正點方法 22
圖25攝影機計算人臉移動的距離 22
圖26眼球大小蒐集 23
圖27眼角蒐集 23
圖28眼球座標映射 24
圖29眼球座標轉換為眼球區塊座標 24
圖30眼球注視方位模型流程圖 25
圖31眼球方位預測 25
圖32問卷攻擊意識定義 26
圖33問卷防禦意識定義 27
圖34 槍戰遊戲高低資歷分群 28
圖35 YOLOv7及超參數優化流程 29
圖36 YOLOv7槍戰遊戲ROI框取 30
圖37 OPTUNA超參數優化 31
圖38槍戰遊戲攻防意識評估 31
圖39 攻防意識評估流程圖 32
圖40 眼動掃視路徑繪製 32
圖41 眼動熱區圖繪製 33
圖42性別與遊戲經歷交叉分析視覺化圖 34
圖43 性別與遊戲時長交叉分析視覺化圖 35
圖44 教育程度與遊玩經歷交叉分析視覺化圖 36
圖45 教育程度與遊玩經歷交叉分析視覺化圖 36
圖46遊玩經歷與遊玩時間交叉分析視覺化圖 37
圖47 性別與遊玩經歷、時間交叉分析視覺化圖 39
圖48攻擊意識一到五題折線圖 40
圖49攻擊意識六到十題折線圖 41
圖50 攻擊意識十一到十五題折線圖 42
圖51 攻擊意識十六到十八題折線圖 43
圖52攻擊意識第十九題至第二十二題折線圖 44
圖53攻擊意識第二十三題至第二十五題折線圖 44
圖54防禦意識強第一題至第五題 46
圖55防禦意識強第六題至第十題 47
圖56 防禦意識第十一題至第十四題折線圖 48
圖57 防禦意識第十五題至第十七題折線圖 49
圖58 防禦意識第十八題至第二十一題折線圖 50
圖59防禦意識第二十二題至第二十四題折線圖 51
圖60 實驗時臉部捕捉與螢幕錄影 53
圖61實驗建置框架與眼球行為蒐集 54
圖62眼球視覺方向準確率 55
圖63 YOLOv7槍戰遊戲混淆矩陣 56
圖64 YOLOv7槍戰遊戲評估指標 56
圖65 OPTUNA超參數優化之第一次實驗其混淆矩陣 57
圖66 OPTUNA超參數優化之第一次實驗其評估指標 57
圖67 OPTUNA超參數最佳模型之混淆矩陣 58
圖68 OPTUNA超參數最佳模型之評估指標 58
圖69槍戰遊戲物件出現的時間序列 60
圖70不同的玩家遊玩槍戰遊戲眼動熱區圖 60
圖71不同的玩家遊玩槍戰遊戲眼動掃視路徑 61
圖72 YOLOv7合併眼球掃視路徑 61
圖73 槍戰遊戲物件總凝視幀數 62