National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 黃筠庭
Author (English): YUN-TING HUANG
Title: 結合動態興趣物件與凝視行為編輯摘要影片
Title (English): Abstracting Video Based on the Objects of Interest and Gaze Tracking
Advisors: 黃博俊, 陳惠惠
Advisors (English): BOR-JIUNN HWANG, HUI-HUI CHEN
Degree: Master's
Institution: Ming Chuan University (銘傳大學)
Department: Master's Program, Department of Computer and Communication Engineering (資訊傳播工程學系碩士班)
Discipline: Communication
Field: General mass communication
Document type: Academic thesis
Year of publication: 2015
Graduation academic year: 103 (2014-2015)
Language: Chinese
Number of pages: 39
Keywords (Chinese): 動態興趣物件, 物件追蹤, 視覺追蹤, 摘要影片, 適應性
Keywords (English): object tracking, abstracting video, gaze tracking, dynamic objects of interest, adaptive
Statistics:
  • Cited by: 0
  • Views: 117
  • Rating: (none)
  • Downloads: 0
  • Saved to My Bibliography: 0
Abstract (Chinese, translated): An abstracting video is produced by applying a filter function to edit the viewer's dynamic objects of interest (DOOI), and it is an important technique for analyzing viewers' gaze behavior. The workflow uses gaze-tracking technology to obtain the user's gaze information and object-tracking technology to obtain object positions, maps these onto the dynamic objects of interest the user is watching, and finally applies the filter function to edit the abstracting video. Because gaze-point estimation and object-coordinate acquisition are the key factors affecting the final result, this thesis improves DOOI extraction by raising object-tracking accuracy and by associating frames that contain the same object, and it proposes a filter function for editing the abstracting video. An adaptive object tracking algorithm is proposed to raise tracking accuracy: it combines an object's color, shape, and texture features and adjusts their weights dynamically; to handle failures of path prediction, it adjusts the predicted object center using the density of gaze fixations; finally, the proposed object-association method links the frames of the same object together, improving the accuracy of DOOI extraction.
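The overall workflow (gaze point → tracked object box → DOOI → filter function) can be sketched as below. This is a minimal illustration only: the box representation, the `min_dwell` dwell-time threshold, and the helper names are assumptions for the sketch, not the thesis's actual filter function.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box of a tracked object in one frame."""
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

def dooi_filter(gaze_points, object_boxes, min_dwell=3):
    """Map each frame's gaze point to the object box it falls inside,
    then keep runs of frames where the same object holds the gaze for
    at least `min_dwell` consecutive frames (a toy filter function).

    gaze_points:  list of (x, y) gaze coordinates, one per frame
    object_boxes: list of {object_id: Box} dicts, one per frame (from the tracker)
    returns:      indices of frames selected for the abstracting video
    """
    # Per-frame DOOI: the id of the object the gaze lands on, or None.
    hits = []
    for (px, py), boxes in zip(gaze_points, object_boxes):
        hits.append(next((oid for oid, b in boxes.items() if b.contains(px, py)), None))

    # Keep only sufficiently long runs of attention on one object.
    selected, run_start = [], 0
    for i in range(1, len(hits) + 1):
        if i == len(hits) or hits[i] != hits[run_start]:
            if hits[run_start] is not None and i - run_start >= min_dwell:
                selected.extend(range(run_start, i))
            run_start = i
    return selected
```

For example, four consecutive frames with the gaze inside one object's box followed by a frame with the gaze elsewhere would select the first four frames when `min_dwell=3`.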
Abstract (English): An abstracting video is obtained by a filter function based on the user's dynamic objects of interest (DOOI). It is an important technique for analyzing users' visual behaviors. Gaze-tracking and object-tracking techniques are used to obtain the user's gaze information and the object coordinates; the filter function then edits the abstracting video after mapping the user's DOOI. The estimates of the gaze point and the object coordinates are the key factors affecting the final results. This thesis proposes an adaptive object tracking algorithm to improve object-tracking accuracy. Using a dynamic-weight scheme and gaze-point density information, possible areas where the objects may appear are continually searched to locate and trace the target objects, and the method proves effective in tracking non-inertia-driven moving objects. Finally, an object-association mapping technique is used to improve the accuracy of DOOI detection.
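As a rough illustration of the dynamic-weight scheme mentioned in the abstract, the sketch below fuses color, shape, and texture similarity scores and nudges each weight toward the features that matched reliably in the previous frame. The linear update rule and the `lr` parameter are illustrative assumptions, not the thesis's actual formula.

```python
def fuse(scores, weights):
    """Combined similarity of a tracking candidate: weighted sum of the
    per-feature scores (color, shape, texture), each assumed in [0, 1]."""
    return sum(w * s for w, s in zip(weights, scores))

def update_weights(weights, reliabilities, lr=0.5):
    """Move each weight toward its feature's recent reliability (how well
    that feature matched in the last frame), then renormalize so the
    weights sum to 1. A simple stand-in for the dynamic-weight scheme."""
    raw = [(1 - lr) * w + lr * r for w, r in zip(weights, reliabilities)]
    total = sum(raw)
    return [v / total for v in raw]
```

With equal initial weights, a frame where the color cue matches well while shape and texture do not shifts weight toward color for the next frame, so a temporarily unreliable cue (e.g. texture under motion blur) contributes less to the fused score.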
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgments
Contents
List of Tables
List of Figures
Chapter 1  Introduction
1.1 Research background and motivation
1.2 Research objectives
1.3 Problem discussion
1.4 Thesis organization
Chapter 2  Related Work
2.1 Gaze-tracking techniques and applications
2.2 Object-tracking techniques and applications
2.3 Content-based image retrieval
2.4 Abstracting video
Chapter 3  Methodology
3.1 Adaptive object tracking algorithm
3.2 Adjusting the predicted object center
3.3 Object association
3.4 Filter function
Chapter 4  Experimental Results
4.1 Dynamic-weight object tracking
4.2 Adaptive object tracking algorithm
4.3 Abstracting video
Chapter 5  Conclusions and Future Work
References