
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Graduate Student: 廖廷浩
Graduate Student (English): Ting-Hao Liao
Thesis Title: 跨域知識遷移學習於人體骨架動作辨識
Thesis Title (English): Cross-Domain Knowledge Transfer for Skeleton-Based Action Recognition
Advisors: 鄭士康, 陳駿丞
Advisors (English): Shyh-Kang Jeng, Jun-Cheng Chen
Oral Defense Committee: 歐陽明, 王鈺強, 傅楸善
Oral Defense Committee (English): Ouh-Young Ming, Yu-Chiang Wang, Chiou-Shann Fuh
Defense Date: 2021-01-14
Degree Type: Master's
University: National Taiwan University
Department: Data Science Degree Program
Discipline: Computer Science
Field: Software Development
Thesis Type: Academic thesis
Publication Year: 2021
Graduation Academic Year: 109 (2020–2021)
Language: English
Number of Pages: 23
Chinese Keywords: 骨架動作識別 (skeleton action recognition), 跨域遷移學習 (cross-domain transfer learning), 圖形卷積 (graph convolution)
Foreign Keywords: Action Recognition, Skeleton, Transfer, Cross-domain
DOI: 10.6342/NTU202100362
Usage Statistics:
  • Cited: 0
  • Views: 121
  • Rating: (not yet rated)
  • Downloads: 0
  • Bookmarked: 0
Abstract (translated from the Chinese): There are currently many datasets for skeleton-based action recognition, but they differ from one another in many ways, such as camera viewpoints, the sets of body keypoints used, and the kinds of actions covered. Models are usually trained and tested on a single dataset, so knowledge from other datasets typically goes unused. To address this problem, we propose a cross-domain knowledge transfer model that combines a gradient reversal layer with graph convolutional networks to effectively transfer knowledge from one dataset and use it to improve results on another. In experiments transferring from NTU RGB+D 60 to other datasets, the proposed method substantially improves accuracy on those datasets and surpasses the current best spatio-temporal graph convolutional methods trained on the target dataset alone, demonstrating the impact of our approach.
Abstract (English): For skeleton-based action recognition, there are many different datasets; however, since there are also many differences between skeleton action datasets, including viewpoints, the number of available joints per skeleton, the types of actions, etc., we can usually only train an individual model for each dataset and cannot effectively leverage the knowledge from one dataset in another. To address this issue, we propose a cross-domain knowledge transfer module based on a gradient reversal layer for graph convolutional networks, which effectively transfers knowledge from one domain to another. In extensive experiments transferring from NTU RGB+D 60 to other datasets, the proposed approach achieves significantly improved results with different state-of-the-art spatio-temporal graph convolutional networks compared with the same networks trained on the target dataset only, demonstrating the effectiveness of the proposed approach.
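The gradient reversal layer (GRL) at the core of the proposed transfer module can be summarized in a few lines of PyTorch. The sketch below follows the general formulation of Ganin and Lempitsky [7] rather than the thesis's own code, which is not reproduced on this page; the names GradReverse, grad_reverse, and lambd are illustrative assumptions.

import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales the gradient by -lambd in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd          # save the reversal strength for the backward pass
        return x.view_as(x)        # behave as the identity going forward

    @staticmethod
    def backward(ctx, grad_output):
        # Reversing the gradient makes the feature extractor maximize the
        # domain classifier's loss, pushing it toward domain-invariant features.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

In a typical setup of this kind, skeleton features from the graph convolutional backbone pass through grad_reverse on their way to a domain classifier, while the action classifier consumes the same features directly; minimizing both losses then trains the backbone to produce features that are discriminative for actions yet indistinguishable across the source and target skeleton datasets.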
Verification Letter from the Oral Examination Committee
Acknowledgements
Abstract (Chinese)
Abstract
Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Background and Motivation
1.2 Research Objective
1.3 Contribution
Chapter 2 Related Work
2.1 Skeleton-based Action Recognition
2.2 Graph Convolutional Network on Skeleton Graphs
2.3 Domain Adaptation
Chapter 3 The Proposed Approach
3.1 Skeletal Adjacency Matrix and Graph Convolution Network
3.2 Algorithm Overview
3.3 Cross-Domain Knowledge Transfer Layer
3.4 Domain Adaptation
3.5 Action Classification
3.6 Overall Performance
Chapter 4 Experiment
4.1 Datasets
4.2 Implementation Detail
4.3 Overall Performance
4.4 Feature Alignment
4.5 Ablation Study
4.6 Relationship of Skeletons
Chapter 5 Conclusion
References
[1] A. Shahroudy, J. Liu, T.-T. Ng, and G. Wang. NTU RGB+D: A large scale dataset for 3D human activity analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[2] Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei, and Y. Sheikh. OpenPose: Realtime multi-person 2D pose estimation using part affinity fields. CoRR, abs/1812.08008, 2018.
[3] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
[4] Y. Du, W. Wang, and L. Wang. Hierarchical recurrent neural network for skeleton based action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1110–1118, 2015.
[5] H. Fang, S. Xie, and C. Lu. RMPE: Regional multi-person pose estimation. CoRR, abs/1612.00137, 2016.
[6] G. Hu, B. Cui, and S. Yu. Skeleton-based action recognition with synchronous local and non-local spatio-temporal learning and frequency attention. CoRR, abs/1811.04237, 2018.
[7] Y. Ganin and V. Lempitsky. Unsupervised domain adaptation by backpropagation. In Proceedings of the International Conference on Machine Learning (ICML), 2015.
[8] L. Shi, Y. Zhang, J. Cheng, and H. Lu. Skeleton-based action recognition with directed graph neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[9] C. Li, Q. Zhong, D. Xie, and S. Pu. Skeleton-based action recognition with convolutional neural networks. CoRR, abs/1704.07595, 2017.
[10] C. Li, Q. Zhong, D. Xie, and S. Pu. Co-occurrence feature learning from skeleton data for action recognition and detection with hierarchical aggregation. CoRR, abs/1804.06055, 2018.
[11] S. Li, W. Li, C. Cook, C. Zhu, and Y. Gao. Independently recurrent neural network (IndRNN): Building a longer and deeper RNN. CoRR, abs/1803.04831, 2018.
[12] S. Lin, Y. Lin, C. Chen, and Y. Hung. Recognizing human actions with outlier frames by observation filtering and completion. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 13(3):28, 2017.
[13] C. Liu, Y. Hu, Y. Li, S. Song, and J. Liu. PKU-MMD: A large scale benchmark for continuous multi-modal human action understanding. arXiv preprint arXiv:1703.07475, 2017.
[14] J. Liu, A. Shahroudy, D. Xu, and G. Wang. Spatio-temporal LSTM with trust gates for 3D human action recognition. CoRR, abs/1607.07043, 2016.
[15] Z. Liu, H. Zhang, Z. Chen, Z. Wang, and W. Ouyang. Disentangling and unifying graph convolutions for skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
[16] S. Yan, Y. Xiong, and D. Lin. Spatial temporal graph convolutional networks for skeleton-based action recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, 2018.
[17] L. Shi, Y. Zhang, J. Cheng, and H. Lu. Two-stream adaptive graph convolutional networks for skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[18] S. Song, C. Lan, J. Xing, W. Zeng, and J. Liu. An end-to-end spatio-temporal attention model for human action recognition from skeleton data. CoRR, abs/1611.06067, 2016.
[19] T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, et al. Microsoft COCO: Common objects in context. In European Conference on Computer Vision (ECCV), 2014.
[20] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, et al. The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.
[21] J. Wang, Z. Liu, Y. Wu, and J. Yuan. Mining actionlet ensemble for action recognition with depth cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1290–1297, 2012.