National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 張人尹
Author (English): Jen-Yin Chang
Title (Chinese): 基於電腦視覺方法利用無線網路訊號實現動作辨識
Title (English): WiFi Action Recognition via Vision-based Methods
Advisor: 徐宏民
Committee: 陳祝嵩, 陳文進, 李宏毅, 葉梅真
Oral Defense Date: 2016-07-07
Degree: Master's
Institution: National Taiwan University
Department: Graduate Institute of Networking and Multimedia
Discipline: Computing
Field: Networking
Document Type: Academic thesis
Year of Publication: 2016
Graduation Academic Year: 104 (2015-2016)
Language: English
Pages: 39
Keywords (Chinese): 動作辨識, 無線網路訊號, 電腦視覺
Keywords (English): Action Recognition, Wi-Fi, Computer Vision
Cited by: 1 · Views: 210 · Downloads: 0
Owing to the ubiquity, low cost, and privacy-preserving nature of WiFi signals, action recognition via WiFi has received considerable attention in recent years. Channel State Information (CSI) is fine-grained information estimated from the received WiFi signal; it describes in detail how the environment affects the signal. We observe that when CSI is plotted as an image, different actions produce different texture variations. We therefore transform the CSI estimated for each action into spectrogram-like images, extract features from them, and train support vector machines for classification to achieve action recognition. Experimental results show that treating CSI as images achieves above 85% accuracy in recognizing 7 different actions.

However, the experiments also show that the estimated CSI carries location information: if the WiFi signals are recorded at a different location, a previously trained classifier fails to recognize the action. We therefore propose a method based on Singular Value Decomposition (SVD) that removes the location information from the CSI, so that what remains is the influence of the human action. Experimental results show that, for cross-room (cross-location) action recognition, our method achieves 90% accuracy in classifying 6 different actions.

Our contributions include:
To the best of our knowledge, we are the first to treat CSI as images and to propose using computer-vision methods for action recognition.
We treat each antenna pair as a separate signal channel, investigate different feature-fusion methods, and achieve good action-recognition accuracy at a fixed location.
Our proposed location-information removal method enables cross-room action recognition.

Action recognition via WiFi has attracted intense attention recently because of its ubiquity, low cost, and privacy preservation. Observing that Channel State Information (CSI, fine-grained information computed from the received WiFi signal) resembles texture when plotted, we transform the received CSI into images, extract features with vision-based methods, and train SVM classifiers for action recognition. Our experiments show that regarding CSI as images achieves an accuracy above 85% in classifying 7 actions.
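The pipeline the abstract describes (CSI matrix to spectrogram-like image, texture features, SVM classifier) can be sketched minimally. This is an illustrative sketch rather than the thesis code: the array sizes, the single Gabor kernel, and the toy random CSI are all assumptions for demonstration.

```python
import numpy as np

def csi_to_image(csi):
    """Map a complex CSI matrix (time x subcarriers) to an 8-bit
    grayscale image of amplitudes, giving the spectrogram-like view."""
    amp = np.abs(csi)
    amp = (amp - amp.min()) / (amp.max() - amp.min() + 1e-9)
    return (amp * 255).astype(np.uint8)

def gabor_kernel(size=9, theta=0.0, lam=4.0, sigma=2.0):
    """One real-valued Gabor kernel; a bank over several orientations
    and wavelengths would be convolved with the CSI image to obtain
    texture features."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

# Toy CSI: 64 time samples x 30 subcarriers (hypothetical sizes).
rng = np.random.default_rng(0)
csi = rng.normal(size=(64, 30)) + 1j * rng.normal(size=(64, 30))
img = csi_to_image(csi)
print(img.shape, img.dtype)  # (64, 30) uint8
```

Feature vectors derived from such filter responses (or from bag-of-words SIFT codes) would then be fed to an SVM classifier.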

However, the experimental results also show that CSI is usually location dependent, which degrades recognition performance when signals are recorded in different places. In this work, we propose a location-dependency removal method based on Singular Value Decomposition (SVD) to eliminate the background CSI and effectively extract the channel information of signals reflected by human bodies. Experimental results show that our method, which considers the correlation of CSI streams, achieves a promising accuracy above 90% in identifying six actions even when tested in five different rooms.
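One way to read the SVD-based removal: treat the dominant singular component of the CSI amplitude matrix as the static, location-dependent background, and keep the residual as the body-reflected variation. The sketch below works only under that assumption; the thesis's exact formulation, including how the correlation of CSI streams is used, is not reproduced here.

```python
import numpy as np

def remove_location_component(csi_amp, k=1):
    """Hypothetical sketch: subtract the k dominant rank-one SVD
    components, assumed to carry the static (location-dependent)
    background, and return the residual attributed to human motion."""
    U, s, Vt = np.linalg.svd(csi_amp, full_matrices=False)
    background = (U[:, :k] * s[:k]) @ Vt[:k, :]
    return csi_amp - background

# Toy data: a strong room-dependent background plus small action-induced
# variation (both made up for illustration).
rng = np.random.default_rng(1)
static = np.outer(np.ones(64), rng.normal(size=30) + 5.0)
motion = 0.1 * rng.normal(size=(64, 30))
cleaned = remove_location_component(static + motion)
print(np.abs(cleaned).mean())
```

After removal, the residual is dominated by the motion term, so features extracted from it are less tied to the room in which the signal was recorded.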

Our contributions include:
To the best of our knowledge, we are the first to investigate the feasibility of processing CSI with vision-based methods in an extendable learning-based framework.
We regard the CSI of each Tx-Rx pair as a channel and investigate early and late fusion of multiple channels; we also achieve promising accuracy for action recognition at a specific location.
We enable cross-room action recognition with the proposed location-information removal method.
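Early versus late fusion of Tx-Rx channels, as named in the second contribution, can be illustrated generically. The per-channel feature vectors and score values below are made up for the example; they stand in for Gabor or BoW-SIFT features and SVM decision values.

```python
import numpy as np

def early_fusion(features):
    """Early fusion: concatenate per-channel feature vectors into one
    long vector and train a single classifier on it."""
    return np.concatenate(features)

def late_fusion(channel_scores, weights=None):
    """Late fusion: combine per-channel classifier scores, here by a
    (weighted) average over channels."""
    s = np.asarray(channel_scores, dtype=float)
    if weights is None:
        weights = np.full(len(s), 1.0 / len(s))
    return np.asarray(weights) @ s

# Hypothetical features from three Tx-Rx antenna pairs.
f1, f2, f3 = np.ones(4), 2 * np.ones(4), 3 * np.ones(4)
fused = early_fusion([f1, f2, f3])  # one length-12 vector for one classifier

# Hypothetical per-class scores from three per-channel classifiers.
score = late_fusion([[0.2, 0.8], [0.4, 0.6], [0.3, 0.7]])
print(fused.shape, score)  # (12,) [0.3 0.7]
```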

Acknowledgments
Abstract
List of Figures
List of Tables

Chapter 1 Introduction

Chapter 2 Related Work
2.1 Raw Signal
2.2 Channel State Information
2.3 Others

Chapter 3 Channel State Information
3.1 Background
3.2 Formulation

Chapter 4 Vision-based Framework
4.1 Collecting CSI
4.2 Pre-processing
4.3 Feature Extraction
4.3.1 Gabor Filter
4.3.2 Bag of Words - SIFT
4.4 Training SVM Classifier
4.4.1 Early Fusion
4.4.2 Late Fusion

Chapter 5 Location-Independent Method
5.1 Observation
5.2 Singular Value Decomposition
5.3 Location Information Removal Method

Chapter 6 Experiments
6.1 Without Location-Independent Method
6.1.1 Dataset and Settings
6.1.2 Results
6.2 With Location-Independent Method
6.2.1 Dataset and Settings
6.2.2 Results

Chapter 7 Discussion
7.1 Other Factors for Action Recognition
7.2 Person Identification

Chapter 8 Conclusion

Bibliography


