National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 蔡佳玲
Author (English): TSAI, CHIA-LING
Title: 融合電腦視覺與慣性感測資訊 實現在空間中識別身分
Title (English): Recognizing Individuals in Spaces by Fusing Computer Vision and Inertial Sensing Information
Advisor: 陳建志
Advisor (English): CHEN, JEN-JEE
Committee members: 林志隆、蔡孟勳
Committee members (English): LIN, CHIH-LUNG; TSAI, MENG-HSUN
Oral defense date: 2019-07-26
Degree: Master's
Institution: National University of Tainan (國立臺南大學)
Department: Graduate Program, Department of Electrical Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Publication year: 2019
Graduation academic year: 107 (ROC calendar, 2018-2019)
Language: Chinese
Pages: 33
Keywords (Chinese): 身分辨識、電腦視覺化、資料融合、慣性感測器、穿戴式裝置
Keywords (English): Person identification; computer vision; data fusion; inertial sensors; wearable devices
Statistics:
  • Cited by: 0
  • Views: 109
  • Rating: (none)
  • Downloads: 1
  • Bookmarked: 0
Person identification has a wide range of application scenarios, such as interactive robots and customized services, and many devices and products on the market can help perform it, for example radio frequency identification (RFID), face recognition, and iris recognition. Most of these methods, however, rely on a single device and therefore run into many limitations in real environments: iris and fingerprint recognition require short-range or contact operation, and face recognition requires a large labeled dataset to train a classification model. As a result, these methods are not suited to dynamically changing environments.
In this thesis, we fuse data from three kinds of sensors: a camera, inertial sensors, and electronic IDs. The camera does not perform face recognition; instead, combined with AI algorithms, it captures each subject's position in the image and motion trajectory, while a wearable device integrating an inertial sensor and an electronic ID captures the subject's motion state free of any spatial constraints. Such a system helps with three problems: first, the privacy issues of video data and face recognition; second, the low resolution of public cameras caused by bandwidth limits and the occlusion of subjects in the video; and finally, the positioning error of inertial sensors, which multi-sensor fusion can correct. In our system we propose two feature fusion algorithms for fusing the sensed data; besides the subject's motion trajectory, the algorithms also incorporate temporal features of the subject's movement, and they require no tedious data labeling or model training. Experimental results show that our system achieves a recognition rate above 95%. We implement a prototype system to verify the feasibility of our approach.
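The abstract above says the two feature fusion algorithms use the subject's motion trajectory plus temporal features of the movement, with no data labeling or model training, but does not give their details. Purely as an illustration of comparing temporal motion features across a camera and an inertial sensor, here is a minimal Python sketch that aligns a camera-derived speed profile with an inertial acceleration profile via dynamic time warping; every name and the DTW choice itself are assumptions made for this sketch, not the thesis's actual algorithm.

import numpy as np

def speed_profile(positions, fps):
    # Per-frame speed from a camera trajectory given as (x, y) points.
    return np.linalg.norm(np.diff(positions, axis=0), axis=1) * fps

def dtw_distance(a, b):
    # Classic dynamic time warping between two 1-D sequences, so small
    # timing offsets between the camera and the IMU do not dominate.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def motion_similarity(cam_positions, fps, imu_accel_mag):
    # Z-normalize both profiles so the score reflects temporal shape
    # rather than absolute scale, then map DTW distance into (0, 1].
    v = speed_profile(np.asarray(cam_positions, dtype=float), fps)
    a = np.asarray(imu_accel_mag, dtype=float)
    v = (v - v.mean()) / (v.std() + 1e-9)
    a = (a - a.mean()) / (a.std() + 1e-9)
    return 1.0 / (1.0 + dtw_distance(v, a))

Because such a score is computed directly from unlabeled per-subject sensor streams, it needs no training data, which is consistent with the label-free property claimed above.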
Person identification has always been one of the most popular technology applications. Many devices and products have been sold for person identification, such as radio frequency identification (RFID), face recognition, and iris recognition. However, most identification approaches are based on a single technology and face limitations when applied in real environments. For example, they are strongly restricted by specific scenarios and the spatial conditions of places. In addition, the recognition rate of radio-based methods decreases as the number of targets increases, owing to the instability of the wireless channel and noise. Therefore, the existing identification methods above are not general solutions.
In this paper, we propose a data fusion method that combines three kinds of sensors: a camera, inertial sensors, and electronic IDs. The camera captures video of the whole space; with the video and AI algorithms, the recorded objects' positions and trajectories can be calculated and identified. Each user is equipped with a wearable device, which captures the user's motion without any spatial constraints. The video is not used for face or iris recognition, so video quality is not a concern here and the privacy violation problem is avoided. We propose a feature fusion algorithm that considers not only the motion trajectory of each subject but also the temporal characteristics of his or her movement. With the proposed method, users and wearable devices are paired, so each user can be identified via his or her wearable device, which carries a unique ID. According to the experiments, our system reaches a recognition rate of over 95%. A prototype implementation is completed and demonstrated to verify the feasibility of the proposed approach.
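Given pairwise similarity scores between each visual trajectory and each wearable device's motion stream, the user-to-device pairing described above can be framed as a one-to-one assignment problem. A minimal sketch follows, assuming a precomputed similarity matrix and SciPy's Hungarian solver (scipy.optimize.linear_sum_assignment); this is an illustrative formulation, not the thesis's published algorithm.

import numpy as np
from scipy.optimize import linear_sum_assignment

def pair_trajectories_to_devices(sim):
    # sim[i][j] = similarity between camera trajectory i and device j.
    # Running the Hungarian algorithm on the negated matrix yields the
    # one-to-one pairing that maximizes total similarity.
    cost = -np.asarray(sim, dtype=float)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

# Example with three tracked people and three device IDs.
sim = [[0.92, 0.10, 0.15],
       [0.08, 0.88, 0.30],
       [0.20, 0.25, 0.96]]
for traj, dev in pair_trajectories_to_devices(sim):
    print(f"trajectory {traj} -> device {dev}")

A global assignment avoids the mistakes a greedy match can make when two subjects move similarly during part of the observation window.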
Abstract (Chinese) i
Abstract (English) ii
Acknowledgments iv
Table of Contents v
List of Tables vi
List of Figures vii
Chapter 1  Introduction 1
Chapter 2  References 4
Chapter 3  System Architecture 6
Chapter 4  Person Identification Algorithm 8
Section 1  Inertial-Sensing Motion Detection 8
Section 2  Visual Trajectory Detection 11
Section 3  Similarity Score 14
Section 4  Pairing Visual Trajectories with Step Data 21
Chapter 5  Experiments and Evaluation 24
Chapter 6  Conclusion and Future Work 30
References 31

