
National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)


Detailed Record

Author: 許書維 (Shu-Wei Hsu)
Thesis Title: 穩定手持裝置顯示內容之方法 (An Approach to Display Stabilization for Handheld Devices)
Advisor: 侯廷偉 (Ting-Wei Hou)
Degree: Master's
Institution: National Cheng Kung University (國立成功大學)
Department: Master's and Doctoral Program, Department of Engineering Science
Academic Field: Engineering
Academic Discipline: General Engineering
Thesis Type: Academic thesis
Year of Publication: 2012
Graduation Academic Year: 100 (i.e., 2011-2012 by the ROC calendar)
Language: English
Number of Pages: 28
Chinese Keywords: 穩定 (stabilization), 手持裝置 (handheld device), 震動 (vibration)
Foreign Keywords: stabilization, handheld device, vibration
Statistics:
  • Cited by: 0
  • Views: 173
  • Rating: (none)
  • Downloads: 0
  • Bookmarked: 0
Chinese Abstract (translated): As handheld devices grow increasingly popular, people more and more often use them in shaky environments, for example reading a phone while walking, or reading an e-book while riding in a vehicle. Watching a shaking screen can cause dizziness and make reading difficult. This thesis proposes an original method to compensate for such shaking. It differs from a camera's anti-handshake feature: the camera stabilizes the image that is ultimately stored, whereas we try to make the content shown on the screen appear not to move to a user whose eyes are fixed on the handheld device's screen in a shaky situation.
This thesis uses the front camera and inertial sensors to detect the device's shaking and changes in the displacement between the user's face and the device, and automatically adjusts the position of the on-screen content to compensate for that displacement and reduce the discomfort of viewing a shaking screen.
English Abstract: Nowadays, people have more occasions to use handheld devices (such as smartphones) in vibrating environments. For example, it is common to see someone watching a smartphone while walking, and many people watch their smartphones or tablet computers while riding in a vehicle. Watching a vibrating screen may cause disturbance, inconvenience, or discomfort for the user. This research proposes an original approach to compensating for such vibrations. The problem differs from a camera's anti-handshake feature, which stabilizes the image being stored; instead, we aim to make the content shown on the screen in a vibrating environment appear stationary relative to the holder's eyes.
Using a smartphone's front camera and inertial sensors, we attempt to detect the device's displacement and shift the content currently shown on the screen in the opposite direction, so that the user perceives it as remaining at the same relative position.
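The compensation loop the abstract describes (an inertial displacement estimate fused with a camera-based face offset, then an opposite content shift bounded by a reserved screen margin) can be sketched as follows. This is an illustrative sketch only, not the thesis's implementation; the function name, the fusion weight, and all constants are hypothetical.

```python
# Illustrative sketch of display stabilization: double-integrate linear
# acceleration to a displacement estimate, fuse it with the camera-based
# face offset, and shift the content the opposite way, clamped to a margin.
# All names and constants are hypothetical, not taken from the thesis.

DT = 0.01                # sensor sampling period (s), assumed 100 Hz
PIXELS_PER_METER = 4000  # assumed screen density
ALPHA = 0.98             # complementary-filter weight for the inertial path
MAX_MARGIN_PX = 40       # compensation bounded by the reserved screen margin

def stabilize(accel_samples, face_offsets_px):
    """Yield the content offset (px) for each sensor sample."""
    velocity = 0.0
    inertial_disp = 0.0
    for a, face_px in zip(accel_samples, face_offsets_px):
        # Double-integrate linear acceleration into a displacement estimate.
        velocity += a * DT
        inertial_disp += velocity * DT
        inertial_px = inertial_disp * PIXELS_PER_METER
        # Fuse with the camera-based face offset to bound integration drift.
        fused = ALPHA * inertial_px + (1 - ALPHA) * face_px
        # Push content opposite to the device motion, clamped to the margin.
        yield max(-MAX_MARGIN_PX, min(MAX_MARGIN_PX, -fused))

# Example: a brief upward jolt, then stillness, while the face stays fixed.
accels = [0.5] * 10 + [0.0] * 10   # linear acceleration, m/s^2
faces = [0.0] * 20                 # face offset from the camera, px
offsets = list(stabilize(accels, faces))
```

Note how the inertial path alone drifts after the jolt ends (velocity never returns to zero); this is exactly why the design fuses in the face tracker, which provides an absolute, drift-free reference for the user-to-device displacement.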
Table of Contents
中文摘要........................................... I
Abstract........................................... II
誌謝...............................................III
Table of Contents ................................. IV
List of Tables..................................... VI
List of Figures....................................VII
Chapter 1 – Introduction ........................... 1
Chapter 2 – Background and Related Works............ 5
2.1 Sensor Fusion................................... 5
2.2 Image Compensation and Position Tracking ....... 5
2.3 Face Tracking................................... 6
2.4 Kalman Filter .................................. 6
Chapter 3 – System Design and Implementation ....... 9
3.1 Overview........................................ 9
3.2 Face Tracker.................................... 9
3.2.1 Preprocessing ................................11
3.2.2 Skin Detection................................12
3.2.3 Lips Detection................................13
3.2.4 Position Calculation..........................13
3.3 Acceleration Tracker ...........................14
3.3.1 Inertial Sensors .............................14
3.3.2 Linear Accelerations .........................16
3.4 Margin Compensator .............................16
3.5 Implementation .................................19
Chapter 4 – Benchmark ..............................20
4.1 Method .........................................20
4.2 Experimental Results ...........................20
Chapter 5 – Conclusion and Future Work..............24
Reference ..........................................25
Appendix A .........................................28
[1] Sebastian O.H. Madgwick, Andrew J.L. Harrison, and Ravi Vaidyanathan, Estimation of IMU and MARG orientation using a gradient descent algorithm, in Proceedings of IEEE International Conference on Rehabilitation Robotics, pp. 1-7, 2011.
[2] Georg Klein and Tom Drummond, Tightly integrated sensor fusion for robust visual tracking, in Image and Vision Computing, vol. 22, pp. 769-776, 2004.
[3] William Premerlani and Paul Bizard, Direction cosine matrix IMU: Theory. [Online]. http://gentlenav.googlecode.com/files/DCMDraft2.pdf
[4] Kuo-Yi Chen, Chin-Yang Lin, Tien-Yan Ma, and Ting-Wei Hou, A power-saving technique for the OSGi platform, IEICE Transactions on Information and Systems, vol. E95-D, no. 5, pp. 1417-1426, 2012.
[5] Arjen van Rhijn, Robert van Liere, and Jurriaan D. Mulder, An analysis of orientation prediction and filtering methods for VR/AR, in Proceedings of IEEE Virtual Reality, pp. 67-74, 2005.
[6] Gabriele Bleser and Didier Stricker, Advanced tracking through efficient image processing and visual-inertial sensor fusion, in Proceedings of Virtual Reality Conference, pp. 137-144, 2008.
[7] Gerhard Schall et al., Global pose estimation using multi-sensor fusion for outdoor augmented reality, in Proceedings of the 8th IEEE International Symposium on Mixed and Augmented Reality, pp. 153-162, 2009.
[8] Ronald Azuma and Gary Bishop, Improving static and dynamic registration in an optical see-through HMD, in Proceedings of the 21st annual conference on Computer graphics and interactive techniques, pp. 197-204, 1994.
[9] Ronald Tadao Azuma, Predictive tracking for augmented reality, Ph.D. Dissertation, Dept. of Computer Science, University of North Carolina at Chapel Hill, 1996.
[10] Ahmad Rahmati, Clayton Shepard, and Lin Zhong, NoShake: Content stabilization for shaking screens of mobile devices, in Proceedings of IEEE International Conference on Pervasive Computing and Communications, pp. 1-6, 2009.
[11] Richard J. Qian, Ibrahim M. Sezan, and Kristine E. Matthews, A robust real-time face tracking algorithm, in Proceedings of IEEE International Conference on Image Processing, vol. 1, pp. 131-135, 1998.
[12] Maricor Soriano, Birgitta Martinkauppi, Sami Huovinen, and Mika Laaksonen, Using the skin locus to cope with changing illumination conditions in color-based face tracking, Proceedings of IEEE Nordic Signal Processing Symposium, pp. 383-386, 2000.
[13] Tien-Yan Ma, Chin-Yang Lin, Shu-Wei Hsu, Che-Wei Hu, and Ting-Wei Hou, Automatic brightness control of the handheld device display with low illumination, Proceedings of the 2nd International Conference on Computer Science and Automation Engineering, pp. 382-385, 2012.
[14] Chin-Yang Lin, Cheng-Liang Lin, and Ting-Wei Hou, A graph-based approach for automatic service activation and deactivation on the OSGi platform, IEEE Transactions on Consumer Electronics, vol. 55, no. 3, pp. 1271-1279, Aug. 2009.
[15] Greg Welch and Gary Bishop, An introduction to the Kalman filter, Notes of ACM SIGGRAPH tutorial on the Kalman filter, 2001.
[16] Rudolph Emil Kalman, A new approach to linear filtering and prediction problems, Transactions of the ASME - Journal of Basic Engineering, vol. 82 (Series D), pp. 35-45, 1960.
[17] Connected-component labeling. [Online]. http://en.wikipedia.org/wiki/Connected-component_labeling
[18] Cheng-Chin Chiang and Chi-Jang Huang, A robust method for detecting arbitrarily tilted human faces in color images, Pattern Recognition Letters, vol. 26, no. 16, pp. 2518-2536, 2005.
[19] Shane Colton, The balance filter. [Online]. http://web.mit.edu/scolton/www/filter.pdf