Graduate Student: 陳孟歡 (TRAN MANH HOAN)
Graduate Student (English): Hoan-Manh Tran
Thesis Title: 使用影像處理技術於車輛速度追蹤
Thesis Title (English): Image Processing Techniques For Tracking Vehicle Speed
Advisor: 陳大德
Advisor (English): Dar-Der Chan
Committee Members: 錢膺仁, 蔡宜學, 陳珍源, 蔡樸生, 陳大德
Defense Date: 2013-01-21
Degree: Master's
Institution: 國立宜蘭大學 (National Ilan University)
Department: 電機工程學系碩士班 (Master's Program, Department of Electrical Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Document Type: Academic thesis
Publication Year: 2013
Graduation Academic Year: 101 (ROC calendar; 2012-2013)
Language: English
Number of Pages: 105
Chinese Keywords: Image, video, color space, optical flow, median filter, relational operator, morphological operation, erosion
Foreign-Language Keywords: Image, video, color space, optical flow, median filter, relational operator, morphological operation, erosion
The objective of this thesis is to utilize video image processing techniques for tracking vehicle speed. We use algorithms to analyze a single frame, and the same processing is then applied to every frame in a video. The video images are converted from color frames into grayscale ones. In short, the cars are segmented from the background by thresholding the motion-vector magnitudes, and blob analysis is then used to locate the cars.
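The thesis implements this pipeline with MATLAB Simulink blocks; purely as a rough illustration, the segmentation step — thresholding optical-flow magnitudes to separate moving cars from the static background — can be sketched in plain Python. The 4×4 flow field and the threshold value of 1.0 below are invented for the example, not taken from the thesis.

```python
import math

# Illustrative sketch (not the thesis's Simulink model): segment moving
# pixels by thresholding optical-flow magnitudes. The flow field below
# pretends a fast-moving car occupies the right half of a tiny 4x4 frame.
flow_u = [[0.0, 0.1, 2.5, 2.8],   # horizontal flow components
          [0.1, 0.0, 2.6, 3.0],
          [0.0, 0.1, 0.1, 0.0],
          [0.1, 0.0, 0.0, 0.1]]
flow_v = [[0.0, 0.0, 1.5, 1.4],   # vertical flow components
          [0.1, 0.1, 1.6, 1.7],
          [0.0, 0.0, 0.1, 0.1],
          [0.0, 0.1, 0.0, 0.0]]

def segment_by_motion(u, v, threshold):
    """Binary mask: 1 where the flow magnitude exceeds the threshold."""
    return [[1 if math.hypot(u[r][c], v[r][c]) > threshold else 0
             for c in range(len(u[0]))]
            for r in range(len(u))]

mask = segment_by_motion(flow_u, flow_v, threshold=1.0)
# mask marks only the fast-moving (right-hand) region as foreground
```

In the real system the flow field comes from a Horn-Schunck or Lucas-Kanade estimator running on consecutive grayscale frames; only the thresholding logic is shown here.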
An optical flow object is applied to estimate the direction and speed of object motion. Next, we use two objects to analyze the optical flow vectors: a median filter object removes speckles and noise, and morphological erosion and closing objects fill the holes in the blobs. Subsequently, we use blob analysis to characterize the cars and use bounding boxes to enclose them in the video. Finally, we use system objects to display the original video, the motion-vector video, the thresholded video, and the final result.
Keywords – Image, video, color space, optical flow, median filter, relational operator, morphological operation, erosion, closing, blob analysis, draw rectangles, assignment, probe, conversion.
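The post-processing described in the abstract — median filtering to remove speckles, then blob analysis to box the cars — is done in the thesis with Simulink blocks. The following plain-Python sketch of those two steps uses an invented 5×5 binary mask and is only a rough stand-in for the real blocks.

```python
# Illustrative sketch (not the thesis's Simulink blocks): clean a binary
# motion mask with a 3x3 median filter, then find the bounding box of the
# surviving blob. The 5x5 mask below is invented example data.
def median3x3(mask):
    """3x3 median filter with zero padding; on a binary image this is a
    majority vote per neighborhood, which removes isolated speckles."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            window = sorted(
                mask[r + dr][c + dc]
                if 0 <= r + dr < h and 0 <= c + dc < w else 0
                for dr in (-1, 0, 1) for dc in (-1, 0, 1))
            out[r][c] = window[4]          # middle of the 9 sorted values
    return out

def bounding_box(mask):
    """(top, left, bottom, right) of all foreground pixels, or None."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return min(rows), min(cols), max(rows), max(cols)

noisy = [[0, 0, 0, 0, 1],              # lone 1 in the corner is speckle
         [0, 1, 1, 1, 0],
         [0, 1, 1, 1, 0],
         [0, 1, 1, 1, 0],
         [0, 0, 0, 0, 0]]
cleaned = median3x3(noisy)             # speckle removed, blob survives
box = bounding_box(cleaned)            # box around the remaining blob
```

Because the binary median acts as a majority vote, it also nibbles at blob corners; this is one reason the thesis follows filtering with morphological erosion and closing to regularize the blobs before blob analysis draws the bounding boxes.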


TABLE OF CONTENTS
List of Figures..................................................................................................... viii
List of Tables...................................................................................................... xii
Glossary............................................................................................................... xiii
1. Introduction ................................................................................................ 1
2. Vehicle Tracking And Methods Of Camera Calibration Approach .............. 4
2.1. Literature Review..................................................................................... 5
2.1.1. Camera calibration ........................................................................ 5
2.1.2. Vehicle tracking ............................................................................ 5
2.1.3. Systems that estimate vehicle speeds ............................................ 6
2.2. Camera and Scene Model ........................................................................ 6
2.2.1. Fundamental Model and Assumptions .......................................... 6
2.2.2. Perspective Projection of The Scene Geometry ........................... 8
2.3. Methods of Camera Calibration ............................................................. 14
2.3.1. Method 1 (vanishing points) ......................................................... 17
2.3.2. Method 2 (Known Camera Position) ............................................ 18
2.3.3. Method 3 (known distance) .......................................................... 19
2.4. Vehicle Detection System ........................................................................ 24
2.5. Speed Detection ......................................................................................... 26
3. Algorithm Used for Vehicle Tracking ....................................................... 28
3.1. Algorithm for Color Space Conversion .................................................. 28
3.1.1. Conversion Between R'G'B' and HSV Color Spaces .................... 30
3.1.2. Conversion Between RGB and XYZ Color Spaces ...................... 31
3.1.3. Conversion Between RGB and L*a*b* Color Spaces .................. 33
3.2. Methods and Optical Flow Algorithm .................................................... 34
3.2.1. Optical Flow .................................................................................. 34
3.2.2. Algorithm Estimation of the Optical flow..................................... 35
3.2.3. Methods for determining optical flow........................................... 37
3.2.4. Estimate Object Velocities Optical Flow ...................................... 39
3.3. The Mean and Median Filters ................................................................. 42
3.3.1. Mean.............................................................................................. 42
3.3.2. Relational Operators ...................................................................... 44
3.3.3. Median Filtering ............................................................................ 46
3.4. Method Edge Detection .................................................................. 47
3.4.1. Edge Detection ................................................................................ 47
3.4.2. Edge properties ............................................................................... 49
3.4.3. Canny edge detection detector ...................................................... 50
3.4.4. Erosion Morphology ...................................................................... 51
3.4.5. Closing Brief Description ............................................................. 54
4. Experiment Results ........................................................................................ 56
4.1 Design Summary ........................................................................................... 56
4.1.1. Video Input, Output The Tracking Cars Using Optical Flow ....... 56
4.1.2. Convert R'G'B' to Intensity Images .............................................. 57
4.1.3. Optical Flow ..................................................... ................................ 58
4.1.4. Mean Value Sequence of Inputs ....................................................... 62
4.1.5. Relational Operator Logic and Bit Operations ............................ 64
4.1.6. Perform 2-D Median Filtering ........................................................... 66
4.1.7. Erosion Perform Morphological Closing on Binary ..................... 71
4.2. Create Systems to Analyze, Draw, and Display the Tracked Cars ................ 75
4.2.1. Blob Analysis .................................................................................... 75
4.2.2. Draw Shapes Rectangles-Lines on Vehicle Images ...................... 80
4.2.3. Assignment Values To Specified Elements of Signal ................... 83
4.2.4. Probe Output Signal Attributes Width Dimensionality Sample Time ................................................................................................................ 85
4.2.5. Select Input from Vector Matrix Multidimensional Signal .......... 88
4.2.6. Convert Input Signal To Specified Data Type .............................. 90
4.2.7. Insert Draw Text On Image Or Video Stream .............................. 91
5. CONCLUSION ................................................................................................. 99
6. SCOPE FOR FUTURE WORK ...................................................................... 100
REFERENCES .......................................................................................................... 101

LIST OF FIGURES
Figure 1. Model camera and roadway geometry ................................................... 7
Figure 2. Head-on view of camera and roadway geometry emphasizing the nonzero road tilt ................................................................................................... 8
Figure 3. Side view of camera and roadway geometry emphasizing the nonzero road slope ψ ................................................................................................. 8
Figure 4. Road geometry in the image showing the vanishing points for lines parallel and perpendicular to the road.................................................................... 13
Figure 5. Periodic lane markers with distance definitions................................. 15
Figure 6. Road geometry from a bird’s-eye view with relevant X- and Y-axis intercepts .................................................................................................................... 18
Figure 7. Distance L along the road ........................................................................ 20
Figure 8. Camera setting ........................................................................................... 24
Figure 9. The camera and vehicle coordinates in the video image ....................... 26
Figure 10. Receptive Field Maps of LPTCs Are Similar to Optic Flow............... 35
Figure 11. The optical flow vector of a moving object in a video sequence..... 38
Figure 12. Comparison of mean, median and mode of two log-normal distributions with different skewness...................................................................... 43
Figure 13. Center value (previously 1) is replaced by the mean of all nine values (5)............................................................................................................ 47
Figure 14. Center value (previously 1) is replaced by the median of all nine values............................................................................................................ 47
Figure 15. Canny edge detection applied to a photograph ................................. 48
Figure 16. Effect of closing using a 3×3 square structuring element ................ 55
Figure 17. Model connecting the function blocks in MATLAB Simulink............ 57
Figure 18. The video image displayed after R'G'B'-to-intensity conversion....... 58
Figure 19. The median filter block replaces the central value with the median... 66
Figure 20. The block's median value is biased toward the upper-left .................. 67
Figure 21. Typical shapes of the structuring elements (B)................................. 73
Figure 22. Model region thresholding and filtering optical flow....................... 74
Figure 23. Model result display......................................................................... 74
Figure 24. Blob analysis is indicated................................................................. 75
Figure 25. The block calculates the perimeter when the Connectivity parameter is set to 4............................................................................................................. 78
Figure 26. The block calculates the perimeter when the Connectivity parameter is set to 8............................................................................................................. 78
Figure 27. The shape parameter set to rectangles, with width and height in pixels..................................................................................................................... 80
Figure 28. Each row of the matrix corresponds to a different rectangle.............. 81
Figure 29. Model using a For Iterator block to create a vector............................. 84
Figure 30. Model image processing techniques for tracking vehicle speed....... 97
Figure 31. Model result display image processing for tracking vehicle speed. 98


LIST OF TABLES
Table 1. Assumptions and outputs of various camera calibration methods (* denotes optional parameters).............................................................................. 16
Table 2. The following table summarizes the possible values........................... 29
Table 3. Relational equal operators of arithmetic............................................. 45
Table 4. Estimates the direction and speed of object motion using either the Horn-Schunck or the Lucas-Kanade method..................................................... 59
Table 5. The operator block compares two inputs using the specified parameter............................................................................................................. 64
Table 6. Combination of a scalar and an array................................................... 65
Table 7. The data types of the signals input to the I and Val............................. 67
Table 8. Fixed-Point data types.......................................................................... 68
Table 9. Rules for dilation and Erosion.............................................................. 71
Table 10. Designations of sample time information ......................................... 86
Table 11. Text Parameter supported conversion specifications......................... 92
Table 12. Text string color values ..................................................................... 93
Table 13. Location parameter text string insertion............................................ 94
Table 14. Text string opacity values.................................................................. 95

