臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detail View

Author: 王民良 (Wang, Min-Liang)
Title: 反射折射式攝影機於地方辨識和三維場景重建之研究
Title (English): A Catadioptric Robot Vision System for Visual Place Recognition and 3D Scene Recovery
Advisor: 林惠勇 (Lin, Huei-Yung)
Committee members: 宋開泰 (Kai-Tai Song), 蔡清池 (Ching-Chih Tsai), 黃國勝 (Kao-Shing Hwang), 顏炳郎 (Ping-Lang Yen), 林惠勇 (Huei-Yung Lin)
Defense date: May 9, 2011
Degree: Doctoral
Institution: National Chung Cheng University
Department: Graduate Institute of Electrical Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Publication year: 2011
Graduation academic year: 99 (2010-2011)
Language: English
Pages: 102
Keywords (Chinese): 反射折射式攝影機 (catadioptric camera), 視覺地方辨識 (visual place recognition), 虛擬相機 (virtual camera), 混合相機系統 (hybrid camera system), 場景變換偵測 (scene change detection), 環境影像編碼描述 (environmental image encoding and description)
Keywords (English): Catadioptric camera system, Hull census transform, semantic description, visual place recognition, Hybrid omnidirectional and perspective camera
Usage statistics:
  • Cited by: 1
  • Views: 462
  • Rating:
  • Downloads: 27
  • Bookmarked: 0
This dissertation presents an in-depth study of omnidirectional vision systems and their applications on mobile robot platforms. It covers the integration of omnidirectional and conventional pinhole cameras and their imaging models, followed by applications of the omnidirectional camera to object recognition and self-localization. We further develop a scene description method based on multiple convex hulls; attached to the omnidirectional vision system, this novel method yields a robot system capable of visual place recognition.

Before deploying the catadioptric camera system, we first discuss the geometric structure and image formation of omnidirectional vision. Since catadioptric image formation differs substantially from that of a conventional camera, we focus on how to combine the two imaging models. We construct a virtual camera attached to the omnidirectional camera and use it to fuse the two imaging geometries. From the image pairs formed by the virtual and conventional pinhole images, we build a stereo vision system and apply it on a robot experimental platform to recover 3D information of the environment.
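As a rough illustration of this virtual-camera idea (a minimal sketch, not the dissertation's exact procedure), the Python fragment below lifts a normalized omnidirectional image point onto the unit sphere of the unified catadioptric model and then projects it through a pinhole onto a virtual perspective plane; the mirror parameter xi and the virtual intrinsics K are assumed values chosen for illustration.

    import numpy as np

    def lift_to_sphere(m, xi):
        # Back-project a normalized omnidirectional image point (x, y) onto
        # the unit sphere of the unified catadioptric model (Geyer-Daniilidis).
        x, y = m
        r2 = x * x + y * y
        eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (1.0 + r2)
        return np.array([eta * x, eta * y, eta - xi])

    def project_to_virtual_plane(Xs, K):
        # Pinhole projection of the sphere point onto the virtual image plane.
        p = K @ Xs
        return p[:2] / p[2]

    xi = 0.96                           # assumed mirror parameter
    K = np.array([[400.0, 0.0, 320.0],  # assumed virtual camera intrinsics
                  [0.0, 400.0, 240.0],
                  [0.0, 0.0, 1.0]])
    print(project_to_virtual_plane(lift_to_sphere((0.10, -0.05), xi), K))

Re-projecting every omnidirectional pixel this way yields a synthetic perspective image that can be paired with the real pinhole image for standard stereo processing.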

Furthermore, to broaden the practical applications of omnidirectional vision on mobile robots, we turn to building intelligent environment perception upon the omnidirectional vision system. We develop a novel multi-convex-hull feature encoding scheme to collect environmental information, model the environment with a Support Vector Machine (SVM), and finally apply the algorithm on a mobile robot platform with a single omnidirectional vision system to achieve environment perception.
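For concreteness, here is a minimal sketch of the one-versus-one multi-class SVM modeling step; the random 64-dimensional vectors stand in for the actual multi-convex-hull descriptors, and scikit-learn's SVC (whose multi-class decision function is one-versus-one) stands in for whichever SVM implementation the dissertation used.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X_train = rng.random((300, 64))    # stand-ins for multi-convex-hull descriptors
    y_train = rng.integers(0, 5, 300)  # labels for five hypothetical places

    # SVC trains one binary classifier per pair of classes (one-versus-one).
    clf = SVC(kernel="rbf", decision_function_shape="ovo")
    clf.fit(X_train, y_train)
    print(clf.predict(rng.random((3, 64))))  # predicted place labels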

The experimental results show that the hybrid vision system successfully builds stereo vision in indoor environments. For higher-level perception such as environment recognition, the core algorithm we developed readily exploits the advantages of omnidirectional vision on a mobile robot platform, and we provide experimental data demonstrating that the proposed method is effective on a real mobile robot platform. In this part of the experiments, we evaluate our multi-convex-hull image coding method on the COLD environment image dataset; the results confirm that this novel coding scheme is well suited to detecting scene changes and recognizing places. Under the same settings, we also compare several popular algorithms, such as Bag-of-Words and the colored pattern appearance model, and the results show that our method is competitive in recognition rate, model training speed, and data size.
This dissertation addresses problems related to the catadioptric camera system. We present an approach for hybrid heterogeneous cameras to estimate depth information, and describe algorithms for a mobile robot to detect scene change events and recognize places. In Part I, we introduce a hybrid imaging geometry by constructing a virtual image plane that re-projects the omnidirectional image onto an imaginary plane associated with the catadioptric camera model. Based on the hybrid imaging geometry, conventional stereo approaches are adopted for feature matching and depth estimation. The proposed virtual camera makes computation of the hybrid epipolar geometry and stereo matching efficient, and the imaging approach is evaluated on both synthetic data and real scene images to demonstrate its feasibility.
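For reference, the hybrid fundamental matrix satisfies the standard epipolar relation between a homogeneous point x_v in the virtual image and its correspondence x_p in the perspective image (standard two-view geometry; the subscripts here are ours):

    \mathbf{x}_p^{\top} \mathbf{F}_{pv} \, \mathbf{x}_v = 0

Once F_pv is estimated from hybrid correspondences, each virtual-image point constrains its match to the epipolar line \ell_p = F_pv x_v in the perspective image, which is what makes conventional stereo matching applicable.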

In Part II, we turn to visual perception from a camera with a large field of view. We describe a new Hull Census Transform (HCT) descriptor for robot vision to detect scene changes, and extend it into a semantic descriptor for recognizing visual places. The HCT is a novel encoding method based on relational computation over convex hull points: it relies on the relative ordering of feature strengths rather than on the feature vectors themselves. HCT codes lend themselves to scene change detection by statistical analysis, and the experiments show the coding is robust under varying environments. For visual place recognition, we build on the HCT to present an Extended-HCT semantic descriptor that integrates image features and color information via the HCT and image histogram indexing. To this end, a one-versus-one (OVO) multi-class Support Vector Machine (SVM) is used to model the places. The experimental results show that the proposed method, using fewer vectors, is as robust as the most popular codebooks under varying environments; its performance is evaluated and compared against several state-of-the-art descriptors.
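The abstract does not spell out the HCT encoding, but a minimal sketch of the idea it describes, a census-style binary code built from the relative ordering of feature strengths over convex hull points, might look like the following; the successor-comparison rule along the hull boundary is our assumption for illustration.

    import numpy as np
    from scipy.spatial import ConvexHull

    def hull_census_code(points, strengths):
        # Convex hull of the 2D feature locations; for 2D inputs the vertices
        # are returned in counter-clockwise order along the boundary.
        hull = ConvexHull(points)
        s = strengths[hull.vertices]
        # Census-style bit: compare each hull point's strength with that of
        # its successor along the boundary (relative order, not raw values).
        return (s > np.roll(s, -1)).astype(np.uint8)

    rng = np.random.default_rng(0)
    pts = rng.random((50, 2))      # stand-in feature locations
    strength = rng.random(50)      # stand-in feature strengths
    print(hull_census_code(pts, strength))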

In summary, this dissertation focuses on the catadioptric camera and its applications. A virtual image plane is used to combine the heterogeneous cameras, and general stereo vision based on the hybrid camera system is then realized through the virtual camera plane. A semantic descriptor suited to the non-classic camera is proposed and applied to the scene change detection and visual place recognition problems. The results demonstrate strong performance on both challenging tasks.
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Mixing Pinhole and Catadioptric Camera Imaging . . . . . . . . . . . . . . 4
1.3 Recognition Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.1 Scene Change Detection . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.2 Semantic Description . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Dissertation Organization . . . . . . . . . . . . . . . . . . . . . . . . . 9
2 Background and Previous Works . . . . . . . . . . . . . . . . . . . . . . . 10
2.1 Overview of Omnidirectional Cameras . . . . . . . . . . . . . . . . . . . . 10
2.2 Hybrid Camera System . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3 Feature Descriptors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4 Visual Place Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.5 Multi-class Classification Strategy . . . . . . . . . . . . . . . . . . . . . . 23
I Catadioptric Camera Imaging 25
3 Hybrid Catadioptric and Perspective Imaging . . . . . . . . . . . . . . . . 26
3.1 Image Projection Models . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.1.1 Perspective Camera Model . . . . . . . . . . . . . . . . . . . . . . 27
3.1.2 Omnidirectional Camera Model . . . . . . . . . . . . . . . . . . . 28
3.2 HOPIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.2.1 Virtual Image Generation . . . . . . . . . . . . . . . . . . . . . . . 31
3.2.2 Computation of the Hybrid Fundamental Matrix . . . . . . . . . . 33
3.2.3 Hybrid Fundamental Matrix Property . . . . . . . . . . . . . . . . 35
3.2.4 Triangulation from the Hybrid Image Pair . . . . . . . . . . . . . . 36
3.3 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.3.1 HOPIS Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.3.2 Feature Matching for the Hybrid Image Pair . . . . . . . . . . . . . 37
3.3.3 Evaluation on Feature Matching . . . . . . . . . . . . . . . . . . . 40
3.3.4 Synthetic Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.3.5 Real Scene Results . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
II Recognition Tasks 60
4 A Hull Census Transform for Scene Change Detection . . . . . . . . . . . 61
4.1 HCT-based Topological Localization System . . . . . . . . . . . . . . . . 61
4.1.1 Hull Census Transform . . . . . . . . . . . . . . . . . . . . . . . . 61
4.1.2 The Framework of Scene Change Detection . . . . . . . . . . . . . 64
4.1.3 Matching Strategy . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.2 Experimental Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.2.1 The COLD Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.2.2 Scene Change Detection Experiment Results . . . . . . . . . . . . 67
4.2.3 Scene Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.3 Chapter Summary and Discussion . . . . . . . . . . . . . . . . . . . . . . 73
5 A HCT-Based Semantic Description for Visual Place Recognition . . . . . 76
5.1 The Extended-HCT Codebook Generation . . . . . . . . . . . . . . . . . . 78
5.1.1 Feature Coding Vectors . . . . . . . . . . . . . . . . . . . . . . . . 78
5.1.2 Color Indexing Vectors . . . . . . . . . . . . . . . . . . . . . . . . 81
5.1.3 Codewords Re-weighting and SVM Classifier . . . . . . . . . . . . 81
5.2 Experiments and Performance Evaluation . . . . . . . . . . . . . . . . . . 83
5.2.1 System Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.2.2 Results Based on Single Weather Videos . . . . . . . . . . . . . . 85
5.2.3 Results based on Mixed Weather Videos . . . . . . . . . . . . . . . 87
5.2.4 Performance Evaluation on Adjusting Camera Height . . . . . . . . 89
5.3 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6 Conclusions and Future Work . . . . . . . . . . . . . . . . . . . . . . . . . 101
6.1 Summary of Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . 101
6.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
