National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Detailed Record
Student: 王于碩
Student (romanized): WANG, YU SHUO
Title: 利用兩階段方式改善背景機率之顯著物體偵測
Title (English): Salient Object Detection Based on a Two-Stage Approach to Improve the Background Probability
Advisor: 郭天穎
Oral examination committee: 郭天穎, 高立人, 楊士萱, 蘇柏齊
Oral defense date: 2017-07-24
Degree: Master's
Institution: 國立臺北科技大學 (National Taipei University of Technology)
Department: Electrical Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Year of publication: 2017
Academic year of graduation: 105 (ROC calendar)
Language: Chinese
Number of pages: 84
Keywords (Chinese): 人類視覺系統, 邊界連通性, 背景機率, 影像切割, 主體物件偵測
Keywords (English): Human visual system, Boundary connectivity, Background probability, Image segmentation, Salient object detection
Usage statistics:
  • Cited by: 0
  • Views: 226
  • Rating: none
  • Downloads: 0
  • Saved to personal bibliography: 0
Salient object detection aims to locate the most conspicuous object in an image and serves as a building block for many higher-level applications. In recent years, background-prior methods have achieved strong performance in salient object detection, but under this assumption, detection degrades when most of the salient region touches the image boundary.
This thesis proposes a method based on boundary connectivity. A single image is first partitioned into superpixels, and a contrast measure is computed from the CIELab color space and log-Gabor filter responses. A two-stage scheme then corrects the background probability of each superpixel computed from boundary connectivity; the contrast measure and the background probability are combined and refined through a cost-function optimization to produce a more uniform saliency map. Experiments on standard salient-object datasets show that, compared with five common salient object detection methods, the proposed algorithm significantly improves both accuracy and consistency.
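The cost-function optimization mentioned above is not spelled out in the abstract. As a hedged illustration only, here is a minimal numpy sketch of the quadratic objective commonly used in boundary-connectivity work (Zhu et al., "Saliency Optimization from Robust Background Detection", CVPR 2014): the weights `w_bg` (background probability), `w_fg` (foreground/contrast cue), and the superpixel affinity matrix `W` are illustrative stand-ins, not the thesis's actual values.

```python
import numpy as np

def optimize_saliency(w_bg, w_fg, W):
    """Least-squares saliency optimization: minimize
        sum_i w_bg[i]*s[i]^2            (background term pulls s to 0)
      + sum_i w_fg[i]*(s[i]-1)^2        (foreground term pulls s to 1)
      + sum_ij W[i,j]*(s[i]-s[j])^2     (smoothness between superpixels)
    W is a symmetric affinity matrix between superpixels."""
    L = np.diag(W.sum(axis=1)) - W      # graph Laplacian of the affinities
    A = np.diag(w_bg) + np.diag(w_fg) + L
    return np.linalg.solve(A, w_fg)     # stationary point of the objective

# three toy superpixels: [likely background, ambiguous, likely foreground]
w_bg = np.array([5.0, 0.5, 0.01])
w_fg = np.array([0.01, 0.5, 5.0])
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
s = optimize_saliency(w_bg, w_fg, W)
```

Solving the normal equations pushes superpixels with high background probability toward 0 and high-contrast superpixels toward 1, while the Laplacian smoothness term keeps neighboring superpixels consistent.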
Salient object detection aims to find the most attractive objects in an image and can serve many high-level applications. In recent years, background-prior methods have shown good performance in salient object detection, but under this assumption the results degrade when the salient object is largely connected to the image boundary.
This thesis presents a method based on boundary connectivity. First, the image is divided into multiple superpixels, and the CIELab color space together with log-Gabor filter responses is used to compute local contrast. A two-stage strategy then revises the background probability of each superpixel computed from boundary connectivity. Finally, the local contrast is combined with the background probability and optimized through a cost function to obtain a more uniform and accurate saliency map. Experiments on several popular salient-object datasets show that, compared with other common salient object detection methods, the proposed algorithm significantly improves both accuracy and consistency.
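To make the boundary-connectivity cue concrete, here is a self-contained toy sketch (not the thesis's implementation): regular grid cells stand in for SLIC superpixels, mean intensity stands in for the CIELab/log-Gabor features, geodesic distances are taken over the superpixel adjacency graph, and all σ values are illustrative.

```python
import numpy as np

def boundary_connectivity_bg_prob(image, grid=6, sigma_clr=10.0, sigma_b=1.0):
    """Toy boundary-connectivity background probability (after Zhu et al.,
    CVPR 2014). Grid cells stand in for SLIC superpixels; mean intensity
    stands in for the mean CIELab color of a superpixel."""
    h, w = image.shape
    gh, gw = h // grid, w // grid
    n = grid * grid
    # mean "color" feature per cell
    feat = np.array([image[r*gh:(r+1)*gh, c*gw:(c+1)*gw].mean()
                     for r in range(grid) for c in range(grid)])
    # 4-connected adjacency graph; edge cost = color difference
    INF = 1e9
    d = np.full((n, n), INF)
    np.fill_diagonal(d, 0.0)
    for r in range(grid):
        for c in range(grid):
            i = r * grid + c
            for dr, dc in ((0, 1), (1, 0)):
                rr, cc = r + dr, c + dc
                if rr < grid and cc < grid:
                    j = rr * grid + cc
                    d[i, j] = d[j, i] = abs(feat[i] - feat[j])
    # geodesic distances via Floyd-Warshall
    for k in range(n):
        d = np.minimum(d, d[:, k:k+1] + d[k:k+1, :])
    sim = np.exp(-d**2 / (2 * sigma_clr**2))        # soft connectivity
    area = sim.sum(axis=1)                          # spanning area
    is_bnd = np.array([r == 0 or c == 0 or r == grid-1 or c == grid-1
                       for r in range(grid) for c in range(grid)])
    len_bnd = sim[:, is_bnd].sum(axis=1)            # length on image boundary
    bnd_con = len_bnd / np.sqrt(area)               # boundary connectivity
    # background probability: high when a region hugs the boundary
    return (1.0 - np.exp(-bnd_con**2 / (2 * sigma_b**2))).reshape(grid, grid)

# synthetic test image: bright object in the center on a dark background
img = np.zeros((60, 60))
img[20:40, 20:40] = 255.0
w_bg = boundary_connectivity_bg_prob(img)
```

On this synthetic image, cells inside the bright center object get near-zero background probability, while corner cells (strongly connected to the boundary) get probability close to one, matching the intuition the abstract describes.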
Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iii
Contents iv
List of Tables vi
List of Figures vii
Chapter 1 Introduction 1
1.1 Motivation and Objectives 1
1.2 Methodology 3
1.3 Contributions 4
1.4 Thesis Organization 5
Chapter 2 Background and Literature Review 7
2.1 Methods Trained on Ground Truth with Machine Learning 7
2.1.1 Conditional Random Field 7
2.1.2 Support Vector Machine 8
2.1.3 Random Forest 8
2.2 Methods Using Only the Original Image 12
2.2.1 Center Prior 13
2.2.2 Background Prior 14
2.2.3 Distribution 18
2.2.4 Focusness 20
2.2.5 Frequency Domain 20
2.2.6 Others 20
2.3 Summary of Existing Methods 21
Chapter 3 Proposed Salient Object Detection Method 24
3.1 Architecture of the Proposed Method 24
3.2 Image Segmentation and Feature Extraction 26
3.2.1 Simple Linear Iterative Clustering (SLIC) 26
3.2.2 Feature Extraction 27
3.2.3 Superpixel Abstraction 29
3.3 First-Stage Foreground Map 30
3.3.1 Local Contrast 30
3.3.2 Background Probability and Optimization of the First-Stage Saliency Map 32
3.4 Second-Stage Foreground Map 34
3.5 Summary of the Proposed Method 37
Chapter 4 Experimental Results 38
4.1 Image Datasets 38
4.2 Evaluation Metrics 42
4.2.1 Preliminaries 42
4.2.2 Evaluation Metrics 44
4.3 Evaluation of the Proposed Method 50
4.3.1 Performance of Individual Components 50
4.4 Overall Performance and Computational Complexity 51
4.4.1 Overall Performance Comparison 51
4.4.2 Computational Complexity Analysis 56
4.5 Results Across Datasets and Methods 56
4.5.1 Analysis of Each Metric on Different Saliency Models 56
4.5.2 Results of Different Saliency Models on Different Datasets 61
4.6 Summary of Experimental Results 76
4.6.1 Failure Case Analysis 76
Chapter 5 Conclusions 78
References 79