Author: 黃韋翔 (HUANG, WEI-HSIANG)
Title: 基於分類架構應用於衣著風格分析的分群方法 (Classifier based Clustering for Clothing Style Analysis)
Advisor: 江振國 (CHIANG, CHEN-KUO)
Committee Members: 江振國 (CHIANG, CHEN-KUO), 朱威達 (CHU, WEI-TA), 黃敬群 (HUANG, CHING-CHUN), 胡敏君 (HU, MIN-CHUN)
Oral Defense Date: 2020-07-29
Degree: Master's
Institution: 國立中正大學 (National Chung Cheng University)
Department: 資訊工程研究所 (Graduate Institute of Computer Science and Information Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Document Type: Academic thesis
Publication Year: 2020
Graduating Academic Year: 108 (2019–2020)
Language: English
Pages: 25
Keywords: Clothing style analysis; Neural network; Convolutional neural network; Clustering
Usage Statistics:
  • Cited: 0
  • Views: 59
  • Downloads: 11
  • Bookmarked: 0
Abstract:
When designing new products, designers often observe popular products and look for trending elements to find inspiration. If we can move beyond existing, human-defined clothing styles and analyze style entirely from the machine's perspective, we can offer designers directions that differ from past practice and spark a wider range of ideas. We therefore propose a classifier-based clustering method for clothing style analysis. Compared with conventional clustering methods, it produces better clustering results and higher intra-cluster style similarity. By executing the clustering and classification steps in a cycle, the method obtains features from the classifier that better capture style characteristics.
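To make the cycle concrete, here is a minimal sketch in Python of the alternating clustering-and-classification loop the abstract describes, assuming pre-extracted per-image feature vectors. It uses scikit-learn's KMeans and MLPClassifier as stand-ins; the function name, hidden-layer size, and cycle count are illustrative assumptions, not details taken from the thesis.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def classifier_based_clustering(features, n_clusters=10, n_cycles=5):
    # Hypothetical helper illustrating the clustering/classification cycle;
    # not the thesis's actual implementation.
    labels = None
    for _ in range(n_cycles):
        # Step 1: cluster the current features into candidate style groups.
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
        # Step 2: train a classifier to predict the cluster assignments.
        clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300)
        clf.fit(features, labels)
        # Step 3: reuse the classifier's hidden-layer (ReLU) activations as
        # the refined, more style-aware features for the next cycle.
        features = np.maximum(features @ clf.coefs_[0] + clf.intercepts_[0], 0.0)
    return labels, features

# Example: refine 64-d features for 200 garment images into 5 style clusters.
rng = np.random.default_rng(0)
X = rng.random((200, 64))
style_labels, refined = classifier_based_clustering(X, n_clusters=5, n_cycles=2)

Each cycle relabels the data with the latest clusters, so the classifier is always trained on the most recent grouping; stopping after a fixed number of cycles is one simple convergence choice.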
Table of Contents:
Abstract ........................................... i
Contents ........................................... ii
List of Tables ..................................... iii
List of Figures .................................... iv
1 Introduction ...................................... 1
2 Related Works ..................................... 3
2.1 Clothing Style Analysis ......................... 3
2.2 Deep Filter Bank ................................ 3
2.3 Unsupervised Learning of Features ............... 4
3 Proposed Method ................................... 6
3.1 Feature Extraction .............................. 6
3.1.1 Material and Texture .......................... 6
3.1.2 Color Histogram ............................... 7
3.2 Feature Preprocessing ........................... 7
3.3 Clustering and Classification Cycle ............. 8
4 Implementation .................................... 10
5 Experimental Results .............................. 12
5.1 Dataset ......................................... 12
5.2 Evaluation Metrics .............................. 12
5.3 Feature Comparison .............................. 13
5.4 Clustering and Classification Results ........... 13
5.5 Visualization Results ........................... 15
6 Conclusion ........................................ 23
References .......................................... 24