National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 邱日聖
Author (English): Jih-Sheng Chiu
Title (Chinese): 使用類神經網路改善非對稱式近似搜尋
Title (English): Improving Asymmetric Approximate Search through Neural Networks
Advisor: 邱志義
Advisor (English): Chih-Yi Chiu
Degree: Master's
Institution: National Chiayi University (國立嘉義大學)
Department: Graduate Institute of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical Engineering and Computer Science
Thesis type: Academic thesis
Year of publication: 2017
Graduation academic year: 106
Language: Chinese
Number of pages: 36
Keywords (Chinese): 近似最近鄰居搜尋, 類神經網路, 乘積量化, 二元嵌入, 非對稱距離
Keywords (English): Approximate Nearest Neighbor Search, Neural Networks, Product Quantization, Binary Embedding, Asymmetric Distance Computation
Usage statistics:
  • Cited by: 0
  • Views: 139
  • Rating:
  • Downloads: 10
  • Bookmarked: 0
Abstract (translated from Chinese): In recent years, advances in technology have produced ever-larger volumes of data, and searching such massive data is a difficult challenge. Because the data volume makes traditional linear search impractical, research has turned toward approximate search. Before approximate search, the data must first be clustered; at query time, the Euclidean distance between the query and the center of each cluster is computed, enough candidates are selected according to these distances, and a traditional nearest-neighbor search over the candidates produces the final results. However, in approximate search, using distance as the measure of relevance between the query and a cluster is not the best way to select candidates. We therefore propose using the density of query-relevant data in each cluster as the relevance between the query and that cluster when selecting candidates. In practice, this density cannot be observed directly, so we use a neural network as a model to adjust the distance between the query and each cluster center so that it approximates the density value, thereby improving the quality of the selected candidates. In our experiments, the quality of the selected candidates improves by a noticeable margin.
Abstract (English): Due to advances in information technology, we have to deal with ever-growing digital data. Traditional linear search becomes impractical because of the large amount of data, so many researchers have turned to developing approximate search methods. Before approximate search, the data must be clustered. In the search process, we compute the Euclidean distance between the query and each cluster center, and then pick enough candidates according to those distances. However, the distance-based approach is not always the best way to pick candidates. In this study, we propose employing neural networks to optimize the relevance between the query and each cluster center so that the candidate quality can be further improved. Experimental results show that the proposed method achieves satisfactory accuracy compared with our past work.
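
To make the pipeline in the abstract concrete, the sketch below illustrates both stages under stated assumptions: the baseline that ranks clusters by the Euclidean distance between the query and each cluster center, and a hypothetical learned variant in which a small neural network maps those distances to density-like relevance scores, so clusters are probed by predicted relevance instead. This is not the thesis implementation; the toy data, the cluster count K, the probe count N_PROBE, and especially the label construction (the fraction of a training query's true top-K neighbors that fall in each cluster) are illustrative assumptions.

```python
# Minimal sketch, not the thesis code: baseline distance-based cluster probing
# versus a learned, density-like cluster relevance score.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
database = rng.normal(size=(10_000, 64)).astype(np.float32)  # toy descriptors
K, N_PROBE, TOP_K = 64, 4, 10                                # assumed settings

kmeans = KMeans(n_clusters=K, n_init=10, random_state=0).fit(database)
centers, labels = kmeans.cluster_centers_, kmeans.labels_

def probe_and_search(query, cluster_scores):
    """Probe the N_PROBE highest-scoring clusters, then search them exactly."""
    probed = np.argsort(-cluster_scores)[:N_PROBE]
    cand = np.flatnonzero(np.isin(labels, probed))
    d = np.linalg.norm(database[cand] - query, axis=1)
    return cand[np.argsort(d)[:TOP_K]]

# --- Baseline: cluster score = negative distance to the cluster center ------
query = rng.normal(size=(64,)).astype(np.float32)
dist_to_centers = np.linalg.norm(centers - query, axis=1)
baseline_result = probe_and_search(query, -dist_to_centers)

# --- Learned variant (illustrative label construction, an assumption) -------
# For each training query, a cluster's "density" label is the fraction of the
# query's true top-K neighbors that fall in that cluster; the network maps the
# query-to-center distances to these K density values.
train_queries = rng.normal(size=(500, 64)).astype(np.float32)
X_train = np.linalg.norm(train_queries[:, None, :] - centers[None, :, :], axis=2)
Y_train = np.zeros((len(train_queries), K), dtype=np.float32)
for i, q in enumerate(train_queries):
    true_top = np.argsort(np.linalg.norm(database - q, axis=1))[:TOP_K]
    ids, counts = np.unique(labels[true_top], return_counts=True)
    Y_train[i, ids] = counts / TOP_K

model = MLPRegressor(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
model.fit(X_train, Y_train)

predicted_density = model.predict(dist_to_centers.reshape(1, -1))[0]
learned_result = probe_and_search(query, predicted_density)
print("baseline top-10 ids:", baseline_result)
print("learned  top-10 ids:", learned_result)
```

In both cases only the vectors inside the probed clusters are compared exactly against the query; candidate quality therefore depends entirely on how well the cluster scores rank the clusters that actually contain the query's true neighbors, which is the gap the learned relevance score is meant to close.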
Table of Contents:
Chapter 1: Introduction
Chapter 2: Related Work
2.1 Binary Embedding
2.2 Asymmetric Distance and Product Quantization
Chapter 3: Methodology
3.1 Background
3.1.1 Binary Embedding
3.1.2 Asymmetric Distance
3.1.3 Product Quantization
3.1.4 Approximate Asymmetric K-Nearest Neighbor Search (AAKNN)
3.2 Model Construction
3.3 Cluster Center Computation
3.4 Training Data and Label Generation
3.5 Applying the Model to Approximate Asymmetric Nearest Neighbor Search
Chapter 4: Experiments
4.1 Experimental Design
4.2 Experimental Results
Chapter 5: Conclusion and Future Work
References