臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)
Author: 張大衛 (Da-Wei Chang)
Title (Chinese): 整合多重隨機奇異值分解與理論分析
Title (English): Theoretical and Performance Analysis for Integrated Randomized Singular Value Decomposition
Advisor: 王偉仲
Oral defense date: 2017-07-26
Degree: Master's
Institution: National Taiwan University (國立臺灣大學)
Department: Institute of Applied Mathematical Sciences
Discipline: Mathematics and Statistics
Field: Other Mathematics and Statistics
Thesis type: Academic thesis
Year of publication: 2017
Graduating academic year: 105 (2016–2017)
Language: English
Pages: 54
Keywords (Chinese): 數值線性代數、奇異值分解、隨機演算法、數值優化、維度降低
Keywords (English): Numerical Linear Algebra; Singular Value Decomposition; Randomized Algorithm; Numerical Optimization; Dimension Reduction
Usage statistics:
  • Cited by: 0
  • Views: 229
  • Rating: (none)
  • Downloads: 0
  • Bookmarked: 0
Dimension reduction and feature extraction are important techniques in the big-data era: they reduce the dimensionality of data and the computational cost of further analysis. Low-rank singular value decomposition (low-rank SVD) is a key ingredient of these techniques. To compute a low-rank SVD faster, several works propose randomized subspace-sketching algorithms that yield an approximate result (rSVD). In this research, we propose integrating the results of multiple randomized sketches to obtain a more accurate approximation, called the integrated singular value decomposition (iSVD). We analyze iSVD and its integration methods through theoretical analysis and numerical experiments. The integration scheme is a constrained optimization problem whose local maximizer is unique up to an orthogonal transformation. Line-search methods, Kolmogorov-Nagumo-type averaging, and reduction-type methods are introduced, and their theoretical background and computational complexity are analyzed. The similarities and differences between iSVD and rSVD with the same number of sketches are also explained and analyzed. Numerical experiments show that, on our test examples, the line-search method in iSVD converges in fewer iterations than its counterpart in rSVD. Moreover, using the integrated subspace from the reduction-type method as the initial value of the line search reduces the number of iterations needed to converge.
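The pipeline the abstract describes can be sketched in a few lines of NumPy. This is an illustrative outline, not the thesis code: `rsvd` follows the standard Gaussian-sketch-then-QR recipe for randomized SVD, and `integrate_reduction` is one plausible reading of a reduction-type integration (average the orthogonal projectors of the sketched subspaces and keep the leading eigenvectors); the function names and the oversampling parameter `p` are assumptions made for this sketch.

```python
# Illustrative sketch of rSVD and a reduction-type integration of
# multiple sketched subspaces. Not the thesis implementation.
import numpy as np

def sketch_subspace(A, k, p=5, rng=None):
    """One randomized sketch: an orthonormal basis for an approximate
    rank-(k+p) column space of A (Gaussian test matrix, then QR)."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))   # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormalize A @ Omega
    return Q

def rsvd(A, k, p=5, rng=None):
    """Rank-k randomized SVD from a single sketch."""
    Q = sketch_subspace(A, k, p, rng)
    B = Q.T @ A                               # project A onto the sketch
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k]

def integrate_reduction(sketches, k):
    """Reduction-type integration: average the orthogonal projectors
    Q_i Q_i^T of the sketched subspaces and keep the leading k
    eigenvectors of the average as the integrated subspace."""
    P = sum(Q @ Q.T for Q in sketches) / len(sketches)
    w, V = np.linalg.eigh(P)                  # eigenvalues in ascending order
    return V[:, -k:]                          # top-k eigenvectors

# Usage: integrate several sketches of a low-rank matrix and measure
# how well the integrated subspace captures the column space of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 100))
k = 10
sketches = [sketch_subspace(A, k, rng=seed) for seed in range(4)]
Q_int = integrate_reduction(sketches, k)
err = np.linalg.norm(A - Q_int @ (Q_int.T @ A)) / np.linalg.norm(A)
```

In this toy example `A` has exact rank 10, so each oversampled sketch already contains its column space and the integrated subspace recovers it to machine precision; on noisy data, the averaging is what improves the approximation over any single sketch.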
Thesis Committee Certification
Acknowledgements (Chinese)
Acknowledgements
Abstract (Chinese)
Abstract
1 Introduction
2 Overview of Integrated Singular Value Decomposition
3 Properties of Integrated Subspace
3.1 Solution of the Optimization Problem
3.2 Asymptotic Behavior of the Integrated Subspace
3.3 Uniqueness of Local Maximizer
4 Integration Method
4.1 Line Search Type Method
4.2 Kolmogorov-Nagumo-Type Average
4.3 Reduction-Type Average
5 Comparison of rSVD and iSVD
6 Numerical Experiment
6.1 Different Number of Sketched Subspaces
6.2 Comparison of KN and WY
6.3 Comparison of iSVD, rSVD and Reduction
7 Discussion and Conclusion
Bibliography