Volume 9, Issue 4, August 2020, Pages: 64-73
An Accelerated Accuracy-enhanced Randomized Singular Value Decomposition for Factorizing Matrices with Low-rank Structure
Joseph Roger Arhin, School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, China
Francis Sam, School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, China
Kenneth Coker, School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
Toufic Seini, School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, China
Received: Jun. 24, 2020;       Accepted: Jul. 13, 2020;       Published: Jul. 28, 2020
DOI: 10.11648/j.pamj.20200904.11
Abstract
Big data has in recent years gained ground in many scientific and engineering problems. Handling such large-scale data matrices is, to some extent, prohibitive for traditional matrix decomposition methods (e.g., QR, SVD, EVD). Many researchers have therefore developed algorithms to decompose such big data matrices. An accuracy-enhanced randomized singular value decomposition method with orthonormalization (referred to as AE-RSVDM) has recently become the state of the art for factorizing large data matrices with satisfactory speed and accuracy. In this paper, we study randomization-based low-rank matrix approximations, with emphasis on accelerating the computation on large data matrices. To this end, we accelerate the AE-RSVDM with a modified normalized power iteration, yielding an accelerated variant. The accelerated variant is grounded on a two-stage scheme. The first stage finds the range of a sketch matrix formed with a Gaussian random matrix; a low-dimensional space is then created from the high-dimensional data matrix via power iteration. Numerical experiments on matrices of different sizes demonstrate that our accelerated variant achieves speedups while attaining the same reconstruction error as the AE-RSVDM with orthonormalization. Using data from the Google Art Project, we further demonstrate the computational speed-up of the accelerated variant over the AE-RSVDM algorithm for decomposing large data matrices with low-rank structure.
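The two-stage scheme described in the abstract follows the general randomized SVD framework: sketch the range of the matrix with a Gaussian test matrix, stabilize with orthonormalized power iterations, then take the SVD of a small projected matrix. The following is a minimal NumPy sketch of that general framework, not the authors' exact AE-RSVDM; the target rank `k`, oversampling `p`, and power-iteration count `q` are illustrative parameters.

```python
import numpy as np

def randomized_svd(A, k, p=10, q=2):
    """Two-stage randomized SVD sketch.

    Stage 1: find an orthonormal basis Q for the approximate range of A
    using a Gaussian sketch, refined by normalized power iterations.
    Stage 2: compute the SVD of the small projected matrix B = Q^T A.
    """
    m, n = A.shape
    # Stage 1: Gaussian random test matrix with oversampling p.
    Omega = np.random.standard_normal((n, k + p))
    Y = A @ Omega                      # sketch of the range of A
    Q, _ = np.linalg.qr(Y)             # orthonormal basis for the sketch
    # Normalized power iterations: re-orthonormalize between applications
    # of A and A^T to avoid loss of accuracy from rounding.
    for _ in range(q):
        Z, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Z)
    # Stage 2: SVD of the small (k+p) x n projected matrix.
    B = Q.T @ A
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_small                    # lift left singular vectors back up
    return U[:, :k], s[:k], Vt[:k, :]
```

For a matrix with genuinely low-rank structure, this recovers the dominant singular triplets at the cost of a few matrix-matrix products and QR factorizations on tall, thin matrices, rather than a full SVD of `A`.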
Keywords
Low-rank, Singular Value Decomposition (SVD), Accelerated AE-RSVDM, Orthonormalization, Power Iteration
To cite this article
Joseph Roger Arhin, Francis Sam, Kenneth Coker, Toufic Seini, An Accelerated Accuracy-enhanced Randomized Singular Value Decomposition for Factorizing Matrices with Low-rank Structure, Pure and Applied Mathematics Journal. Vol. 9, No. 4, 2020, pp. 64-73. doi: 10.11648/j.pamj.20200904.11
Copyright
Copyright © 2020 Authors retain the copyright of this article.
This article is an open access article distributed under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/) which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.