A. Introduction
[1] V. Bruce, “Influences of familiarity on the processing of faces,” Perception, vol. 15, pp. 387-397, 1986.
[2] H. D. Ellis, J. W. Shepherd, G. M. Davies, “Identification of familiar and unfamiliar faces from internal and external features: Some implications for theories of face recognition,” Perception, vol. 8, no. 4, pp. 431-439, 1979.
[3] D. Marr, E. Hildreth, “Theory of edge detection,” Proc. Roy. Soc. London, vol. B207, pp. 187-217, 1980.
[4] D. Marr, T. Poggio, “A computational theory of human stereo vision,” Proc. Roy. Soc. London, vol. B204, pp. 301-328, 1979.
[5] M. A. Turk, A. P. Pentland, “Face recognition using eigenfaces,” Proc. Int’l Conf. Pattern Recognition, pp. 586-591, 1991.
[6] M. Kirby, L. Sirovich, “Application of the Karhunen-Loeve procedure for the characterization of human faces,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 12, pp. 103-108, 1990.
[7] R. Chellappa, C. L. Wilson, S. Sirohey, “Human and machine recognition of faces: A survey,” Proc. IEEE, vol. 83, no. 5, pp. 705-740, 1995.
[8] L. Bottou, “Large-scale machine learning with stochastic gradient descent,” Proc. 19th Int’l Conf. Computational Statistics, 2010.
[9] D. P. Bertsekas, Nonlinear Programming, Belmont, MA, USA: Athena Scientific, 1995.

B. Statistics and data modeling
[10] C. Eckart, G. Young, “The approximation of one matrix by another of lower rank,” Psychometrika, vol. 1, no. 3, pp. 211-218, 1936.
[11] R. Vidal, Y. Ma, S. S. Sastry, Generalized Principal Component Analysis, Springer, New York, 2016.
[12] M. E. Tipping, C. M. Bishop, “Probabilistic principal component analysis,” J. Royal Statistical Soc. B, vol. 61, no. 3, pp. 611-622, 1999.
[13] J. Yang, D. Zhang, A. F. Frangi, J. Y. Yang, “Two-dimensional PCA: A new approach to appearance-based face representation and recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 1, pp. 131-137, 2004.
[14] B. Schölkopf, A. Smola, K.-R. Müller, “Kernel principal component analysis,” Proc. Int’l Conf. Artificial Neural Networks, pp. 583-588, 1997.
[15] K. P. Murphy, Machine Learning: A Probabilistic Perspective, MIT Press, Cambridge, MA, 2012.
[16] S. Shalev-Shwartz, S. Ben-David, Understanding Machine Learning: From Theory to Algorithms, Cambridge University Press, 2014.
[17] L. Valiant, “A theory of the learnable,” Commun. ACM, vol. 27, pp. 1134-1142, Nov. 1984.
[18] V. N. Vapnik, “An overview of statistical learning theory,” IEEE Trans. Neural Networks, vol. 10, pp. 988-999, Sept. 1999.
[19] A. Blumer, A. Ehrenfeucht, D. Haussler, M. K. Warmuth, “Learnability and the Vapnik-Chervonenkis dimension,” J. ACM, vol. 36, no. 4, pp. 929-965, 1989.
[20] C. M. Bishop, Pattern Recognition and Machine Learning, Springer, New York, 2007.
[21] J. Friedman, R. Tibshirani, T. Hastie, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer, New York, 2009.
[22] C. Cortes, V. Vapnik, “Support-vector networks,” Mach. Learn., vol. 20, no. 3, pp. 273-297, 1995.
[23] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, “Generative adversarial nets,” Proc. Adv. Neural Inf. Process. Syst., pp. 2672-2680, 2014.
[24] P. N. Belhumeur, J. P. Hespanha, D. J. Kriegman, “Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, pp. 711-720, 1997.
[25] J. B. Kruskal, “Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis,” Psychometrika, vol. 29, no. 1, pp. 1-27, Mar. 1964.
[26] S. T. Roweis, L. K. Saul, “Nonlinear dimensionality reduction by locally linear embedding,” Science, vol. 290, pp. 2323-2326, Dec. 2000.
[27] M. Belkin, P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” Neural Computation, vol. 15, no. 6, pp. 1373-1396, 2003.

C. Compressive Sensing and Dictionary Learning
[28] R. van Handel, “Probability in high dimension,” Princeton University, Jun. 2014.
[29] D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289-1306, 2006.
[30] R. Vershynin, “High-dimensional probability,” 2016, to be published.
[31] S. Mallat, Z. Zhang, “Matching pursuits with time-frequency dictionaries,” IEEE Trans. Signal Process., vol. 41, no. 12, pp. 3397-3415, 1993.
[32] G. Davis, S. Mallat, Z. Zhang, “Adaptive time-frequency decompositions,” Opt. Eng., vol. 33, no. 7, pp. 2183-2191, 1994.
[33] W. J. Fu, “Penalized regressions: The bridge versus the lasso,” J. Comput. Graph. Statist., vol. 7, pp. 397-416, 1998.
[34] B. Efron, T. Hastie, I. Johnstone, R. Tibshirani, “Least angle regression,” Ann. Statist., vol. 32, no. 2, pp. 407-499, 2004.
[35] A. Lee, F. Caron, A. Doucet, C. Holmes, “A hierarchical Bayesian framework for constructing sparsity-inducing priors,” Technical Report, University of Oxford, UK, pp. 1-18, 2010.
[36] D. Andrews, C. Mallows, “Scale mixtures of normal distributions,” J. R. Statist. Soc. B, vol. 36, pp. 99-102, 1974.
[37] A. Armagan, D. B. Dunson, J. Lee, “Generalized double Pareto shrinkage,” Technical Report, Duke University, 2011.
[38] J. Griffin, P. Brown, “Bayesian adaptive lassos with non-convex penalization,” Technical Report, University of Warwick, 2007.
[39] M. Figueiredo, “Adaptive sparseness for supervised learning,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, pp. 1150-1159, 2003.
[40] E. Candes, M. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag., vol. 25, no. 2, pp. 21-30, Mar. 2008.
[41] J. Cai, E. Candes, Z. Shen, “A singular value thresholding algorithm for matrix completion,” SIAM J. Optim., vol. 20, no. 4, pp. 1956-1982, 2010.
[42] J. Wright, Y. Peng, Y. Ma, A. Ganesh, S. Rao, “Robust principal component analysis: Exact recovery of corrupted low-rank matrices by convex optimization,” Proc. Adv. Neural Inf. Process. Syst., 2009.
[43] E. Candes, X. Li, Y. Ma, J. Wright, “Robust principal component analysis?,” J. ACM, vol. 58, no. 3, article 11, May 2011.
[44] E. Elhamifar, R. Vidal, “Sparse subspace clustering,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 2790-2797, 2009.
[45] G. Liu, Z. Lin, Y. Yu, “Robust subspace segmentation by low-rank representation,” Proc. Int’l Conf. Mach. Learn., pp. 663-670, 2010.
[46] G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, Y. Ma, “Robust recovery of subspace structures by low-rank representation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 1, pp. 171-184, Jan. 2013.

D. Convex Optimization
[47] S. Boyd, L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[48] D. Bertsekas, Convex Optimization Algorithms, Belmont, MA, USA: Athena Scientific, 2015.
[49] Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, Norwell, MA: Kluwer, 2004.
[50] N. Parikh, S. Boyd, “Proximal algorithms,” Found. Trends Optim., vol. 1, no. 3, pp. 123-231, 2013.
[51] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn., vol. 3, no. 1, pp. 1-122, 2011.
[52] A. Beck, M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci., vol. 2, no. 1, pp. 183-202, 2009.
[53] T. Lin, S. Ma, S. Zhang, “On the global linear convergence of the ADMM with multiblock variables,” SIAM J. Optim., vol. 25, no. 3, pp. 1478-1497, 2015.
[54] Y. Wang, W. Yin, J. Zeng, “Global convergence of ADMM in nonconvex nonsmooth optimization,” 2015, [online] Available: https://arxiv.org/abs/1511.06324.
[55] M. Hong, Z.-Q. Luo, M. Razaviyayn, “Convergence analysis of alternating direction method of multipliers for a family of nonconvex problems,” Proc. IEEE Int. Conf. Acoust. Speech Signal Process., pp. 1-5, 2015.

E. OPSRC related works
[56] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, Y. Ma, “Robust face recognition via sparse representation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 2, pp. 210-227, Feb. 2009.
[57] L. Zhang, M. Yang, X. Feng, “Sparse representation or collaborative representation: Which helps face recognition?,” Proc. IEEE Int’l Conf. Computer Vision, pp. 471-478, 2011.
[58] P. J. Huber, “Robust estimation of a location parameter,” Ann. Math. Statist., vol. 35, no. 1, pp. 73-101, 1964.
[59] P. J. Huber, E. Ronchetti, Robust Statistics, New York, NY, USA: Wiley, 2009.
[60] R. He, W. S. Zheng, B. G. Hu, X. W. Kong, “A regularized correntropy framework for robust pattern recognition,” Neural Computation, vol. 23, no. 8, pp. 2074-2100, 2011.
[61] R. He, W. S. Zheng, T. Tan, Z. Sun, “Half-quadratic-based iterative minimization for robust sparse representation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 36, no. 2, pp. 261-275, 2014.
[62] X. T. Yuan, B. G. Hu, “Robust feature extraction via information theoretic learning,” Proc. Int’l Conf. Mach. Learn., pp. 1193-1200, 2009.
[63] R. He, W. S. Zheng, B. G. Hu, “Maximum correntropy criterion for robust face recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 8, pp. 1561-1576, 2011.
[64] M. Nikolova, M. K. Ng, “Analysis of half-quadratic minimization methods for signal and image recovery,” SIAM J. Scientific Computing, vol. 27, no. 3, pp. 937-966, 2005.
[65] A. Y. Yang, S. S. Sastry, A. Ganesh, Y. Ma, “Fast L1-minimization algorithms and an application in robust face recognition: A review,” Proc. Int’l Conf. Image Process., pp. 1849-1852, 2010.
[66] R. L. Hsu, M. Abdel-Mottaleb, A. K. Jain, “Face detection in color images,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 5, pp. 696-706, 2002.
[67] A. S. Georghiades, P. N. Belhumeur, D. J. Kriegman, “From few to many: Illumination cone models for face recognition under variable lighting and pose,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 6, pp. 643-660, June 2001.
[68] A. M. Martinez, A. C. Kak, “PCA versus LDA,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 2, pp. 228-233, 2001.

F. GD-HASLR related works
[69] G. Hua, M. H. Yang, E. G. Learned-Miller, Y. Ma, M. Turk, D. J. Kriegman, T. S. Huang, “Introduction to the special section on real-world face recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 10, pp. 1921-1924, Oct. 2011.
[70] H. Jia, A. M. Martinez, “Support vector machines in face recognition with occlusions,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 136-141, 2009.
[71] Z. Zhou, A. Wagner, J. Wright, H. Mobahi, Y. Ma, “Face recognition with contiguous occlusion using Markov random fields,” Proc. IEEE Int’l Conf. Computer Vision, pp. 1050-1057, 2009.
[72] X. X. Li, D. Q. Dai, X. F. Zhang, C. X. Ren, “Structured sparse error coding for face recognition with occlusion,” IEEE Trans. Image Process., vol. 22, no. 5, pp. 1889-1900, May 2013.
[73] R. Liang, X. X. Li, “Mixed error coding for face recognition with mixed occlusions,” Proc. Int’l Joint Conf. Artificial Intelligence, pp. 3657-3663, 2015.
[74] S. Cai, L. Zhang, W. Zuo, X. Feng, “A probabilistic collaborative representation based approach for pattern classification,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2016.
[75] M. Yang, L. Zhang, J. Yang, D. Zhang, “Robust sparse coding for face recognition,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 625-632, 2011.
[76] M. Yang, L. Zhang, J. Yang, D. Zhang, “Regularized robust coding for face recognition,” IEEE Trans. Image Process., vol. 22, no. 5, pp. 1753-1766, May 2013.
[77] M. Iliadis, H. Wang, R. Molina, A. K. Katsaggelos, “Robust and low-rank representation for fast face identification with occlusions,” arXiv preprint arXiv:1605.02266, 2016.
[78] D. Zhang, Y. Hu, J. Ye, X. Li, X. He, “Matrix completion by truncated nuclear norm regularization,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 2192-2199, 2012.
[79] E. Candes, B. Recht, “Exact matrix completion via convex optimization,” Found. Comput. Math., vol. 9, pp. 717-772, 2009.
[80] E. Elhamifar, R. Vidal, “Sparse subspace clustering: Algorithm, theory, and applications,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 11, pp. 2765-2781, Nov. 2013.
[81] G. Liu, H. Xu, S. Yan, “Exact subspace segmentation and outlier detection by low-rank representation,” Proc. Int’l Conf. Artificial Intelligence and Statistics, pp. 703-711, 2012.
[82] L. Ma, C. Wang, B. Xiao, W. Zhou, “Sparse representation for face recognition based on discriminative low-rank dictionary learning,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 2586-2593, 2012.
[83] C. P. Wei, C. F. Chen, Y. C. F. Wang, “Robust face recognition with structurally incoherent low-rank matrix decomposition,” IEEE Trans. Image Process., vol. 23, no. 8, pp. 3294-3307, Aug. 2014.
[84] J. Qian, L. Luo, J. Yang, F. Zhang, Z. Lin, “Robust nuclear norm regularized regression for face recognition with occlusion,” Pattern Recognition, vol. 48, no. 10, pp. 3145-3159, Oct. 2015.
[85] R. C. Gonzalez, R. E. Woods, Digital Image Processing, 3rd ed., New Delhi: Pearson, 2008.
[86] X. Xiang, M. Dao, G. D. Hager, T. D. Tran, “Hierarchical sparse and collaborative low-rank representation for emotion recognition,” Proc. IEEE Int’l Conf. Acoustics, Speech and Signal Processing, pp. 3811-3815, 2015.
[87] A. Lee, F. Caron, A. Doucet, C. Holmes, “Bayesian sparsity-path-analysis of genetic association signal using generalized t priors,” Statistical Applications in Genetics and Molecular Biology, vol. 11, no. 2, pp. 1-29, 2012.
[88] Y. Taigman, M. Yang, M. A. Ranzato, L. Wolf, “DeepFace: Closing the gap to human-level performance in face verification,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 1701-1708, 2014.
[89] F. Schroff, D. Kalenichenko, J. Philbin, “FaceNet: A unified embedding for face recognition and clustering,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 815-823, 2015.
[90] O. M. Parkhi, A. Vedaldi, A. Zisserman, “Deep face recognition,” Proc. British Machine Vision Conference, pp. 1-12, 2015.
[91] G. B. Huang, M. Ramesh, T. Berg, E. Learned-Miller, “Labeled faces in the wild: A database for studying face recognition in unconstrained environments,” Technical Report 07-49, University of Massachusetts, Amherst, 2007.
[92] L. Wolf, T. Hassner, I. Maoz, “Face recognition in unconstrained videos with matched background similarity,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 529-534, 2011.
[93] D. G. Lowe, “Object recognition from local scale-invariant features,” Proc. Int’l Conf. Computer Vision, pp. 1150-1157, 1999.
[94] G. Tzimiropoulos, S. Zafeiriou, M. Pantic, “Sparse representations of image gradient orientations for visual recognition and tracking,” Proc. IEEE Conf. Computer Vision and Pattern Recognition Workshop, pp. 26-33, 2012.
[95] O. E. Barndorff-Nielsen, “Normal inverse Gaussian distributions and stochastic volatility modeling,” Scand. J. Stat., vol. 24, pp. 1-13, Mar. 1997.
[96] K. Simonyan, A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” Proc. Int’l Conf. Learning Representations, pp. 1-14, 2015.
[97] M. M. Ghazi, H. K. Ekenel, “A comprehensive analysis of deep learning based representation for face recognition,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 102-109, 2016.
[98] X. Wu, R. He, Z. Sun, “A lightened CNN for deep face representation,” arXiv preprint arXiv:1511.02683, 2015.
[99] X. Wu, “Learning robust deep face representation,” arXiv preprint arXiv:1507.04844, 2015.
[100] A. Krizhevsky, I. Sutskever, G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Proc. Adv. Neural Inf. Process. Syst., pp. 1106-1114, 2012.
[101] M. Lin, Q. Chen, S. Yan, “Network in network,” arXiv preprint arXiv:1312.4400, 2013.
[102] D. L. Donoho, “De-noising by soft-thresholding,” IEEE Trans. Inf. Theory, vol. 41, no. 3, pp. 613-627, Mar. 1995.
[103] T. Sim, S. Baker, M. Bsat, “The CMU pose, illumination, and expression (PIE) database,” Proc. IEEE Int’l Conf. Automatic Face and Gesture Recognition, pp. 46-51, 2002.

G. Nonconvex sparse and low-rank model related work
[104] R. Basri, D. Jacobs, “Lambertian reflectance and linear subspaces,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 3, pp. 218-233, Mar. 2003.
[105] C. Eckart, G. Young, “The approximation of one matrix by another of lower rank,” Psychometrika, vol. 1, no. 3, pp. 211-218, 1936.
[106] Z. Lin, R. Liu, Z. Su, “Linearized alternating direction method with adaptive penalty for low-rank representation,” Proc. Adv. Neural Inf. Process. Syst., pp. 612-620, 2011.
[107] X. Zhong, L. Xu, Y. Li, Z. Liu, E. Chen, “A nonconvex relaxation approach for rank minimization problems,” Proc. AAAI Conf. Artif. Intell., pp. 1980-1987, 2015.
[108] I. E. Frank, J. H. Friedman, “A statistical view of some chemometrics regression tools,” Technometrics, vol. 35, no. 2, pp. 109-135, 1993.
[109] J. Trzasko, A. Manduca, “Highly undersampled magnetic resonance image reconstruction via homotopic l0-minimization,” IEEE Trans. Med. Imag., vol. 28, no. 1, pp. 106-121, Jan. 2009.
[110] C. H. Zhang, “Nearly unbiased variable selection under minimax concave penalty,” Ann. Statist., vol. 38, no. 2, pp. 894-942, Apr. 2010.
[111] T. Zhang, “Analysis of multi-stage convex relaxation for sparse regularization,” J. Mach. Learn. Res., vol. 11, pp. 1081-1107, Jan. 2010.
[112] J. Fan, R. Li, “Variable selection via nonconcave penalized likelihood and its oracle properties,” J. Amer. Statist. Assoc., vol. 96, no. 456, pp. 1348-1360, 2001.
[113] C. Gao, N. Wang, Q. Yu, Z. Zhang, “A feasible nonconvex relaxation approach to feature selection,” Proc. AAAI Conf. Artif. Intell., pp. 356-361, 2011.
[114] D. Bertsekas, Convex Optimization Algorithms, Belmont, MA, USA: Athena Scientific, 2015.
[115] C. Lu, C. Zhu, C. Xu, S. Yan, Z. Lin, “Generalized singular value thresholding,” Proc. AAAI Conf. Artif. Intell., pp. 1805-1811, 2015.
[116] C. Lu, J. Tang, S. Yan, Z. Lin, “Nonconvex nonsmooth low rank minimization via iteratively reweighted nuclear norm,” IEEE Trans. Image Process., vol. 25, no. 2, pp. 829-839, Feb. 2016.
[117] P. Gong, C. Zhang, Z. Lu, J. Huang, J. Ye, “A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems,” Proc. Int’l Conf. Mach. Learn., pp. 37-45, 2013.
[118] Y. Hu, D. Zhang, J. Ye, X. Li, X. He, “Fast and accurate matrix completion via truncated nuclear norm regularization,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 9, pp. 2117-2130, Sep. 2013.
[119] Q. Liu, Z. Lai, Z. Zhou, F. Kuang, Z. Jin, “A truncated nuclear norm regularization method based on weighted residual error for matrix completion,” IEEE Trans. Image Process., vol. 25, no. 1, pp. 316-330, Jan. 2016.
[120] D. Gabay, B. Mercier, “A dual algorithm for the solution of nonlinear variational problems via finite element approximation,” Comput. Math. Appl., vol. 2, pp. 17-40, 1976.
[121] Z. Lin, M. Chen, L. Wu, Y. Ma, “The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices,” UIUC Technical Report, Aug. 2009.
[122] J. Bolte, S. Sabach, M. Teboulle, “Proximal alternating linearized minimization for nonconvex and nonsmooth problems,” Math. Program., pp. 460-494, Jul. 2013.
[123] S. Ji, J. Ye, “An accelerated gradient method for trace norm minimization,” Proc. 26th Int’l Conf. Mach. Learn., pp. 457-464, 2009.
[124] C. J. Hsieh, P. Olsen, “Nuclear norm minimization via active subspace selection,” Proc. 31st Int’l Conf. Mach. Learn., pp. 575-583, 2014.
[125] J. Feng, H. Xu, S. Yan, “Online robust PCA via stochastic optimization,” Proc. Adv. Neural Inf. Process. Syst., pp. 404-412, 2013.
[126] Q. Zhao, D. Meng, Z. Xu, W. Zuo, L. Zhang, “Robust principal component analysis with complex noise,” Proc. 31st Int’l Conf. Mach. Learn., pp. 55-63, 2014.
[127] S. D. Babacan, M. Luessi, R. Molina, A. K. Katsaggelos, “Sparse Bayesian methods for low-rank matrix estimation,” IEEE Trans. Signal Process., vol. 60, no. 8, pp. 3964-3977, Aug. 2012.
[128] F.-F. Li, R. Fergus, P. Perona, “Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories,” Proc. Conf. Computer Vision and Pattern Recognition, pp. 178-188, 2004.
[129] Z. Lin, R. Liu, Z. Su, “Linearized alternating direction method with adaptive penalty for low-rank representation,” Proc. Adv. Neural Inf. Process. Syst., pp. 612-620, 2011.
[130] J. Shi, J. Malik, “Normalized cuts and image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 8, pp. 888-905, Aug. 2000.
[131] S. Xiao, M. Tan, D. Xu, “Weighted block-sparse low rank representation for face clustering in videos,” Proc. Eur. Conf. Comput. Vis., pp. 123-138, Sep. 2014.
[132] N. Halko, P. Martinsson, J. Tropp, “Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions,” SIAM Rev., vol. 53, no. 2, pp. 217-288, 2011.
[133] J. Costeira, T. Kanade, “A multibody factorization method for independently moving objects,” Int’l J. Computer Vision, vol. 29, no. 3, pp. 159-179, 1998.