Abstract
Estimating the missing entries of a matrix has attracted much attention owing to its wide range of applications, such as image inpainting and video denoising, which are usually formulated as low-rank matrix completion problems. It is common to use the nuclear norm as a surrogate for the rank operator, since it is the tightest convex lower bound of the rank operator under certain conditions. However, most approaches based on nuclear norm minimization involve repeated singular value decomposition (SVD) operations. For a matrix X ∈ R^{m×n} with m ≥ n, the time complexity of an SVD is O(mn²), which imposes a prohibitive computational burden on large-scale matrices and limits the use of these methods in real applications. Motivated by this observation, a series of atom-decomposition-based matrix completion methods have been studied. The key idea of these methods is to reconstruct the target matrix greedily with pursuit methods, which require only the computation of the top singular pair and therefore enjoy a large efficiency advantage over SVD-based matrix completion methods. However, owing to progressively accumulating errors, atom-decomposition-based methods usually yield unsatisfactory reconstruction accuracy. In this article, we propose a new efficient and scalable atom decomposition algorithm for matrix completion called the Adaptive Basis Selection Strategy (ABSS). Unlike traditional greedy atom decomposition methods, it uses a two-phase strategy that generates the bases separately, via different strategies suited to their different natures. First, we globally prune the basis space to eliminate unimportant bases as far as possible and to locate the subspace likely to contain the most informative ones. Then, another group of bases is learned from local information to improve the recovery accuracy.
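The efficiency gap described above can be made concrete: greedy atom-decomposition methods need only the leading singular pair, which power iteration recovers in O(mn) work per sweep, versus O(mn²) for a full SVD. A minimal sketch (the function name and test matrix are illustrative, not from the paper):

```python
import numpy as np

def top_singular_pair(X, n_iter=300, seed=0):
    """Leading singular triplet (u, s, v) of X via power iteration
    on X^T X: O(mn) work per iteration, versus O(m n^2) for a full
    SVD when m >= n."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(X.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = X.T @ (X @ v)          # one multiplication by X^T X
        v /= np.linalg.norm(v)
    u = X @ v
    s = np.linalg.norm(u)          # estimate of the top singular value
    return u / s, s, v

# A 50 x 30 matrix with a clear spectral gap: rank-one signal plus noise.
rng = np.random.default_rng(1)
u0 = rng.standard_normal(50); u0 /= np.linalg.norm(u0)
v0 = rng.standard_normal(30); v0 /= np.linalg.norm(v0)
A = 5.0 * np.outer(u0, v0) + 0.05 * rng.standard_normal((50, 30))

u, s, v = top_singular_pair(A)
```

On this example the estimate agrees with a full SVD to machine precision; at scale one would use a Lanczos-based routine such as `scipy.sparse.linalg.svds` rather than plain power iteration.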
In this way, the proposed algorithm breaks through the accuracy bottleneck of traditional atom-decomposition-based matrix completion methods while retaining their innate efficiency advantage over SVD-based methods. We empirically evaluate ABSS on real visual image data and large-scale recommendation datasets. The results show that ABSS achieves much better reconstruction accuracy at a cost comparable to that of atom-decomposition-based methods. At the same time, it matches or exceeds the reconstruction accuracy of state-of-the-art SVD-based matrix completion algorithms with enormous efficiency advantages.
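As a reference point for the greedy baseline the abstract improves on, here is a hedged sketch of rank-one matrix pursuit for completion (the baseline atom-decomposition scheme, not the ABSS algorithm itself; all names and parameters are illustrative): each step adds the top singular pair of the observed-entry residual as a rank-one atom, then refits all atom weights by least squares on the observed entries.

```python
import numpy as np

def rank_one_pursuit(M, mask, n_atoms=10):
    """Greedy rank-one matrix pursuit: atoms are top singular pairs
    of the observed-entry residual; weights are jointly refit by
    least squares on the observed entries after each addition."""
    X = np.zeros_like(M)
    atoms = []                       # rank-one atoms u v^T
    basis = []                       # masked, vectorized atoms
    for _ in range(n_atoms):
        R = mask * (M - X)           # residual on observed entries only
        u, s, vt = np.linalg.svd(R, full_matrices=False)  # top pair; truncated SVD at scale
        a = np.outer(u[:, 0], vt[0])
        atoms.append(a)
        basis.append((mask * a).ravel())
        # Jointly refit the weights of all atoms on the observed entries.
        theta, *_ = np.linalg.lstsq(np.stack(basis, axis=1),
                                    (mask * M).ravel(), rcond=None)
        X = sum(t * a for t, a in zip(theta, atoms))
    return X

# Demo: fit a rank-2 matrix from 60% of its entries.
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
mask = (rng.random((30, 20)) < 0.6).astype(float)
X = rank_one_pursuit(M, mask, n_atoms=10)
rel_err = np.linalg.norm(mask * (M - X)) / np.linalg.norm(mask * M)
```

The refitting step keeps the atom weights consistent, but errors in the greedily chosen atoms still accumulate across iterations, which is exactly the accuracy bottleneck that ABSS's two-phase basis selection is designed to address.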
Atom Decomposition with Adaptive Basis Selection Strategy for Matrix Completion