Fu, Yifan; Gao, Junbin; Sun, Yanfeng; Hong, Xia (2014)
Languages: English
Types: Unknown

Classified by OpenAIRE into

arxiv: Computer Science::Computation and Language (Computational Linguistics and Natural Language and Speech Processing), Computer Science::Computer Vision and Pattern Recognition
Traditional dictionary learning algorithms find a sparse representation of high-dimensional data by transforming samples into one-dimensional (1D) vectors. This 1D model loses the inherent spatial structure of the data. An alternative is to employ tensor decomposition for dictionary learning on the data in their original structural form, a tensor, by learning multiple dictionaries along each mode and the corresponding sparse representation with respect to the Kronecker product of these dictionaries. To learn tensor dictionaries along each mode, all existing methods update each dictionary iteratively in an alternating manner. Although atoms from each mode dictionary jointly contribute to the sparsity of the tensor, existing works ignore atom correlations between different mode dictionaries by treating each mode dictionary independently. In this paper, we propose a joint multiple dictionary learning method for tensor sparse coding, which exploits atom correlations in the sparse representation and updates multiple atoms from each mode dictionary simultaneously. In this algorithm, the Frequent-Pattern Tree (FP-tree) mining algorithm is employed to discover frequent atom patterns in the sparse representation. Inspired by the idea of K-SVD, we develop a new dictionary update method that jointly updates the elements in each pattern. Experimental results demonstrate that our method outperforms other tensor-based dictionary learning algorithms.
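The tensor formulation in the abstract rests on a standard identity: a 2D signal coded through per-mode dictionaries is equivalent, after column-major vectorization, to a 1D sparse code under the Kronecker product of those dictionaries. A minimal NumPy sketch (dictionary and code sizes are hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Mode dictionaries (hypothetical sizes): D1 acts on rows, D2 on columns.
D1 = rng.standard_normal((8, 12))   # mode-1 dictionary
D2 = rng.standard_normal((6, 10))   # mode-2 dictionary
S = rng.standard_normal((12, 10))   # (dense stand-in for a) sparse coefficient core

# Structured form: X = D1 @ S @ D2.T
X = D1 @ S @ D2.T

# Vectorized form: vec(X) = (D2 kron D1) vec(S), with column-major vec
vec = lambda M: M.reshape(-1, order="F")
X_vec = np.kron(D2, D1) @ vec(S)

# Both views of the signal coincide.
assert np.allclose(vec(X), X_vec)
```

This identity is what ties the per-mode dictionaries to the single Kronecker dictionary mentioned in the abstract; the paper's contribution is to update atoms across these mode dictionaries jointly rather than alternating over them one at a time.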

    • [1] M. Aharon, M. Elad, and A. Bruckstein. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. on Signal Processing, 54(11):4311-4322, 2006.
    • [2] E. Benetos and C. Kotropoulos. Non-negative tensor factorization applied to music genre classification. Audio, Speech, and Language Processing, IEEE Transactions on, 18(8):1955-1967, 2010.
    • [3] M. Blondel, K. Seki, and K. Uehara. Block coordinate descent algorithms for large-scale sparse multiclass classification. Machine Learning, 93(1):31-52, 2013.
    • [4] C. F. Caiafa and A. Cichocki. Computing sparse representations of multidimensional signals using kronecker bases. Neural Comput., 25(1):186-220, January 2013.
    • [5] M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. Image Processing, IEEE Transactions on, 15(12):3736-3745, 2006.
    • [6] K. Engan, S. Aase, and J.H. Husoy. Method of optimal directions for frame design. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), volume 1, pages 2443-2446, 1999.
    • [7] Y. Fang, J.J. Wu, and B.M. Huang. 2d sparse signal recovery via 2d orthogonal matching pursuit. Science China Information Sciences, 55:889-897, 2012.
    • [8] J. Han, J. Pei, Y. Yin, and R. Mao. Mining frequent patterns without candidate generation: A frequent-pattern tree approach. Data Mining and Knowledge Discovery, 8:53-87, 2004.
    • [9] T. Hazan, S. Polak, and A. Shashua. Sparse image coding using a 3d non-negative tensor factorization. In Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on, volume 1, pages 50-57 Vol. 1, 2005.
    • [10] Y. Kim and S. Choi. Nonnegative tucker decomposition. In Computer Vision and Pattern Recognition, 2007. CVPR '07. IEEE Conference on, pages 1-8, 2007.
    • [11] Y. Kim, A. Cichocki, and S. Choi. Nonnegative tucker decomposition with alpha-divergence. In ICASSP, pages 1829-1832. IEEE, 2008.
    • [12] T. Kolda and B. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455-500, 2009.
    • [13] S. Li. Non-negative sparse coding shrinkage for image denoising using normal inverse gaussian density model. Image Vision Comput., 26(8):1137-1147, August 2008.
    • [14] M. Mørup, L.K. Hansen, and S. M. Arnfred. Algorithms for sparse nonnegative tucker decompositions. Neural Comput., 20(8):2112-2131, August 2008.
    • [15] G. Peyré. Sparse Modeling of Textures. Journal of Mathematical Imaging and Vision, 34(1):17-31, May 2009.
    • [16] J. Portilla, V. Strela, M.J. Wainwright, and E.P. Simoncelli. Image denoising using scale mixtures of Gaussians in the wavelet domain. Image Processing, IEEE Transactions on, 12:1338-1351, 2003.
    • [17] N. Qi, Y. Shi, X. Sun, J. Wang, and B. Yin. Two dimensional synthesis sparse model. In Multimedia and Expo (ICME), 2013 IEEE International Conference on, pages 1-6, 2013.
    • [18] J. Yang, K. Yu, Y. Gong, and T. Huang. Linear spatial pyramid matching using sparse coding for image classification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
    • [19] S. Zubair and W. Wang. Tensor dictionary learning with sparse tucker decomposition. In Digital Signal Processing (DSP), 2013 18th International Conference on, pages 1-6, 2013.