A number of ill-posed inverse problems in signal processing, such as blind deconvolution, matrix factorization, dictionary learning, and blind source separation, share the common characteristic of being bilinear inverse problems (BIPs): the observation model is a function of two variables and, conditioned on one variable being known, the observation is a linear function of the other. A key issue that arises for such inverse problems is identifiability, i.e., whether the observation is sufficient to unambiguously determine the pair of inputs that generated it. Identifiability is a central concern for applications such as blind equalization in wireless communications and data mining in machine learning. Herein, a unifying and flexible approach to identifiability analysis for BIPs with general conic prior constraints is presented, exploiting a connection to low-rank matrix recovery via lifting. We develop deterministic identifiability conditions on the input signals and examine their satisfiability in practice for three classes of signal distributions, viz. dependent but uncorrelated, independent Gaussian, and independent Bernoulli. In each case, scaling laws are developed that trade off the probability of robust identifiability against the complexity of the rank-two null space. An added appeal of our approach is that the rank-two null space can be partly or fully characterized for many bilinear problems of interest (e.g., blind deconvolution). We present numerical experiments involving variations on the blind deconvolution problem that exploit a characterization of the rank-two null space and demonstrate that the scaling laws offer good estimates of identifiability.
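The lifting connection mentioned above can be illustrated concretely. The sketch below (a minimal example, assuming circular convolution as the blind-deconvolution model; the variable names and dimensions are illustrative, not taken from the paper) checks numerically that the bilinear observation y = h ⊛ x can be rewritten as a set of linear measurements of the rank-one lifted matrix h xᵀ.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
h = rng.standard_normal(N)  # unknown channel (illustrative)
x = rng.standard_normal(N)  # unknown input signal (illustrative)

# Bilinear observation: circular convolution y = h ⊛ x, computed via the FFT.
y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))

# Lifted view: y[n] = <A_n, h x^T>, i.e., each sample is a LINEAR
# functional of the rank-one matrix X = h x^T.
X = np.outer(h, x)
y_lifted = np.empty(N)
for n in range(N):
    # A_n selects the entries (k, (n-k) mod N) of the lifted matrix,
    # matching y[n] = sum_k h[k] * x[(n-k) mod N].
    A_n = np.zeros((N, N))
    for k in range(N):
        A_n[k, (n - k) % N] = 1.0
    y_lifted[n] = np.sum(A_n * X)

# The two computations agree, so recovering (h, x) up to scaling is
# equivalent to recovering the rank-one matrix X from linear measurements.
assert np.allclose(y, y_lifted)
```

Identifiability of the BIP then reduces to asking whether any other rank-one (or, via differences, rank-two) matrix in the null space of the measurement operator could produce the same observations, which is the role of the rank-two null space in the analysis above.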