Sum-of-Squares method, dictionary learning, and tensor decomposition

Banff approximation workshop, STOC 2015.

PDF

VIDEO

abstract

We give a new approach to the dictionary learning (also known as “sparse coding”) problem of recovering an unknown $n \times m$ matrix $A$ (for $m \geq n$) from examples of the form $y = Ax + e$, where $x$ is a random vector in $\mathbb{R}^m$ with at most $\tau m$ nonzero coordinates, and $e$ is a random noise vector in $\mathbb{R}^n$ with bounded magnitude. For the case $m = O(n)$, our algorithm recovers every column of $A$ within arbitrarily good constant accuracy in time $m^{O(\log m / \log(\tau^{-1}))}$, in particular achieving polynomial time if $\tau = m^{-\delta}$ for any $\delta > 0$, and time $m^{O(\log m)}$ if $\tau$ is (a sufficiently small) constant. Prior algorithms with comparable assumptions on the distribution required the vector $x$ to be much sparser (at most $\sqrt{n}$ nonzero coordinates), and there were intrinsic barriers preventing these algorithms from applying for denser $x$.
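For concreteness, the generative model described above can be sketched in a few lines of numpy. This is only an illustration of the input distribution, not the recovery algorithm; the dimensions, sparsity level, and noise scale are hypothetical placeholders rather than parameters from the paper.

```python
import numpy as np

# Sketch of the dictionary learning model: y = A x + e with a (tau * m)-sparse x.
n, m, tau = 64, 128, 0.05                      # hypothetical sizes; the abstract assumes m >= n
rng = np.random.default_rng(0)

A = rng.standard_normal((n, m)) / np.sqrt(n)   # unknown dictionary (columns roughly unit norm)

def sample(noise=0.01):
    x = np.zeros(m)
    support = rng.choice(m, size=int(tau * m), replace=False)
    x[support] = rng.standard_normal(len(support))   # random sparse coefficient vector
    e = noise * rng.standard_normal(n)               # bounded-magnitude noise
    return A @ x + e

y = sample()   # one example of the form y = A x + e
```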

We achieve this by designing an algorithm for noisy tensor decomposition that can recover, under quite general conditions, an approximate rank-one decomposition of a tensor $T$, given access to a tensor $T'$ that is $\tau$-close to $T$ in the spectral norm (when considered as a matrix). To our knowledge, this is the first algorithm for tensor decomposition that works in the constant spectral-norm noise regime, where there is no guarantee that the local optima of $T$ and $T'$ have similar structures.
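As a rough illustration of the noise regime this statement refers to, the sketch below builds a symmetric rank-$r$ third-order tensor and adds a perturbation bounded in the spectral norm of its $n \times n^2$ flattening. The sizes, rank, and noise level are illustrative assumptions only.

```python
import numpy as np

n, r = 20, 5
rng = np.random.default_rng(1)

a = rng.standard_normal((r, n))
a /= np.linalg.norm(a, axis=1, keepdims=True)        # unit-norm components a_1, ..., a_r

# T = sum_i a_i (x) a_i (x) a_i, stored as an n x n x n array.
T = np.einsum('ri,rj,rk->ijk', a, a, a)

# Perturbation E, rescaled so the flattened (n x n^2) difference has spectral norm ~0.1.
E = rng.standard_normal((n, n, n))
T_noisy = T + 0.1 * E / np.linalg.norm(E.reshape(n, n * n), 2)

# tau-closeness is measured on the matrix flattening of the difference.
closeness = np.linalg.norm((T_noisy - T).reshape(n, n * n), 2)
print(closeness)   # approximately 0.1
```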

Our algorithm is based on a novel approach to using and analyzing the Sum of Squares semidefinite programming hierarchy (Parrilo 2000, Lasserre 2001), and it can be viewed as an indication of the utility of this very general and powerful tool for unsupervised learning problems.
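The paper's algorithm and analysis are far more involved, but the basic primitive underlying the hierarchy can be shown in isolation: deciding whether a fixed polynomial is a sum of squares reduces to a small semidefinite program over a Gram matrix. The sketch below uses cvxpy and a hypothetical quartic; it is meant only to show what a single sum-of-squares feasibility step looks like, not the paper's method.

```python
import cvxpy as cp

# Certify that p(x) = x^4 - 4x^3 + 6x^2 - 4x + 5 is a sum of squares by finding
# a PSD Gram matrix Q with p(x) = v(x)^T Q v(x) for the monomial basis v = [1, x, x^2].
Q = cp.Variable((3, 3), PSD=True)

constraints = [
    Q[0, 0] == 5,                  # constant term
    2 * Q[0, 1] == -4,             # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == 6,    # coefficient of x^2
    2 * Q[1, 2] == -4,             # coefficient of x^3
    Q[2, 2] == 1,                  # coefficient of x^4
]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)   # expect "optimal", since p is indeed a sum of squares
print(Q.value)       # a Gram matrix certifying the decomposition
```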