Rounding sum-of-squares relaxations

with Boaz Barak and Jonathan Kelner. STOC 2014.

PDF

abstract

We present a general approach to rounding semidefinite programming relaxations obtained by the Sum-of-Squares method (Lasserre hierarchy). Our approach is based on using the connection between these relaxations and the Sum-of-Squares proof system to transform a combining algorithm—an algorithm that maps a distribution over solutions into a (possibly weaker) solution—into a rounding algorithm that maps a solution of the relaxation to a solution of the original problem.

Using this approach, we obtain algorithms that yield improved results for natural variants of three well-known problems:

  1. We give a quasipolynomial-time algorithm that approximates $\max_{\lVert x \rVert_2=1} P(x)$ within an additive factor of $\varepsilon\lVert P \rVert_{\mathrm{spectral}}$, where $\varepsilon>0$ is a constant, $P$ is a degree $d=O(1)$, $n$-variate polynomial with nonnegative coefficients, and $\lVert P \rVert_{\mathrm{spectral}}$ is the spectral norm of a matrix corresponding to $P$’s coefficients (see the toy sketch after this list). Beyond being of interest in its own right, obtaining such an approximation for general polynomials (with possibly negative coefficients) is a long-standing open question in quantum information theory, and our techniques have already led to improved results in this area (Brandão and Harrow, STOC ’13).

  2. We give a polynomial-time algorithm that, given a subspace $V \subseteq \mathbb{R}^n$ of dimension $d$ that (almost) contains the characteristic function of a set of size $n/k$, finds a vector $v \in V$ that satisfies $\mathbb{E}_i\, v_i^4 \geq \Omega(d^{-1/3} k(\mathbb{E}_i\, v_i^2)^2)$. This is a natural analytical relaxation of the problem of finding the sparsest element in a subspace, and is also motivated by a connection to the Small Set Expansion problem shown by Barak et al. (STOC 2012). In particular, our results yield an improvement of the previous best known algorithms for small set expansion in a certain range of parameters.

  3. We use this notion of $L_4$ vs. $L_2$ sparsity to obtain a polynomial-time algorithm with substantially improved guarantees for recovering a planted sparse vector $v$ in a random $d$-dimensional subspace of $\mathbb{R}^n$. If $v$ has $\mu n$ nonzero coordinates, we can recover it with high probability whenever $\mu \leq O(\min(1, n/d^2))$. In particular, when $d \leq \sqrt{n}$, this recovers a planted vector with up to $\Omega(n)$ nonzero coordinates. When $d \leq n^{2/3}$, our algorithm improves upon existing methods based on comparing the $L_1$ and $L_\infty$ norms, which intrinsically require $\mu \leq O(1/\sqrt{d})$. (A numerical sketch of the $L_4$ vs. $L_2$ sparsity proxy appears after this list.)
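
The following toy sketch, a rough illustration rather than the paper’s quasipolynomial-time algorithm, shows the normalization behind item 1 in the degree-4 case: writing $P(x) = (x \otimes x)^\top M (x \otimes x)$ for a symmetric coefficient matrix $M$, we have $\max_{\lVert x \rVert_2=1} P(x) \leq \lVert M \rVert = \lVert P \rVert_{\mathrm{spectral}}$, since $\lVert x \otimes x \rVert_2 = \lVert x \rVert_2^2 = 1$. The dimensions and the naive sampling step are assumptions made for illustration.

```python
# Toy check (not the paper's algorithm): the spectral norm of the coefficient
# matrix upper-bounds the maximum of a degree-4 form on the unit sphere.
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = rng.random((n * n, n * n))      # nonnegative coefficients, as in item 1
M = (A + A.T) / 2                   # symmetric n^2 x n^2 coefficient matrix of P

spectral = np.linalg.norm(M, 2)     # ||P||_spectral = ||M||

def P(x):
    xx = np.kron(x, x)              # x (tensor) x, so P is a quartic form in x
    return xx @ M @ xx

# Crude lower bound on the sphere maximum via random sampling (illustration
# only; the paper's guarantee is an additive eps*||P||_spectral approximation).
samples = rng.standard_normal((2000, n))
best = max(P(x / np.linalg.norm(x)) for x in samples)
print(f"sampled max of P on sphere: {best:.3f}  <=  ||P||_spectral: {spectral:.3f}")
```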
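
To make the $L_4$ vs. $L_2$ sparsity notion from items 2 and 3 concrete, here is a minimal numerical sketch, assuming toy parameter values and standing in for, not reproducing, the paper’s SoS-based algorithm: a planted $\mu n$-sparse unit vector in a random $d$-dimensional subspace has $\mathbb{E}_i\, v_i^4$ much larger than $(\mathbb{E}_i\, v_i^2)^2$, while a typical vector of the subspace has near-Gaussian coordinates and ratio close to 3.

```python
# Numerical sketch of the L4-vs-L2 sparsity proxy (parameters are illustrative).
import numpy as np

rng = np.random.default_rng(0)
n, d, mu = 2000, 40, 0.01           # ambient dim, subspace dim, sparsity (toy values)

# Planted sparse unit vector with mu*n nonzero coordinates.
v = np.zeros(n)
support = rng.choice(n, size=int(mu * n), replace=False)
v[support] = rng.standard_normal(support.size)
v /= np.linalg.norm(v)

# Orthonormal basis of a random d-dimensional subspace of R^n containing v.
basis = np.linalg.qr(np.column_stack([v, rng.standard_normal((n, d - 1))]))[0]

def l4_l2_ratio(u):
    """E_i u_i^4 / (E_i u_i^2)^2 -- large exactly when u is 'analytically sparse'."""
    return np.mean(u**4) / np.mean(u**2) ** 2

w = basis @ rng.standard_normal(d)   # typical (dense) vector in the subspace
print(f"planted sparse vector:  {l4_l2_ratio(v):7.1f}")  # roughly 3/mu here
print(f"random subspace vector: {l4_l2_ratio(w):7.1f}")  # close to 3 (Gaussian)
```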

keywords

  • sum-of-squares method
  • semidefinite programming
  • machine learning
  • small-set expansion