This work studies several questions about the optimality of semidefinite programming (SDP) for constraint satisfaction problems (CSPs).
First, we propose the hypothesis that the well-known Basic SDP relaxation is actually optimal for random instances of the constraint satisfaction problem for every predicate. This unifies several previously proposed conjectures and suggests a single principle underlying the average-case complexity of CSPs. We provide several forms of indirect evidence for this hypothesis, and also show that it (and its variants) implies several conjectures in hardness of approximation, including polynomial-factor hardness for the densest \(k\)-subgraph problem and hard instances for the Sliding Scale Conjecture of Bellare, Goldwasser, Lund, and Russell (1993).
Second, we observe that for every predicate \(P\), the Basic SDP relaxation achieves the same approximation guarantee for the CSP for \(P\) as for a more general problem (involving constrained vector assignments rather than only Boolean ones), which we call the Generalized CSP for \(P\). Raghavendra (2008) showed that it is UG-hard to approximate the CSP for \(P\) better than this guarantee; we show that it is NP-hard to approximate the Generalized CSP for \(P\) better than this guarantee.
Keywords: semidefinite programming, constraint satisfaction problems, average-case complexity.