Consistent regression when oblivious outliers overwhelm

with Tommaso d'Orsi, Gleb Novikov, and David Steurer. ICML 2021.



We consider a robust linear regression model $y = X\beta^* + \eta$, where an adversary oblivious to the design $X \in \mathbb{R}^{n\times d}$ may choose $\eta$ to corrupt all but an $\alpha$ fraction of the observations $y$ in an arbitrary way. Prior to our work, even for Gaussian $X$, no estimator for $\beta^*$ was known to be consistent in this model except for quadratic sample size $n \gtrsim (d/\alpha)^2$ or for logarithmic inlier fraction $\alpha \ge 1/\log n$. We show that consistent estimation is possible with nearly linear sample size and inverse-polynomial inlier fraction. Concretely, we show that the Huber loss estimator is consistent for every sample size $n = \omega(d/\alpha^2)$ and achieves an error rate of $O(d/(\alpha^2 n))^{1/2}$ (both bounds are optimal up to constant factors). Our results extend to designs far beyond the Gaussian case and only require the column span of $X$ to not contain approximately sparse vectors (similar to the kind of assumption commonly made about the kernel space for compressed sensing). We provide two technically similar proofs. One proof is phrased in terms of strong convexity, extends work of Tsakonas et al. (2014), and is particularly short. The other proof highlights a connection between the Huber loss estimator and high-dimensional median computations. In the special case of Gaussian designs, this connection leads us to a strikingly simple algorithm based on computing coordinate-wise medians that achieves nearly optimal guarantees in linear time, and that can exploit sparsity of $\beta^*$. The model studied here also captures heavy-tailed noise distributions that may not even have a first moment.
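As a quick illustrative sketch (not code from the paper), the snippet below exercises this model numerically: a Gaussian design, symmetric heavy-tailed (Cauchy) corruption on all but an $\alpha$ fraction of observations, and the Huber loss minimized by plain gradient descent. The transition point `delta`, the noise scale, and all problem sizes are arbitrary demo choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, alpha = 2000, 10, 0.2          # sample size, dimension, inlier fraction
X = rng.standard_normal((n, d))      # Gaussian design
beta_star = rng.standard_normal(d)

# Oblivious corruption: all but roughly an alpha fraction of observations get
# heavy-tailed (Cauchy) noise drawn without looking at X.
eta = np.zeros(n)
corrupted = rng.random(n) > alpha
eta[corrupted] = 100.0 * rng.standard_cauchy(corrupted.sum())
y = X @ beta_star + eta

# Minimize the Huber loss sum_i h(y_i - <x_i, beta>) by gradient descent:
# h'(t) = clip(t, -delta, delta), so the gradient at beta is -X^T clip(r, -delta, delta).
delta = 1.0                               # quadratic/linear transition (demo choice)
step = 1.0 / np.linalg.norm(X, 2) ** 2    # 1 / Lipschitz constant of the gradient
beta_huber = np.zeros(d)
for _ in range(1000):
    r = y - X @ beta_huber
    beta_huber += step * (X.T @ np.clip(r, -delta, delta))

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary least squares, for contrast

print("Huber error:", np.linalg.norm(beta_huber - beta_star))
print("OLS error:  ", np.linalg.norm(beta_ols - beta_star))
```

Because the corruption is chosen without seeing $X$, the clipped residuals of the corrupted points roughly cancel, and the Huber estimate lands near $\beta^*$ while ordinary least squares is thrown far off by the heavy tails.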


  • robustness
  • high-dimensional estimation
  • linear regression