Compressed sensing is a signal acquisition technique in which a signal x is multiplied by an M × N (M < N) sampling matrix in order to both sample and compress it in a single operation. The signal x is recovered by finding, among all signals consistent with the received samples y, the one with the smallest ℓ1 norm. In other words, out of the infinitely many signals that could have produced the received y, the sparsest one is recovered as the original signal. As long as x is “sparse enough”, it can be recovered exactly using this technique.
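As a concrete illustration, here is a minimal sketch of that recovery step, assuming NumPy and SciPy. The sizes, the seed, and the use of `scipy.optimize.linprog` to solve the ℓ1 problem as a linear program are choices made for this example, not part of the text:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, M, K = 40, 20, 3          # signal length, number of samples, sparsity (illustrative)

# A K-sparse signal and a random Gaussian sampling matrix
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ x                    # sample and compress in one operation

# Recover the sparsest consistent signal: min ||x||_1  s.t.  A x = y.
# Standard LP reformulation with x = u - v, u >= 0, v >= 0.
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]
```

Because x is sparse enough relative to M, the ℓ1 solution coincides with the original signal here.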
We will discuss whether this same technique can be used to detect and correct errors in a signal. Assume that we have the received signal

y = Ax + e,    (1)
where A is an M × N sampling matrix and e is a sparse vector of additive errors. As in compressed sensing, A must obey the restricted isometry property, which is easily achieved by choosing each entry of A randomly based on a uniform distribution. There are two important things to note. First, unlike the compressed sensing scenario described above, M > N. This is necessary because redundant information must be added to the signal in order to correct the errors. Second, no assumptions are made about the distribution of e besides sparsity. The actual sparsity levels required are given in the numerical results below.
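One common RIP-friendly construction (an assumption here, since the exact entry formula is not reproduced above) draws each entry of A as ±1 with uniform probability and scales by 1/√M, so that every column has unit norm:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 128, 32                                   # note M > N: redundancy for error correction

# Assumed construction: i.i.d. +/-1 entries, signs drawn uniformly, scaled by 1/sqrt(M)
A = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)
col_norms = np.linalg.norm(A, axis=0)            # each column has norm exactly 1
```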
The signal x can be recovered exactly by solving the convex minimization problem

min over x̂ of ||y − Ax̂||_ℓ1.
One way to solve this problem is to consider a matrix B such that BA = 0 (B is any (M − N) × M matrix whose kernel contains the range of A). Applying B to both sides of (1) results in

y′ = By = B(Ax + e) = Be,    (4)
where y′ = By and B are known at the receiver. Equation (4) can be solved by finding the sparsest vector d satisfying Bd = Be = y′, i.e.

min over d of ||d||_ℓ0 subject to Bd = y′,
which can be recast as the well known basis pursuit problem

min over d of ||d||_ℓ1 subject to Bd = y′

as long as e is “sparse enough”. Solving this underdetermined system for e uniquely determines x, since A is full rank. It has been shown that for binary x, this system can exactly recover x with up to 22.5% of the samples corrupted when M = 2N, and with up to 32.5% of the samples in error when M = 4N.
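The whole decoding chain above (build B with BA = 0, form y′ = By, solve basis pursuit for the error vector, then recover x) can be sketched as follows, again assuming NumPy/SciPy. The sizes, seed, and corruption rate (12.5%, well under the thresholds quoted above) are illustrative choices:

```python
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import linprog

rng = np.random.default_rng(3)
N, M, K = 16, 64, 8                     # M = 4N, with K corrupted samples (12.5%)

x = rng.integers(0, 2, N).astype(float)          # binary signal, as in the text
A = rng.standard_normal((M, N)) / np.sqrt(M)
e = np.zeros(M)
e[rng.choice(M, K, replace=False)] = rng.standard_normal(K)
y = A @ x + e                                    # received, corrupted signal

# Rows of B span the left null space of A, so B is (M - N) x M and BA = 0
B = null_space(A.T).T
y_p = B @ y                                      # y' = By = Be

# Basis pursuit for the error vector: min ||d||_1  s.t.  B d = y'  (LP form, d = u - v)
c = np.ones(2 * M)
res = linprog(c, A_eq=np.hstack([B, -B]), b_eq=y_p, bounds=[(0, None)] * (2 * M))
e_hat = res.x[:M] - res.x[M:]

# With e known, x follows from the cleaned samples by least squares (A is full rank)
x_hat, *_ = np.linalg.lstsq(A, y - e_hat, rcond=None)
```

Since the corruption level is below the quoted threshold for M = 4N, the recovered x_hat matches the transmitted binary signal exactly up to solver tolerance.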