Stochastic Resonance (SR) is a phenomenon in which noise can be used to enhance a signal. SR occurs when a noisy signal x has noise of a certain power ξ added to it, and the result is used to excite a bi-stable differential equation such as y′ = ay − by³. That is, we must solve y′ = ay − by³ + x + ξ for y. The bi-stability means that the unexcited differential equation has two stable solutions, y = ±(a/b)½. If the signal of interest, x, is not present, then the noise will cause switching between the domains of attraction of the stable solutions at a rate of
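The two stable solutions of the unexcited equation can be checked numerically. A minimal sketch (parameter values a = b = 1 are illustrative assumptions, not from the text):

```python
import numpy as np

# Unexcited bistable system: dy/dt = a*y - b*y**3.
# a and b are illustrative choices for this example.
a, b = 1.0, 1.0

def f(y):
    """Right-hand side of the unexcited equation."""
    return a * y - b * y**3

# The equilibria predicted analytically: y = +/- sqrt(a/b).
y_stable = np.sqrt(a / b)
print(f(y_stable), f(-y_stable))   # both 0: equilibrium points

# Stability check: f'(y) = a - 3*b*y**2 is negative at +/- sqrt(a/b),
# so small perturbations decay back toward the well.
print(a - 3 * b * y_stable**2)     # -2a < 0: stable
```

The negative derivative at ±√(a/b) (and the positive one at y = 0) is what makes the wells attracting and the origin a barrier between them.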

 R = (a / (π√2)) e^(−a² / (2bσ²))

where σ² is the variance of the noise. The signal of interest causes a time-varying change in the locations of the stable solutions and in how strongly they attract; in other words, R changes with the value of x. Thus, the periodic component of x modulates how long it takes the noise to force a jump from one domain of attraction to the other.
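The switching rate can be observed directly by simulating the excited equation with x = 0 and counting well-to-well jumps. A sketch using Euler–Maruyama integration; all parameter values (a, b, σ², step size, run length, hysteresis threshold) are illustrative assumptions:

```python
import numpy as np

# Euler-Maruyama integration of dy = (a*y - b*y**3) dt + noise,
# with the signal x absent, to watch noise-induced switching.
a, b = 1.0, 1.0
sigma2 = 0.5                       # noise variance sigma^2 (assumed value)
dt, T = 0.01, 1000.0
rng = np.random.default_rng(0)

y = np.sqrt(a / b)                 # start in the right-hand well
state, switches = 1, 0
thresh = 0.5 * np.sqrt(a / b)      # hysteresis so one jump counts once

for _ in range(int(T / dt)):
    y += (a * y - b * y**3) * dt + np.sqrt(sigma2 * dt) * rng.standard_normal()
    if state == 1 and y < -thresh:
        state, switches = -1, switches + 1
    elif state == -1 and y > thresh:
        state, switches = 1, switches + 1

empirical_rate = switches / T
# Rate formula from the text: R = (a / (pi*sqrt(2))) * exp(-a^2 / (2*b*sigma^2))
kramers_rate = a / (np.pi * np.sqrt(2.0)) * np.exp(-a**2 / (2 * b * sigma2))
print(empirical_rate, kramers_rate)
```

The hysteresis threshold prevents rapid jitter near y = 0 from being counted as repeated switches. The rate formula is an asymptotic (weak-noise) result, so the empirical count agrees with it only in order of magnitude at moderate noise levels like this one.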

Stochastic Resonance was first discovered for analog signals: it was used to explain the periodic global temperature jumps that mark the beginning and end of ice ages. In fact, its formulation as the solution to a differential equation implicitly assumes an analog signal. Despite this, SR can be used to analyze digital signals. This is possible by utilizing a numerical method, such as Euler's method or a Runge–Kutta algorithm, to approximate the solution to the differential equation. The largest issue with analyzing discrete
signals is the noise. In order for the SR effect to occur, the noise is assumed to be Gaussian, and it must be added at many closely spaced time steps for the rate formula given above to hold. Thus, sampling at or near the Nyquist limit will not work for SR. Results have shown that if the signal is sampled at 100 times the frequency of interest, SR can be seen to occur. Thus, we can resample a digital signal to obtain the necessary oversampling, then add in the noise and find the numerical solution before downsampling back to the original rate.
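The oversample–excite–downsample pipeline can be sketched as follows. This is a minimal illustration, not a tuned SR detector: the signal, the zero-order-hold upsampling, and all parameters (a, b, σ², the 100× factor, the sample rate) are assumptions made for the example.

```python
import numpy as np

# Pipeline: oversample a digital signal ~100x, add Gaussian noise while
# integrating the bistable equation (Euler-Maruyama), then downsample.
a, b, sigma2, over = 1.0, 1.0, 0.3, 100
rng = np.random.default_rng(1)

fs = 8.0                                        # original sample rate (assumed)
t = np.arange(0.0, 4.0, 1.0 / fs)
x = 0.2 * np.sign(np.sin(2 * np.pi * 1.0 * t))  # weak 1 Hz digital signal

x_up = np.repeat(x, over)                       # crude zero-order-hold upsample
dt = 1.0 / (fs * over)

y = np.empty_like(x_up)
y[0] = np.sqrt(a / b)                           # start in one well
for k in range(1, len(x_up)):
    drift = a * y[k - 1] - b * y[k - 1]**3 + x_up[k - 1]
    y[k] = y[k - 1] + drift * dt + np.sqrt(sigma2 * dt) * rng.standard_normal()

y_out = y[::over]                               # downsample back to fs
print(len(y_out) == len(x))                     # True
```

A real implementation would use a proper interpolating resampler (e.g. polyphase filtering) rather than sample repetition, but the repeat-and-decimate version keeps the structure of the method visible.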