A boring post.
Suppose some system can be (partly) described by a function $x(t)$, e.g., the position of the wrist of a person over time. When measuring position $x$ at times $t_i$, one obtains data $x_i = x(t_i)$. This leads to catastrophes when a derivative must be determined. A standard way to compute the derivative is given by

$$\dot{x}_i \approx \frac{x_{i+1} - x_i}{t_{i+1} - t_i} = \frac{x_{i+1} - x_i}{\Delta t},$$

which delivers the actual derivative for $\Delta t \to 0$ by definition.
But when the signal has noise $\epsilon$, the situation is radically different. If we use the same numerical approach to find derivatives, we find that

$$\frac{(x_{i+1} + \epsilon_{i+1}) - (x_i + \epsilon_i)}{\Delta t} = \dot{x}_i + \frac{\epsilon_{i+1} - \epsilon_i}{\Delta t}.$$

Now, since we assume that $\epsilon$ is normally distributed noise, it can be as positive as negative, and when $\epsilon$ ranges from $-\epsilon_{\max}$ to $+\epsilon_{\max}$, then the extra term $\epsilon_{i+1} - \epsilon_i$ has twice that range.
So, differentiation of a noisy signal doubles the noise!
Consider what happens when differentiating such a signal twice, to get the acceleration: doubling the noise twice means four times as much noise. Clearly, this is not the way to go.
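The noise amplification is easy to check numerically. A minimal Python sketch (signal, step size, and noise level are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                                 # sampling step
t = np.arange(0.0, 1.0, dt)
x = np.sin(2 * np.pi * t)                 # true signal
noise = rng.normal(0.0, 0.01, t.size)     # measurement noise
xn = x + noise

# finite-difference derivative of the clean and of the noisy signal
dx_clean = np.diff(x) / dt
dx_noisy = np.diff(xn) / dt

# the derivative error dwarfs the measurement noise itself, because
# the difference of two noise terms is divided by the small step dt
print(np.std(noise))                 # about 0.01
print(np.std(dx_noisy - dx_clean))   # about 0.01 * sqrt(2) / dt
```

The factor $\sqrt{2}$ here is just the standard-deviation view of the range-doubling argument above; the division by the small $\Delta t$ makes things far worse still.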
A student, who is (made) aware of this fact, will then go on to smooth or filter the original signal. A good idea. Terminology here is discipline-dependent and often leads to confusion. An engineer, who has enjoyed a control theory class, will try to build a low-pass filter, which is an input-output system passing low signal frequencies, while blocking higher frequencies. The engineer will look at the data as a summation of frequencies, thus describing the signal $x$ without its index $i$. A computer scientist, however, will rather look at a sequence of single measurements $x_i$, and thus does not directly consider frequencies as a mode of describing the signal.
The engineer’s view has a strong advantage: by looking at the complete data $x$, one can analyse which “band” of frequencies occurring in the signal is desired, and which can be considered noise. The correct choice of filter will take out those (high) frequencies related to noise, and leave the rest intact. (Typically, your engineer will like some version of a Butterworth filter, which is the closest you can get to an ideal filter with amplification 1 at desired frequencies and 0 elsewhere; Mr. Butterworth was a perfectionist.)
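That frequency view is easy to sketch with numpy’s FFT (the 1 Hz movement and the noise level are made-up numbers for illustration): the movement collects in a narrow low-frequency band, while the noise spreads thinly over all frequencies.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01                                # 100 Hz sampling
t = np.arange(0.0, 4.0, dt)
x = np.sin(2 * np.pi * 1.0 * t)          # wrist movement at 1 Hz
xn = x + rng.normal(0.0, 0.2, t.size)    # broadband measurement noise

# look at the complete record at once: amplitude per frequency
spec = np.abs(np.fft.rfft(xn)) / t.size
freqs = np.fft.rfftfreq(t.size, dt)

# the movement stands out as a sharp peak at 1 Hz; the noise is
# spread over all bins up to the Nyquist frequency (50 Hz)
peak_freq = freqs[np.argmax(spec[1:]) + 1]   # skip the DC bin
print(peak_freq)   # 1.0
```

A low-pass filter with its cut-off anywhere between the movement band and the noise floor would separate the two cleanly.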
So much for the theory. In practice, there is a small snag with the above: when running a process (let’s say, controlling a robot in real time), the data are not available as a whole, but one data point comes in after the other. So, indeed, I get data points $x_i$ one by one, and I do not want to wait to gather enough data in order to apply my filter.
That is, in fact, the crux: can one smooth out a signal delay-free? Yes, and it is relatively easy. The idea is to make a model of the data, and use that to predict the next step. The following approach is very applicable to robotic movement, as movement can be well described using polynomials of low order (in fact, up to order 5 for fast movements [why 5? What is fast? I will cover that elsewhere.])
A standard approach to smoothing out data is a Moving-Average (MA) filter. The basic principle is simple: the smoothed value is set to

$$\hat{x}_i = \frac{1}{2k+1}\sum_{j=-k}^{k} x_{i+j}.$$

It works, but has a disadvantage: it lags. This remains true even if we only take data from the past, i.e.,

$$\hat{x}_i = \frac{1}{k}\sum_{j=0}^{k-1} x_{i-j}.$$
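A minimal sketch of the past-only variant (the window length is an arbitrary choice), showing the lag on a noise-free ramp:

```python
import numpy as np

def ma_past(x, n):
    """One-sided moving average: mean of the current and n-1 past samples."""
    out = np.empty(len(x))
    for i in range(len(x)):
        out[i] = x[max(0, i - n + 1):i + 1].mean()
    return out

# on a ramp, the output trails the input by (n-1)/2 samples
x = np.arange(20, dtype=float)
y = ma_past(x, 5)
print(x[-1] - y[-1])   # 2.0, i.e. (5-1)/2 steps behind
```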
What does this do? In fact, it fits the data to a straight line and, in the second case above, where only past points are taken, extrapolates from that. A linear model is good if the underlying data is linear and the noise isn’t (with respect to $k$, the number of points we smooth over). But otherwise? It will fail.
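A small demonstration with a hypothetical constant-acceleration movement: once the known lag of the past-only average is shifted away, nothing remains of the error on linear data, but a systematic model error survives on the quadratic.

```python
import numpy as np

def ma_past(x, n):
    """One-sided moving average over the last n samples."""
    return np.array([x[max(0, i - n + 1):i + 1].mean() for i in range(len(x))])

t = np.arange(30, dtype=float)
line = 3.0 * t        # linear movement: the model assumption holds
curve = 0.5 * t**2    # constant acceleration: it does not

n = 5
lag = (n - 1) // 2    # the average is centred lag samples in the past

# linear data: shifting away the lag leaves no error at all
print(ma_past(line, n)[-1] - line[-1 - lag])    # 0.0
# quadratic data: a bias remains even after the shift
print(ma_past(curve, n)[-1] - curve[-1 - lag])  # 1.0
```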
The bad news: for the kind of (movement) data we are talking about, the linear assumption does not hold. We need to find a better model. An elegant method was described in 1957, by which orthogonal polynomials fit a set of data until an error level is obtained which is around the variance $\sigma^2$ of the data. Orthogonality means that a higher order can be added without having to recompute the previously found parameters. The solution that is found, in a predictive form, is

$$\hat{x}_{i+1} = \sum_{p=0}^{m} a_p P_p(i+1),$$

where the $P_p$ are the orthogonal polynomials and the $a_p$ are parameters found from optimisation. A very attractive method, obtaining the same, was published in 1964; in this paper, it is shown that the parameters $c_j$ of the equation

$$\hat{x}_i = \sum_{j=-k}^{k} c_j x_{i+j}$$

can be precomputed, independent of the data $x_i$, leading to a very efficient implementation.
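That precomputation can be sketched with a plain least-squares fit over a Vandermonde matrix (my own shortcut, not the original paper’s derivation): the weights depend only on window length and polynomial order, never on the data, and the classic 5-point quadratic window yields the well-known coefficients $(-3, 12, 17, 12, -3)/35$.

```python
import numpy as np

def sg_coeffs(window, order, eval_at=0):
    """Savitzky-Golay weights: least-squares polynomial fit over
    `window` points, evaluated at offset `eval_at` from the centre.
    The weights depend only on window and order, not on the data."""
    offsets = np.arange(window) - (window - 1) // 2
    A = np.vander(offsets, order + 1, increasing=True)
    # the fitted value at eval_at is a fixed linear combination of the data
    powers = np.array([float(eval_at) ** p for p in range(order + 1)])
    return powers @ np.linalg.pinv(A)

# classic centre-point smoother: 5 points, quadratic polynomial
c = sg_coeffs(5, 2)
print(np.round(c * 35))   # [-3. 12. 17. 12. -3.]
```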
The standard implementation of this well-known Savitzky-Golay (“SG”) filter introduces a delay. After all, it predicts the value of the data at the centre point, using data points $x_{i-k}, \ldots, x_{i+k}$. But with some tricks, it can be made predictive, too. How? A first, not quite correct, approach is to, well, pretend you’re half blind: match the polynomial only on the first half of the data, and predict the “middle” from there. A method which, dumb as it is, gives acceptable results.
The right way is to recompute the SG parameters mentioned above, to predict the last rather than the middle data point of the polynomial. An elegant solution has been published which computes the parameters differently and more efficiently, and which we can use to make a “predictive” filter. Here’s an implementation of a piece of code doing the promised work: psgolayp. (My sincere apologies that it is written in Matlab. I don’t like Matlab all that much, but for historical reasons it came out this way. I promise it may be the last piece of Matlab posted here.)
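Not the psgolayp code itself, but a Python sketch of the same idea under my own naming: precompute the least-squares weights so that the polynomial is evaluated at the newest sample (offset 0) instead of the window centre, making the filter causal and delay-free.

```python
import numpy as np

def psg_coeffs(window, order):
    """Weights of a predictive Savitzky-Golay filter: polynomial
    least-squares fit over the last `window` samples, evaluated at
    the newest one (offset 0; older samples sit at -1, -2, ...)."""
    offsets = np.arange(-(window - 1), 1)          # ..., -2, -1, 0
    A = np.vander(offsets, order + 1, increasing=True)
    return np.linalg.pinv(A)[0]    # row giving the fitted value at offset 0

def psg_filter(x, window=9, order=3):
    """Causal application: each output uses only current and past samples."""
    c = psg_coeffs(window, order)
    out = np.array(x, dtype=float)
    for i in range(window - 1, len(x)):
        out[i] = c @ x[i - window + 1:i + 1]
    return out

# a polynomial up to the filter's order is reproduced exactly at the
# newest sample: zero phase lag, no delay
t = np.linspace(0.0, 1.0, 50)
x = 2 * t**3 - t
y = psg_filter(x)
print(np.max(np.abs(y - x)) < 1e-10)   # True
```

The noise suppression still comes from the least-squares averaging over the window; only the evaluation point, and with it the phase behaviour, differs from the centred filter.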
So now never come to me again with a moving average filter.