EXERCISE #2:
Kyle M. White
Due: April 24, 2003
Department of Electrical & Electronic Engineering
College of Engineering & Computer Science
California State University, Sacramento
In this exercise, some common digital filters are designed and compared. The design and implementation of a finite impulse response (FIR) filter, an infinite impulse response (IIR) filter, and a least mean square (LMS) adaptive filter are demonstrated.
MATLAB is used extensively to generate plots for discussion and comparison. The code listing for the entire project is included in the appendix. The code is organized like the report, so one can follow it from top to bottom in parallel with the discussion in the text. All of the graphs were made by placing a breakpoint at the appropriate point in the code to capture the plot; in some cases, text and arrows were added directly to the plot. The interested reader can follow the code by referring to the listing in the appendix while reading the text.
In this section, MATLAB is used to create three sixth-order 300 Hz low-pass filters: a Butterworth, a Chebyshev, and an elliptic. The specific design of each filter is left up to the MATLAB functions butter, cheby1, and ellip, respectively. Figure 1 shows a comparison of the ideal response, an analog Butterworth filter, and a digital Butterworth filter with a sample frequency of 5120 Hz.
To create the analog transfer function, the following equation for an Nth order Butterworth filter is used with N = 6 and ω0 = 300 Hz:
$|G(j\omega)| = \dfrac{1}{\sqrt{1 + (\omega/\omega_0)^{2N}}}$    (1)
Figure 1 – Comparison of analog and digital Butterworth filters
To create the Butterworth digital filter, the MATLAB function ‘butter’ is used. According to the MATLAB documentation, this function uses a bilinear transformation with frequency prewarping to convert an analog prototype filter into a digital filter, which yields a frequency response that is very consistent with the analog transfer function. Because butter prewarps at the cutoff frequency, the analog and digital responses are guaranteed to be identical there: at 300 Hz, the magnitude response of both filters is 0.70711, which is equivalent to -3.01 dB. This is exactly what is observed in figure 1.
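As a quick illustration of that claim (a minimal sketch, separate from the appendix listing), the digital Butterworth design and its magnitude at 300 Hz can be reproduced with:

% Sketch: 6th-order digital Butterworth at fs = 5120 Hz (not the appendix code)
fs = 5120; fc = 300;
[bb, ab] = butter(6, fc/(fs/2));       % bilinear transform with prewarping at fc
[H, f] = freqz(bb, ab, 4096, fs);      % frequency response on a grid of Hz values
interp1(f, abs(H), fc)                 % approximately 0.70711, i.e. -3.01 dB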
To compare with a digital Chebyshev filter, the MATLAB function ‘cheby1’ is used with a specified pass-band ripple of 0.500 dB. This creates a Chebyshev type I filter, which is equiripple in the passband and monotonic in the stopband. A type II Chebyshev filter is monotonic in the passband and equiripple in the stopband, and does not have as steep a response as the type I. As with the Butterworth filter, MATLAB uses a bilinear transformation of an analog prototype, so the digital frequency response closely matches that of the equivalent analog filter. Figure 2 compares the Chebyshev type I with the Butterworth.
Unlike the Butterworth filter, the 3 dB frequency of the Chebyshev is NOT at the specified cutoff frequency. Instead, the magnitude of the filter at the cutoff frequency is near the specified -0.5 dB point, and the actual 3 dB frequency is at about 310 Hz. The Chebyshev filter, however, is much steeper near the cutoff frequency.
Figure 2 – Comparison of digital Chebyshev I and Butterworth filters
To create an elliptic filter, the MATLAB function ‘ellip’ is used with a specified passband ripple of 0.500 dB and stopband attenuation of 40 dB. As with the others, MATLAB uses a bilinear transformation for the elliptic filter. The response is shown in figure 3 along with the other filters. From figure 3, it is clear that the elliptic filter has the steepest transition from the passband to the stopband, and thus its 3 dB frequency is very close to 300 Hz. Its passband ripple also occurs at lower frequencies than the Chebyshev I filter’s.
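The Chebyshev I and elliptic designs used for figures 2 and 3 follow the same pattern; a brief sketch (again, not the appendix listing):

% Sketch: Chebyshev I and elliptic designs for comparison
fs = 5120; fc = 300; Wn = fc/(fs/2);
[bc, ac] = cheby1(6, 0.5, Wn);         % 0.500 dB passband ripple
[be, ae] = ellip(6, 0.5, 40, Wn);      % 0.500 dB ripple, 40 dB stopband attenuation
freqz(bc, ac, 4096, fs);               % repeat with be, ae to overlay the elliptic response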
To observe how the different filters perform in the time domain, the MATLAB function ‘filter’ is used in conjunction with a step input vector to produce figure 4.
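A hedged sketch of that step-response comparison, reusing the coefficient vectors bb/ab, bc/ac, and be/ae from the sketches above:

% Sketch: step responses of the three digital filters (figure 4)
u = ones(1, 200);                      % unit step input
t = (0:199)/5120;                      % time axis in seconds
plot(t, filter(bb, ab, u), t, filter(bc, ac, u), t, filter(be, ae, u));
legend('Butterworth', 'Chebyshev I', 'Elliptic'); xlabel('Time (s)');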
Figure 3 – Comparison of an elliptic filter with the others
Figure 4 – Step responses of three digital filters
As expected, the Butterworth filter settles to 1 while the other filters settle to approximately 0.95. This is expected because the frequency response of the Butterworth filter is unity at DC, while the other filters have responses that are significantly less at f = 0 Hz. All three filters have a similar overshoot of about 17%, although the Butterworth filter settles much sooner than the others. This demonstrates that one cannot necessarily determine the ‘best’ filter by looking at the magnitude of the frequency response alone. If one could, one might assume from figure 3 that the elliptic filter is ‘best’ because it has the steepest transition. However, for the step input, the elliptic response has the most overshoot and the longest oscillation, and it does not settle to the input value.
In this part, the complete design of a second order digital Butterworth filter using the impulse invariance method is demonstrated without the aid of the MATLAB butter function. Starting with an analog filter and using a sampling frequency fs = 320 Hz, the following checklist is used:
1. Derive the analog transfer function G(s).
2. Take the inverse Laplace transform of G(s) to obtain the impulse response g(t).
3. Sample g(t) every T = 1/fs seconds to obtain h[n].
4. Take the z-transform of h[n] to obtain H[z].
5. Extract the a and b coefficients for the difference (marching) equation.
Equation (1) is repeated to show the magnitude of the frequency response of a 2nd order (N = 2) analog Butterworth filter:
$|G(j\omega)| = \dfrac{1}{\sqrt{1 + (\omega/\omega_0)^4}}$
What is needed, however, is not the magnitude of the frequency response |G(jω)| but the frequency response G(jω) itself. Since j^4 = 1, the term (ω/ω0)^4 can be rewritten as (jω/ω0)^4 without changing its value, giving
$|G(j\omega)| = \dfrac{1}{\sqrt{1 + (j\omega/\omega_0)^4}}.$    (2)
In order to complete the square under the root, add and subtract 2(jω/ω0)^2:
$|G(j\omega)| = \dfrac{1}{\sqrt{1 + 2(j\omega/\omega_0)^2 + (j\omega/\omega_0)^4 - 2(j\omega/\omega_0)^2}}.$    (3)
Rearranging,
$|G(j\omega)| = \dfrac{1}{\sqrt{\left[1 + (j\omega/\omega_0)^2\right]^2 - \left[\sqrt{2}\,j\omega/\omega_0\right]^2}}.$    (4)
Now, consider that the magnitude of G(jω) = N(jω)/D(jω) is the magnitude of the numerator over the magnitude of the denominator:
$|G(j\omega)| = \left|\dfrac{N(j\omega)}{D(j\omega)}\right| = \dfrac{|N(j\omega)|}{|D(j\omega)|}.$    (5)
Thus, by breaking the difference of squares in the denominator of (4) into its conjugate roots, the magnitude is
$|G(j\omega)| = \dfrac{1}{\sqrt{\left[1 + \sqrt{2}(j\omega/\omega_0) + (j\omega/\omega_0)^2\right]\left[1 - \sqrt{2}(j\omega/\omega_0) + (j\omega/\omega_0)^2\right]}} = \dfrac{1}{\left|1 + \sqrt{2}(j\omega/\omega_0) + (j\omega/\omega_0)^2\right|},$    (6)
which means that the transfer function G(jω) is
$G(j\omega) = \dfrac{1}{1 + \sqrt{2}(j\omega/\omega_0) + (j\omega/\omega_0)^2},$    (7)
which is the same as
$G(j\omega) = \dfrac{1}{(j\omega)^2/\omega_0^2 + \sqrt{2}\,(j\omega)/\omega_0 + 1}.$    (8)
Substituting s for jω and multiplying the numerator and denominator by ω0^2 gives
$G(s) = \dfrac{\omega_0^2}{s^2 + \sqrt{2}\,\omega_0 s + \omega_0^2},$    (9)
which is now in a convenient form for taking the inverse Laplace transform in step 2.
To get the response in the time domain, the inverse Laplace transform of (9) is found using the following transformation[1]:
Table 1 – Transformations for converting an analog filter to digital

| $H_a(s)$ | $h_a(t)$ | $h[k]$ | $H[z]$ |
| $\dfrac{As + B}{s^2 + 2as + c}$ | $r\,e^{-at}\cos(bt + \theta)\,u(t)$ | $T\,r\,e^{-akT}\cos(bkT + \theta)\,u[k]$ | $T\,r\,\dfrac{z^2\cos\theta - z\,e^{-aT}\cos(bT - \theta)}{z^2 - 2e^{-aT}\cos(bT)\,z + e^{-2aT}}$ |

where $b = \sqrt{c - a^2}$, $r = \sqrt{\dfrac{A^2 c + B^2 - 2ABa}{c - a^2}}$, and $\theta = \tan^{-1}\!\dfrac{Aa - B}{Ab}$.
Solving for A, B, a, and c for the given function yields
$A = 0, \qquad B = \omega_0^2, \qquad a = \dfrac{\omega_0}{\sqrt{2}}, \qquad c = \omega_0^2.$    (10)
Solving for r, θ and b gives
$b = \sqrt{c - a^2} = \dfrac{\omega_0}{\sqrt{2}}, \qquad r = \sqrt{\dfrac{A^2 c + B^2 - 2ABa}{c - a^2}} = \sqrt{2}\,\omega_0, \qquad \theta = -\dfrac{\pi}{2}.$    (11)
The time domain response g(t) is left in terms of r, θ, and b temporarily for simplicity:
$g(t) = T\,r\,e^{-at}\cos(bt + \theta)\,u(t).$    (12)
Note that the factor of T is included in the table in anticipation of converting the continuous time function to a discrete time function. Since each sample in discrete time represents T seconds, each sample of g(t) effectively represents the area under the continuous time curve over one sample period. In this manner, the CT and DT signals can be compared because the areas will be similar.
Figure 6 shows a plot of the analog impulse response.
Figure 6 – Impulse response of the analog Butterworth filter
Sampling is easily done by replacing t with nT and calling the result h[n].
$h[n] = T\,r\,e^{-anT}\cos(bnT + \theta)\,u[n].$    (13)
The sampled points are illustrated in figure 7. The samples appear every T = 1/320 s = 0.003125 s.
Figure 7 – Sampled impulse response
The Z-transform is performed using table 1.
$H[z] = T\,r\,\dfrac{z^2\cos\theta - z\,e^{-aT}\cos(bT - \theta)}{z^2 - 2e^{-aT}\cos(bT)\,z + e^{-2aT}}.$    (14)
Multiplying both numerator and denominator by z^-2 gives
$H[z] = \dfrac{T\,r\cos\theta - T\,r\,e^{-aT}\cos(bT - \theta)\,z^{-1}}{1 - 2e^{-aT}\cos(bT)\,z^{-1} + e^{-2aT}z^{-2}}.$    (15)
This is a standard form for defining the transfer function H[z]. Now, substituting in r, θ, and b from (11), a from (10), and T = 1/320 s generates the a_i and b_i coefficients:
$b_0 = T\,r\cos\theta, \qquad b_1 = -T\,r\,e^{-aT}\cos(bT - \theta), \qquad a_1 = -2e^{-aT}\cos(bT), \qquad a_2 = e^{-2aT}.$    (17)
Thus, the transfer function becomes
$H[z] = \dfrac{b_0 + b_1 z^{-1}}{1 + a_1 z^{-1} + a_2 z^{-2}}.$    (19)
The impinvar function in MATLAB can be used to perform steps 2 through 5 above to obtain the a and b coefficients. Listing 1 shows the results of the impinvar function. The MATLAB output agrees very well with the results obtained above.
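For reference, a sketch of that impinvar call; the cutoff frequency f0 below is only a placeholder, since this section does not restate the design cutoff:

% Sketch of steps 2-5 via impinvar; f0 is a placeholder cutoff, not from the report
fs = 320; f0 = 60;                     % f0 chosen only for illustration
w0 = 2*pi*f0;
num = w0^2;                            % analog G(s) = w0^2/(s^2 + sqrt(2)*w0*s + w0^2), eq. (9)
den = [1 sqrt(2)*w0 w0^2];
[bd, ad] = impinvar(num, den, fs)      % digital b and a coefficients (scaled by T = 1/fs)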
Generating the response of the digital filter in the time domain is easily accomplished by rearranging the transfer function and taking the inverse Z-transform.
$\dfrac{Y[z]}{X[z]} = \dfrac{b_0 + b_1 z^{-1}}{1 + a_1 z^{-1} + a_2 z^{-2}}$    (20)
By cross multiplying,
$Y[z]\left(1 + a_1 z^{-1} + a_2 z^{-2}\right) = X[z]\left(b_0 + b_1 z^{-1}\right)$    (21)
$Y[z] = X[z]\left(b_0 + b_1 z^{-1}\right) - Y[z]\left(a_1 z^{-1} + a_2 z^{-2}\right)$    (22)
Now, take the inverse Z-transform to get
$y[n] = b_0 x[n] + b_1 x[n-1] - a_1 y[n-1] - a_2 y[n-2].$    (23)
Equation (23) is used in MATLAB to ‘march’ through the sampled input points with a loop and create the digital step response shown in figure 9. If the impulse invariant method works as promised, then the impulse response obtained from equation (23) should match the samples from figure 7. Lo and behold, they match up exactly, as shown in figure 8: the circles and squares line up very nicely.
Figure 8 – Comparison of the analog response, samples and digital response
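The marching loop itself is only a few lines; the following sketch uses placeholder coefficient values rather than the ones produced by (17):

% Sketch of the marching equation (23); b0, b1, a1, a2 are placeholders, not the report's values
b0 = 0.1; b1 = 0.1; a1 = -1.0; a2 = 0.3;
x = [1 zeros(1, 99)];                  % unit impulse input
xp = [0 0 x]; yp = zeros(size(xp));    % pad so that n-1 and n-2 indices always exist
for n = 3:length(xp)
    yp(n) = b0*xp(n) + b1*xp(n-1) - a1*yp(n-1) - a2*yp(n-2);
end
y = yp(3:end);                         % digital impulse response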
One might expect that, if the impulse response of the discrete time filter matches its analog counterpart exactly, then the response of the discrete time filter to a different input function might also coincide with the analog response to the same input. Getting the response of the discrete time filter is easy using the same technique used to get the impulse response. Getting the analog response, however, is a different story.
There are two ways to see the response of the analog filter in the time domain. The impulse response g(t) can be convolved with the input function, or, the transfer function G(s) can be multiplied by the Laplace transform of the input function and then the inverse Laplace transform taken. For the unit step input, the following convolution table[2] is used to calculate the unit step response.
Table 2 – Convolution

| $f_1(t)$ | $f_2(t)$ | $f_1(t) * f_2(t)$ |
| $e^{-at}\cos(bt + \theta)\,u(t)$ | $u(t)$ | $\dfrac{\cos(\theta - \phi) - e^{-at}\cos(bt + \theta - \phi)}{\sqrt{a^2 + b^2}}\,u(t)$, where $\phi = \tan^{-1}(-b/a)$ |
The impulse response of the analog filter (12) is repeated (without the factor of T) here for reference:
$g_a(t) = r\,e^{-at}\cos(bt + \theta)\,u(t).$
Using the convolution formula in table 2 gives the unit step response
$y_{step}(t) = \dfrac{r}{\sqrt{a^2 + b^2}}\left[\cos(\theta - \phi) - e^{-at}\cos(bt + \theta - \phi)\right]u(t), \qquad \phi = \tan^{-1}(-b/a).$    (24)
Figure 9 shows that, unlike the impulse responses, the step responses of the two filters are NOT the same.
Figure 9 – Comparison of CT and DT step responses
The first thing one might notice is that the steady state output of the digital filter is significantly attenuated. This prompts the question, “What does the frequency response look like?” Figure 10 shows that the frequency response of the digital filter is much flatter than that of the analog filter and that there is significant attenuation at DC, which agrees with what is seen in figure 9.
Although the responses are the same at the cutoff frequency, they differ by about 10% both at 0 Hz and at 160 Hz (the Nyquist limit of the digital filter). This is significant attenuation in the pass-band and leakage in the stop-band, and it indicates that this digital filter is probably not as good as its analog counterpart for most applications that would use a 2nd order Butterworth filter. The advantage of digital filters, however, is that it is simple to increase the order of the filter, while doing so in an analog system might prove difficult and/or expensive. Another interesting note is that the digital filter’s peak response is NOT at DC; it is at around 20 Hz, which shows that the digital filter is not only somewhat flattened but also has a different shape than its analog counterpart.
To observe the performance of this digital filter on a more realistic signal, a square wave is generated with white Gaussian noise added to it using the ‘randn’ function in MATLAB. To get the response of the noisy signal, the marching algorithm is used as before. Figure 11 shows the result of the filter compared with the noisy signal for 3 periods of the square wave. It is clear that the filter reduced many of the sharp peaks introduced by the noise. Some of this reduction is due to the filtering and some is due to the fact that this filter attenuates all frequencies by at least 10%.
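A sketch of how such a test signal can be generated (the 10 Hz square wave rate and unit-variance noise are consistent with the roughly 0 dB input SNR in table 3, but the exact amplitudes in the listing are not restated here):

% Sketch: noisy 10 Hz square wave test signal
fs = 320; t = (0:4999)/fs;             % 5000 samples, as in the SNR test below
clean = square(2*pi*10*t);             % 10 Hz square wave (Signal Processing Toolbox)
noisy = clean + randn(size(clean));    % add white Gaussian noise
filtered = filter(bd, ad, noisy);      % equivalent to marching through (23) with bd, ad from the impinvar sketch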
To get some idea of how well the filter performs on the noisy signal, the SNR of the input and of the output are computed and compared. The following results were found for a simulation with 5000 samples:
Table 3 – Analysis of IIR filter

|  | % RMS error | SNR |
| Noisy input signal | 100.3914 % | 0.0781 dB |
| Filtered output signal | 82.3673 % | 3.8796 dB |
| Difference | 82.3673 - 100.3914 = -18.0241 % | 3.8796 - 0.0781 = 3.8015 dB |
After running 10 such simulations, the SNR improvement averages out to about 3.8 dB. Your results may vary. Note that in order to get an accurate description of the error, the output has to be shifted ahead by one sample to account for the delay introduced by the filter. This assumes that we do not care about delay in the output, which is usually the case.
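A sketch of that error and SNR computation, including the one-sample shift (the variable names clean, noisy, and filtered follow the sketch above):

% Sketch: % RMS error and SNR with a one-sample advance to undo the filter delay
aligned = [filtered(2:end) filtered(end)];          % advance the output by one sample
e_in  = noisy - clean;
e_out = aligned - clean;
rms = @(v) sqrt(mean(v.^2));                        % local RMS helper
pct_err_in  = 100*rms(e_in)/rms(clean);             % percent RMS error of the input
pct_err_out = 100*rms(e_out)/rms(clean);            % percent RMS error of the output
snr_in  = 20*log10(rms(clean)/rms(e_in));           % SNR in dB
snr_out = 20*log10(rms(clean)/rms(e_out));
improvement = snr_out - snr_in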
Figure 11 – Filtered signal
In this section, an eighth-order, 60 Hz, finite impulse response (FIR) low pass filter is designed using the Hamming window method. As in the impulse invariant method, the window method designs a filter whose impulse response matches that of an analog filter at the sampled points. Unlike the impulse invariant method, however, the window method does not start with a realizable analog filter; instead, an ideal filter is used to obtain the impulse response to be matched. The following steps are used to design the filter:
1. Choose an ideal analog filter
2. Find the impulse response of the ideal analog filter
3. Sample the impulse response N+1 times to create the b coefficients
The ideal filter to be used is the rectangular shape shown in figure 12. This represents the magnitude response of an ideal low pass filter with cutoff frequency of 60 Hz.
Figure 12 – Ideal low pass filter
The magnitude of the response is not the complete description of the filter’s response; there will most likely be some non-zero phase response over the frequency range as well. It turns out that letting the phase response be a linear function of frequency makes the design very simple[3]. Thus, the frequency response is defined by
$H_d(\omega) = \begin{cases} e^{-j\omega MT}, & |\omega| \le \omega_c \\ 0, & \omega_c < |\omega| \le \pi/T \end{cases}$    (25)
where ω_c = 2π(60 Hz) and T is the sample period.
The exponent term has two consequences, which are really two manifestations of the same thing. First, it results in a linear phase response in the frequency domain without affecting the magnitude. Second, it causes a delay of M terms in the impulse response of the filter, which can be seen in the following:
$h[n] = \dfrac{\sin\!\left[\omega_c T (n - M)\right]}{\pi (n - M)}, \qquad -\infty < n < \infty.$    (26)
In order for the impulse response to be finite (i.e. Finite Impulse Response) and causal (i.e. cannot predict the future), (26) must be truncated so that it only represents a limited number of terms that are all greater than or equal to zero. This is illustrated in figure 13.
Figure 13 – Example of a windowed impulse response
By limiting the number of terms to N + 1, equation (26) becomes
$h[n] = \dfrac{\sin\!\left[\omega_c T (n - M)\right]}{\pi (n - M)}, \quad 0 \le n \le N, \qquad h[n] = 0 \text{ otherwise.}$    (27)
In figure 13, all of the points are a result of (26), while the large dots are a result of (27) only, making the response finite. By letting M = ½N, the first large dot coincides with sample number 0, so the filter is now causal as well. Therefore, (27) represents the impulse response of a realizable FIR filter. The filter’s frequency response is shown in figure 14. The magnitude has a non-flat response in both the passband and stopband, and there are side lobes in the stopband. As expected, the phase is linear (except for the reversals where the magnitude reaches 0). To implement the filter, the points represented by the big dots in figure 13 become the b coefficients for use in the marching equation or the MATLAB filter function.
The filter represented by (27) can be improved by simply applying a non-rectangular window to the truncated points. A Hamming window works well for this purpose. Figure 15 shows the truncated impulse response after applying the Hamming window. Note that the first and last points have a magnitude of nearly zero. As was seen in exercise 1, this ‘smoothing’ of the sampling at the endpoints should result in fewer side lobes in the frequency response. Figure 16 shows the response of the windowed filter and it does indeed have reduced side lobes. In fact, they are almost completely gone.
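A minimal sketch of this window-method design; the 320 Hz sample rate is carried over from part B as an assumption, since this section does not restate it:

% Sketch: 8th-order, 60 Hz FIR design by the window method (fs assumed = 320 Hz)
fs = 320; fc = 60; N = 8; M = N/2;
n = 0:N;
h = 2*fc/fs * sinc(2*fc/fs*(n - M));   % truncated ideal low-pass impulse response, eq. (27)
b_rect = h;                            % rectangular-window coefficients (figure 14)
b_hamm = h .* hamming(N+1)';           % Hamming-window coefficients (figure 16)
% For comparison, fir1(N, 2*fc/fs) applies a Hamming window by default.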
Figure 14 – Frequency response of 8th order FIR filter with rectangular window
Figure 16 – Frequency response of 8th order FIR filter with Hamming window
Figure 17 shows the step response of the FIR filter. It takes N + 1 samples to reach its steady state value, as expected. The DC steady state value is slightly under unity, as indicated by figure 16.
To get a more quantitative description of the filter, the same type of analysis that was done in part B is performed for the FIR filter. The results are shown in table 4.
Table 4 – Analysis of FIR filter

|  | % RMS error | SNR |
| Noisy input signal | 98.5067 % | 0.3009 dB |
| Filtered output signal | 55.6309 % | 11.7286 dB |
| Difference | 55.6309 - 98.5067 = -42.8757 % | 11.7286 - 0.3009 = 11.4277 dB |
As table 4 shows, the SNR improvement is around 11.4 dB, which is a significant improvement over the 2nd order IIR filter from part B.
To explore the effect of filter order on the SNR improvement, the FIR algorithm was made into a function in a MATLAB .m file. This function was then run with values of N from 1 to 50 and the SNR improvement calculated. As expected, the SNR improvement initially increased with filter order. What was not expected, however, was that once the order exceeded 4 (which gave an SNR improvement of 11.75 dB), the improvement decreased and seemed to approach an asymptote of about 8.5 dB. Another interesting phenomenon is that, once the initial gains were realized at order 4 or higher, the even-order filters (6, 8, 10, etc.) consistently produced an SNR improvement about 0.5 dB higher than the odd-order filters. These observations are illustrated in figure 19.
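A sketch of that sweep; fir_snr_improvement is a hypothetical name for the .m function described above, not necessarily the author’s:

% Sketch of the order sweep; fir_snr_improvement is a hypothetical function name
orders = 1:50;
gain_dB = zeros(size(orders));
for k = 1:length(orders)
    gain_dB(k) = fir_snr_improvement(orders(k));    % returns SNR improvement in dB
end
plot(orders, gain_dB); xlabel('Filter order N'); ylabel('SNR improvement (dB)');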
In this part, an 8th order adaptive filter is designed and tested using the least mean square algorithm. This is an FIR filter whose b coefficients are changed on-the-fly for some period of time while it adapts to the input signal. The following function is used to change the b coefficients while the filter is adapting:
$\mathbf{b}_{new} = \mathbf{b}_{old} + \dfrac{\mu}{(N+1)\,\hat{\sigma}_x^2}\,e_k\,\mathbf{x}_k$    (28)
where $\mathbf{b}_{new}$ represents the new set of b coefficients, µ is a constant, N is the filter order, $\hat{\sigma}_x^2$ is the variance of the current set of input values, $e_k$ is the error of the current sample with respect to the desired signal, and $\mathbf{x}_k$ is the set of inputs going back in time N samples.
To determine when the filter should stop adapting, the RMS of the most recent change in the b coefficients is compared with a 1% threshold. If the RMS of the change between the new set of b coefficients and the old set is below 1%, the coefficients are considered converged and adaptation stops. By trial and error, µ was set to 0.1, which seemed to provide reasonable results.
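A minimal sketch of the adaptation loop under those assumptions; the update follows the normalized-LMS reading of (28) and the 1% RMS test described above, so the appendix listing may differ in detail:

% Sketch: 8th-order LMS adaptive filter trained on the noisy 10 Hz square wave
fs = 320; t = (0:4999)/fs;
desired = square(2*pi*10*t);                 % desired (clean) signal
x = desired + randn(size(desired));          % noisy input
N = 8; mu = 0.1;
b = zeros(1, N+1);                           % adaptive FIR coefficients
y = zeros(size(x));
adapting = true;
for k = N+1:length(x)
    xk = x(k:-1:k-N);                        % current input and N past inputs
    y(k) = b*xk';                            % filter output
    if adapting
        e  = desired(k) - y(k);              % error with respect to the desired signal
        bn = b + mu/((N+1)*var(xk))*e*xk;    % normalized LMS update
        if sqrt(mean((bn - b).^2))/sqrt(mean(bn.^2)) < 0.01
            adapting = false;                % RMS change below 1%: stop adapting
        end
        b = bn;
    end
end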
Due to the random nature of the noisy signal, every simulation yields a different result. In the case shown in figure 20, the coefficients met the criterion after 88 samples and settled on a low-pass type filter with maximum spectrum magnitude at about 20 Hz. The SNR analysis is shown in table 5.
Figure 20 – Adapting coefficients and the final frequency response
Table 5 – Analysis of adaptive filter

|  | % RMS error | SNR |
| Noisy input signal | 100.0642 % | -0.0128 dB |
| Filtered output signal | 62.3460 % | 9.4494 dB |
| Difference | 62.3460 - 100.0642 = -37.7182 % | 9.4494 - (-0.0128) = 9.4622 dB |
This particular run had an SNR improvement of about 9.5 dB. After running many simulations, it was found that the SNR improvement varies widely. In some cases the improvement was actually negative, indicating that the filter added noise to the signal. Most simulations, however, had an SNR improvement between +5 dB and +10 dB.
The step response of the adaptive filter is sometimes very nice and sometimes looks like figure 21. For this particular run, the DC gain is only about 0.72. One might think that this is not a good filter, but one must remember that this filter was ‘trained’ on a 10 Hz square wave; it has never seen a plain step input before. If a step is the expected kind of signal, then the filter should be allowed to adapt to it. Sometimes, however, the filter adapts with unity gain at DC, and its step response is then very nice.
Figure 21 – Step response of 8th order adaptive filter
Figure 22 shows the filter’s response to the same type of input signal that it was trained on.
Figure 22 – 8th order adaptive filter with noisy input signal
To see what is possible if the order is increased, it was increased ten-fold to 80. Figures 23 and 24 show the results of one run on the noisy signal. Figure 23 demonstrates that the filter actually ‘learns’ the desired input signal. From Fourier series analysis, we know that a square wave has only odd harmonics; thus, a 10 Hz, 50% duty-cycle square wave has frequency components at 10 Hz, 30 Hz, 50 Hz, etc. This is exactly what is seen in figure 23. For the N = 80 filter, almost every simulation yielded a similar frequency response, with peaks at the odd harmonics of 10 Hz.
Figure 23 – Adapting coefficients and the final frequency response for N = 80
This filter produced an SNR improvement of 15.3 dB. The filter was also run with an order of 800, which produced an SNR improvement of 20.6 dB and is shown in figure 25.
Figure 24 – 80th order adaptive filter with noisy input signal
Figure 25 – 800th order adaptive filter with noisy input signal (very nice)
Table 6 shows that, of all the filters tested, the higher-order adaptive filters proved best for the given noisy square wave signal. For the fixed FIR filter, it seems that 4th order (and no higher) is best. This is a mystery to me, but good news if I ever have to design an FIR filter with limited processing power.
Table 6 – SNR improvement of all filters tested

| Filter type | SNR improvement on the noisy signal |
| 2nd order IIR | 3.8 dB |
| 8th order FIR | 11.4 dB |
| 4th order FIR | 11.7 dB |
| 8th order adaptive | 9.4 dB |
| 80th order adaptive | 15.3 dB |
| 800th order adaptive | 20.6 dB |
This project demonstrated the steps involved in designing a DSP filter and showed exactly how to program a computer to perform DSP filtering.
It was also shown that a digital filter that looks good in the frequency domain does not necessarily perform as well as one might expect on a particular signal in the time domain.
[1] Lathi, B.P., Signal Processing & Linear Systems, Berkeley-Cambridge Press, Carmichael, CA, 1998, p. 736.
[2] Lathi, B.P., Signal Processing & Linear Systems, Berkeley-Cambridge Press, Carmichael, CA, 1998, p. 125.
[3] Chen, Chi-Tsong, Digital Signal Processing, Oxford University Press, New York, NY, p. 305.