The Fourier Transform-What’s Wrong With It?

The short answer is: Nothing. It’s perfect. However, we live in an imperfect world.

The Fourier Transform is a powerful mathematical tool used to analyze the frequency components present in a signal. It’s perfect in the sense that it does exactly what it’s mathematically designed to do: transform a function of time into a function of frequency. However, challenges arise in practical applications.

Like any tool, it has its limitations and potential pitfalls that must be understood and addressed to obtain accurate and meaningful results. The reality is that, with few exceptions, the outcomes of Fourier analysis are only approximations of the truth.

Producing useful and accurate results requires that we understand what the transform does and do our best to compensate for its shortcomings.

We will discuss:

Basics--Time Series and Spectra

Most modern data analysis systems use a discrete representation of an event called a time history. It is normally a collection of amplitude values equally spaced in time by the sample interval DT (Figure 1).

Figure 1: Time Series

Then, the signal’s spectrum (Figure 2) might be calculated to assess its frequency-dependent response.

Figure 2: Spectrum

Spectrum Analysis as a Filtering Operation

There are a variety of ways to calculate spectra. We will view it as the process of passing the time history through a bank of bandpass filters whose outputs are analyzed to determine the signal level in each band. The parameters include the number of filters, the filter shape, the filter spacing, and how the magnitude is determined. Figure 3 shows a section of the series of filters used to calculate Figure 2: A bank of 6-pole Butterworth filters spaced 1/3 octave apart (the standard for acoustic testing). The magnitude is determined by calculating the RMS level of each filter output.

Figure 3: Filter Bank (1/3 Octave)
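The filter-bank view above can be sketched in a few lines. This is a minimal illustration, not production 1/3-octave code (the function name, start frequency, and band count are my choices): a bank of 6-pole Butterworth bandpass filters spaced 1/3 octave apart, with each output reduced to its RMS level.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def third_octave_levels(x, fs, f_start=100.0, n_bands=4):
    """RMS level in successive 1/3-octave bands, each measured through a
    6-pole Butterworth bandpass filter (order 3 -> 6 poles for a bandpass)."""
    levels, fc = [], f_start
    for _ in range(n_bands):
        lo, hi = fc / 2**(1/6), fc * 2**(1/6)        # band edges around the center
        sos = butter(3, [lo, hi], btype='bandpass', fs=fs, output='sos')
        y = sosfilt(sos, x)                          # pass the signal through the filter
        levels.append(np.sqrt(np.mean(y**2)))        # magnitude = RMS of the output
        fc *= 2**(1/3)                               # next 1/3-octave center
    return levels

# a 200 Hz unit sine lands in the 4th band of a bank starting at 100 Hz
fs = 5000.0
t = np.arange(int(5*fs))/fs
levels = third_octave_levels(np.sin(2*np.pi*200*t), fs)
```

The sine's RMS (about 0.707 for unit amplitude) shows up in the band whose center matches its frequency; the other bands reject it.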

There is a wide range of options for filter shapes and methods of quantifying the magnitude. “Standard” options include Nth octave, shock response spectra (SRS), and enhancements to the Fourier Transform such as Power Spectral Density (PSD). Figure 4 shows the results of these spectral analyses for the time history in Figure 1.

Figure 4: Different Forms of Spectra of the Shock in Figure 1

They are very different in character and scaling. Further manipulation is required to make them useful from an engineering standpoint.

Of these, the Fourier Spectrum and its derivatives are the most important for most uses. Here we will look at this strategy from a different viewpoint than is normally presented.

We need to start with an understanding of Fourier Series.

Fourier Series

Fourier’s fundamental idea (Reference 1) was that any time history can be represented by a collection of appropriately sized sine and cosine waves. Exact reproduction requires an infinite number of these waves but, in “reasonable cases,” a smaller number will produce an adequate representation.

Figure 5: Fourier Series Approximation

Figure 5 shows the result of adding more components. (The inset shows the fit in the vicinity of the corner.)

  • 10 components provide a very rough approximation.
  • 26 components do better.
  • 128 components produce a very good fit.

How do we determine the magnitude and the phase of these components from the time history? The Fourier Transform.
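The idea is easy to check numerically with the classic square-wave series (the example signal is my choice): adding more sine components steadily improves the fit away from the corners, just as in Figure 5.

```python
import numpy as np

def square_series(t, n_terms):
    """Partial Fourier series of a unit square wave:
    (4/pi) * [sin(t) + sin(3t)/3 + sin(5t)/5 + ...]"""
    y = np.zeros_like(t)
    for k in range(1, 2*n_terms, 2):      # odd harmonics only
        y += (4/np.pi) * np.sin(k*t)/k
    return y

# away from the corners, more components give a better fit to the flat top (= 1)
t = np.linspace(0.5, np.pi - 0.5, 500)
err_10  = np.max(np.abs(square_series(t, 10)  - 1.0))
err_128 = np.max(np.abs(square_series(t, 128) - 1.0))
```

Near the corners the fit improves much more slowly (the Gibbs overshoot never disappears), which is why the inset in Figure 5 focuses there.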

Fourier Transform

The classic mathematical expression for the Fourier Transform (References 2, 3, 4) is

X(f) = ∫ x(t) e^(-i2πft) dt    (integrated over all time)
If this makes your eyes glaze over, have faith. We are going to look at it from the graphical/trigonometric viewpoint which I find easier to understand.

Since we are in the digital world, we will jump directly to the discrete process that calculates the real (ak) and imaginary (bk) components of the spectrum at each frequency using the relationships:

ak = (1/N) Σ x(n) cos(2πkn/N)
bk = (1/N) Σ x(n) sin(2πkn/N)        (summed over n = 0, 1, … N-1)
It is simply the process of multiplying the input signal x(t) by a bunch of sine and cosine waves and summing the result. Let's look at it graphically (Figure 6). The black curve in the left frame shows the input signal.

Figure 6: Fourier Series Analysis

For each desired spectral frequency (the kth component):

  • Build an N-point sine and cosine wave at that frequency (see Sine and Cosine Frames).
  • Multiply the input (left frame black curve) by the cosine and sine waves (point-by-point) to produce the Cosine Product and Sine Product.
  • Calculate the mean of both products. (Cosine Average and Sine Average)
    • The average of the cosine product is the real part (ak).
    • The average of the sine product is the imaginary part (bk)
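A minimal numpy sketch of these steps (the test signal and function name are my choices). Note that with this averaging convention, the magnitude must be doubled to read zero-to-peak amplitude:

```python
import numpy as np

def fourier_component(x, k):
    """a_k and b_k as the averages of the cosine and sine products,
    exactly as in the steps above."""
    N = len(x)
    n = np.arange(N)
    a_k = np.mean(x * np.cos(2*np.pi*k*n/N))   # real part
    b_k = np.mean(x * np.sin(2*np.pi*k*n/N))   # imaginary part
    return a_k, b_k

# a 1 V cosine in bin 5 plus a 0.5 V sine in bin 12
N = 256
n = np.arange(N)
x = np.cos(2*np.pi*5*n/N) + 0.5*np.sin(2*np.pi*12*n/N)
a5,  b5  = fourier_component(x, 5)    # (0.5, 0):  doubling gives the 1 V peak
a12, b12 = fourier_component(x, 12)   # (0, 0.25): doubling gives the 0.5 V peak
```

Because the sine and cosine waves at different bin frequencies are orthogonal over the window, each component "sees" only its own frequency.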

Spectra are normally displayed in terms of magnitude and phase (right frame):

Magnitudek = √(ak² + bk²)        Phasek = arctan(bk/ak)
The calculated spectral components can be used to reconstruct the input (red curve, left frame).

As you can see, a lot of calculations are required.

The Fast Fourier Transform

These calculations require an immense amount of computation. When investigators were “inventing” digital time-series analysis, the calculations were done with electromechanical calculators. Very painful.

In 1965, James Cooley and John Tukey recognized that, for arrays that were a power of 2 in length, many of the operations were duplicated and they developed an algorithm, the “Fast Fourier Transform”, that significantly reduced the number of calculations required. Other developers have produced similar algorithms for powers of 10 and others.

For many applications, modern computers make it possible to perform the Fourier Transform on large arrays of arbitrary length. For example, on my computer (i7-6700), a 1-megasample (1,048,576-point) transform takes less than 0.04 seconds. A 1,048,574-point (not a power of 2) transform takes 0.17 seconds: slower, but still very fast.

Bottom line: Except for real-time applications, where speed is critical, it is reasonable to analyze very large, arbitrary-length arrays.


One of the coolest things about the Fourier Transform is that it is reversible. The coefficients of the spectrum shown in the red curve in the right frame of Figure 6 can be used to calculate the original time history (red curve, left frame, Figure 6).


When we go from the time domain to the spectrum, it is called the Forward Fourier Transform (FT). The reverse operation is the Inverse Fourier Transform (IFT or FT⁻¹) (Figure 7).

Figure 7: Fourier Transform Reversibility

This is a very powerful tool that allows the data analysis options described in earlier blogs (Ref 5, 6).
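The round trip is easy to verify with numpy's FFT pair (the array contents here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)        # any signal, any length
X = np.fft.fft(x)                    # forward: time -> spectrum (FT)
x_back = np.fft.ifft(X).real         # inverse: spectrum -> time (IFT)
round_trip_error = np.max(np.abs(x - x_back))
```

The reconstruction matches the original to floating-point precision.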

The Fourier Transform Filter

Examination of Equation 2 and Figure 6 will show that each of the sine/cosine waves has an integer number of cycles in the window. The filter centers are at those frequencies. What are they?

Of course, Shannon and his famous theorem have to enter the discussion. The maximum frequency that we can measure with a discrete measurement system is just under Sample Rate/2; equivalently, we require slightly more than two points per cycle to define a sine wave.

So, if we have N measured points in our window, we can determine the response of N/2 equally spaced filters. The increment is DF = S/N = 1/T, where S is the sample rate, N is the number of time points in the window, and T is the window duration.

So, what is the shape of the filter? One way to find out is with a sine sweep (Figure 8).

Figure 8: Fourier Transform Filter Shape

The frames in the figure are:

  • Upper and lower left: Sine and cosine waves at the filter center frequency (10 cycles/frame)
  • Center left: The input (swept-sine) signal (sweeping from 5 to 15 cycles/frame)
  • Center: The products of the input and the filter-frequency sine and cosine waves
  • Next column to the right: The averages of the product waveforms (ak and bk)
  • Right frame: The resulting response magnitude (the filter shape)

So, the filter shape is like nothing we are used to seeing.

When we combine several adjacent filters into a filter bank, we get Figure 9.

Figure 9: Adjacent Fourier Transform Filters

Very pretty, but what does it mean to us?

  • We have a bunch of goofy-looking filters.
  • There are N/2 filters between zero frequency and the Nyquist frequency (S/2).
  • The filters are spaced DF = S/N = 1/T apart.
  • The filter shape “bounces”. The bounces are called “side lobes”. Figure 10 shows the first two lobes and the responses at critical frequencies. The phase reverses (shifts 180 degrees) at each lobe. The shape is a sinc (sin(x)/x) function.

Figure 10: Fourier Filter Close-up

  • Only frequency components that fall directly on a filter center are measured correctly.
  • A sine wave with a frequency halfway between filters (0.5 DF) will have an indicated response of about 64% of the true peak magnitude. Any frequency component that doesn’t fall on a filter center will produce significant errors; 84% of the time, the error is greater than 1%.

Most of the time, the Fourier Transform gives us an inaccurate value for the spectral magnitude. Not a good result!
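The roughly 64% half-bin response is easy to reproduce (window length and test frequencies are my choices):

```python
import numpy as np

N = 1024
n = np.arange(N)

def indicated_peak(cycles):
    """Largest spectral magnitude (scaled to zero-to-peak volts) for a
    unit sine with the given number of cycles in the window."""
    x = np.sin(2*np.pi*cycles*n/N)
    return np.max(2*np.abs(np.fft.rfft(x))/N)

on_center  = indicated_peak(10.0)   # exactly on a filter center: reads 1.0
off_center = indicated_peak(10.5)   # halfway between filters: reads about 0.64
```

The off-center value is the sinc function sampled half a bin from its peak, sin(π/2)/(π/2) = 2/π ≈ 0.64.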

So, what’s wrong (or right)?

Fourier’s basic assumption is that the time history goes on continuously from the beginning to the end of time. The signal we are analyzing has a finite length.

The Data Window

We can only look at a short section of the whole time history--and maybe that’s all we care about (transient or other short test). In any case, the length of the data set is limited by the capabilities of our data acquisition system.

The analysis covers a period T, which is called the data window.

When I first came upon this term, I found it confusing. But it’s not complicated. Just imagine looking at the world through your living room window: what you see is a small portion of the truth. In the signal-analysis world, the concept looks like Figure 11. The signal goes on forever, but we can only see a piece of it.

Figure 11: The Data Window

A simple idea but, as we will see, it can cause real headaches when we do spectral analysis.

In the discrete world, we have these constraints:

T = The length of our Window (Seconds)
N = Number of time points in the Window
S = Sample Rate = N/T
DF = S/N = 1/T = Filter Spacing

So, we have to fake it, and the way we do it is to assume that we repeat the contents of the window forever. That works if the data at the end of the window flows smoothly into the beginning of the next repeated block, as it does when a sine wave has an integer number of waves in the window (upper frame of Figure 12).

Figure 12: Integer and Non-Integer Sine Waves/Window

When there is a non-integer number of cycles in the window (lower frame), the peak spectrum indication is low and the spectrum is spread into adjacent filters over the full frequency range. This is wrong!

What we see is that the Fourier Transform is in trouble if the time history does not flow smoothly from one window to the next--that is, if the beginning and end do not agree with one another. In mathematical terms, this means that the level and all derivatives must match.
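The leakage caused by a window-end discontinuity can be quantified directly (window length and bin choices are mine): with an integer number of cycles, essentially all the spectral power lands in the correct filter; with a half-integer number, a substantial fraction is smeared across the rest of the spectrum.

```python
import numpy as np

N = 1024
n = np.arange(N)

def leaked_fraction(cycles):
    """Fraction of total spectral power landing outside the three bins
    nearest the sine frequency."""
    P = np.abs(np.fft.rfft(np.sin(2*np.pi*cycles*n/N)))**2
    k = int(round(cycles))
    return 1 - P[k-1:k+2].sum()/P.sum()

integer_leak = leaked_fraction(10.0)   # smooth repetition: essentially none
half_leak    = leaked_fraction(10.5)   # discontinuity: power spread everywhere
```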

Fourier Analysis of Transient Signals

True transients satisfy this requirement automatically: the signal is zero at both ends of the window, so we can simply repeat them over and over (Figure 13).

Figure 13: Repeated Transients

However, it’s not quite that simple. The problem is that the magnitude of the components is the average over the time window. This means that, for transients, the magnitude of the Fourier Transform (and of its specialized functions, such as the Power Spectral Density (PSD)) is not a good indication of damage potential.

Fourier Analysis of General Signals

For other cases, we need to get more creative.

Figure 14 shows a variety of signals for which we might want to calculate the spectrum.

Figure 14: Signals That Violate Fourier’s Assumptions

  A. Real signals, such as Gaussian random noise, will almost always have discontinuities at the window transitions.
  B. A sine wave with a non-integer number of waves/buffer will cause a discontinuity at the window ends.
  C. A drifting signal, where we would like to characterize the dynamic components, ignore the low-frequency drift, and minimize the effect of the window-end discontinuity.

We will suggest different strategies to “kluge” the time histories to improve our estimate of the frequency response.


For cases A and B, we will use a brute-force approach: Multiply the data in the window by a function that forces the ends to zero. This is called windowing!

From the huge number of options described in the literature (Ref 7), we will look at 3:

  • End Taper
  • Hanning
  • Flat Top

The End-Taper Window

The requirement of having the beginning and end of the data window agree is satisfied by simply tapering the window at the ends with an offset quarter cosine function (Figure 15).

Figure 15: 10% End Taper Window

Performing a sine sweep through this filter produces Figure 16.

Figure 16: 10% End Taper Window Sine Sweep

Shaping the time history with this window does not affect the filter shape much. It still has significant side lobes and frequency components off-center are poorly represented.

The Hanning Window

The most common and normally useful window is Hanning (Figure 17).

Figure 17: Hanning Window 

Performing a sine sweep on the window produces Figure 18:

Figure 18: Hanning Window Sine Sweep

The filter shape:

  • Is a little wider than the boxcar (no window).
  • Has much smaller side lobes.
  • Is, overall, a better filter.

To compensate for the attenuation at the ends of the window, the result must be multiplied by a factor to recover the signal loss. The factor depends on whether you are calculating level (G) or power (G²).

  • Level: 2
  • Power: 2.663
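The level factor of 2 can be verified directly (the signal amplitude and bin are my choices): a Hanning-windowed on-center sine reads exactly half its true zero-to-peak amplitude, and doubling restores it.

```python
import numpy as np

N = 1024
n = np.arange(N)
x = 0.8*np.sin(2*np.pi*50*n/N)             # 0.8 V peak sine on a filter center
hann = 0.5 - 0.5*np.cos(2*np.pi*n/N)       # Hanning window
raw = np.max(2*np.abs(np.fft.rfft(x*hann))/N)   # windowed magnitude: reads 0.4
corrected = 2.0 * raw                            # level factor of 2 restores 0.8
```

(The power factor comes from the window's mean-square value: mean(hann²) = 3/8, and 1/(3/8) ≈ 2.67, essentially the 2.663 quoted above.)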

The Flat-Top Window

The flat-top is the most extreme window that is generally used. (Figure 19)

Figure 19: Flat-Top Window 

Performing the sine sweep through this Fourier filter produces Figure 20:

Figure 20: Flat-Top Window Sine Sweep

This window produces a filter that has essentially constant magnitude across a full center bin (Figure 21). It also has excellent rejection of frequency components outside its bandpass. It is useful for extracting the magnitude of sinusoidal components from a signal.

Its downside is that it has poor frequency resolution because of the broad filter shape.

The factors to recover level and energy for the Flat-Top window are:

  • Level: 3.977
  • Power: 5.707

Figure 22: Window Comparison

Figure 22 compares the characteristics of these windows. They all:

  • Taper the data with a function that forces the window ends and slopes to zero, satisfying Fourier’s basic requirement of smooth transitions between windows.
  • Attenuate the effect of the data at the ends of the window.
  • Change the shape of the filter. As the time-domain window is squished inward, the spectral domain filter gets broader, and the side lobes are diminished.

Signals With a Wandering Baseline

We often face signals of interest superimposed on a low-frequency baseline (Figure 23).

Figure 23: Signal With a Wandering Baseline

This signal violates the basic assumption of having the beginning and end of the window flow smoothly into each other.

One approach is to subtract a smooth curve to remove the wander from the data. Options that might be used to construct this curve include:

  • High-pass filtering
  • Polynomial
  • Cubic curve with end-slope Matching

The first suffers from end-point distortions: filter initialization issues at the beginning and end of the time block mean that only a portion of the block is usable (Figure 24).

Figure 24: Back-to-Back Butterworth Filters (Two-Pole)

Figure 25 shows the result of subtracting a third-order polynomial function from the data.

Figure 25: Polynomial Fit (3rd Order)

This approach gives a good fit over the full range. The ends of the window seem to agree well. However, the errors in the spectrum (red curve, bottom frame) are significant.

Figure 26 shows the result of a cubic curve fit.

Figure 26: Cubic Curve with End-Slope Matching

This is a two-stage process:

  • The slopes (A and B) at the beginning (1) and end (2) of the time history are characterized with linear least-squares fits.
  • A cubic curve is generated based on the slope and end points calculated in the first step.

The fit does not look as good as the polynomial result but the spectrum errors are much lower. That is because the ends of the window are characterized better by the linear fits calculated in the first step. The ends of the data with the curve approximation subtracted flow better between windows.

For this case, the cubic-curve approach works better than the polynomial approximation. Other cases may be different.
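The two-stage scheme above can be sketched as follows. Note the assumptions that are mine: the fraction of points used for the end fits, the function name, and the use of the cubic Hermite form for the curve through the two end points and slopes.

```python
import numpy as np

def detrend_cubic_endmatch(t, x, frac=0.1):
    """Stage 1: linear least-squares fits over the first and last `frac`
    of the points give the end values and end slopes.
    Stage 2: the unique cubic matching both end values and slopes
    (a cubic Hermite) is subtracted from the data."""
    m = max(2, int(len(t)*frac))
    s0, c0 = np.polyfit(t[:m],  x[:m],  1)     # slope, intercept at the start
    s1, c1 = np.polyfit(t[-m:], x[-m:], 1)     # slope, intercept at the end
    t0, t1 = t[0], t[-1]
    p0, p1 = s0*t0 + c0, s1*t1 + c1            # end values from the line fits
    h = t1 - t0
    u = (t - t0)/h
    trend = (p0*(2*u**3 - 3*u**2 + 1) + p1*(3*u**2 - 2*u**3)
             + s0*h*(u**3 - 2*u**2 + u) + s1*h*(u**3 - u**2))
    return x - trend

# example: a drifting baseline under a small sine
t = np.linspace(0.0, 1.0, 2000)
clean = detrend_cubic_endmatch(t, (5.0 + 2.0*t) + 0.3*np.sin(2*np.pi*100*t))
```

A useful sanity check on the construction: a purely linear baseline is removed exactly, since the cubic Hermite reproduces a straight line when the end slopes agree.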

The Spectrum Resolution Problem-Spectral Upsampling

We have seen that the Fourier Transform calculates the spectral characteristics at N/2 equally spaced frequencies between zero and the Nyquist frequency (linear spacing). However, much of the time, nature does not behave this way. For example, structural vibration modes tend to be distributed logarithmically in frequency. The result is that low-frequency behavior may be poorly characterized by the Fourier Transform.

An example is shown in Figure 27. It is a shock data set that has a lot of low-frequency energy.

Figure 27: Shock Data with Inadequate Spectral Resolution

This 16.384-ms (16,384-point) data set was acquired at 1 million samples/second. This produces a spectral resolution of 1/0.016384 = 61.04 Hz, which is obviously not fine enough to characterize the behavior below 500 Hz.

Our old friend Shannon comes to the rescue. His theorem says that if we sample fast enough to avoid aliasing, the signal is completely known. How do we make the spectral resolution finer? We increase the period.

The trick is to increase the length of the time history by padding it with zeroes.

Figure 28: Spectral Upsampling by Zero Padding the Time History

Figure 28 shows the effect of zero-padding the time history to increase its length by a factor of up to 10. The result is to reduce the spectral line spacing by the same factor. The upper-left frame shows the acquired time history; the padded version is in the lower left. The right frame shows the acquired spectrum (red) and the upsampled version (black). The low-frequency behavior is brought out in detail and shows significant responses at 42 and 280 Hz that are not visible in the raw spectrum.

This trick will work only with signals that satisfy the criterion of continuity of the beginning and end of the original time history. Therefore, only transients, or signals that have been windowed, can be enhanced in this way.
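A sketch of the padding trick on a synthetic transient (the signal parameters are mine): a 64-point record gives a raw line spacing of 15.6 Hz; padding to 10x the length refines it to 1.56 Hz, so the peak can be located much more precisely.

```python
import numpy as np

fs = 1000.0
N = 64                                       # short transient: raw DF = fs/N = 15.6 Hz
t = np.arange(N)/fs
x = np.exp(-t/0.01)*np.sin(2*np.pi*140*t)    # decaying 140 Hz oscillation

pad = 10                                     # pad to 10x the length: DF = 1.56 Hz
X = np.fft.rfft(x, pad*N)                    # rfft zero-pads to the requested length
f = np.fft.rfftfreq(pad*N, 1/fs)
f_peak = f[np.argmax(np.abs(X))]             # peak now resolved near 140 Hz
```

Note the signal qualifies for this trick: it starts at zero and has decayed to essentially zero by the end of the window.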

A Second (and Quicker) Way to Calculate the Window Filter Shape

The zero-padding strategy may also be used to determine the filter shape of the various windows. Figure 29 shows the process:

  • The window shape (upper left) is padded with zeroes (lower left).
  • Its spectrum is calculated with a Fourier Transform (upper right).
  • The spectrum is reversed and added to produce the filter shape (lower right).

Figure 29: The Filter Shape of the Hanning Window by Fourier Transform

An infinite zero extension is required to get the exact answer but padding by 16 x the window period will produce satisfactory results.
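The same recipe in code, using the Hanning window and the 16x padding suggested above (the window length is my choice): the padded transform samples the window's filter shape on a fine frequency grid.

```python
import numpy as np

N = 64
pad = 16                                     # 16x zero extension
n = np.arange(N)
w = 0.5 - 0.5*np.cos(2*np.pi*n/N)            # Hanning window
W = np.abs(np.fft.rfft(w, pad*N))            # pad, then transform
W /= W[0]                                    # normalize the center response to 1
f = np.fft.rfftfreq(pad*N, 1/N)              # frequency in units of DF = 1/T

# the Hanning filter is 0.5 at one filter spacing off-center, with a null at two
resp_1df = W[pad]        # exactly 1 DF from the center
resp_2df = W[2*pad]      # 2 DF from the center: the first null
```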

Making the Results Useful for Engineering Applications

The Fourier Transform, as we have discussed it so far, gives us a good characterization of the frequency characteristics of the data. How do we interpret the results in a meaningful manner for engineering applications?

First, and most important, the magnitude of the Fourier Spectrum is meaningful only for continuous data like sine and random. The problem is that the magnitude is the average over the full window. The Fourier spectral magnitude of a transient is not a good indicator of the severity of the signal.

Our analysis approach depends on the nature of the data set.

Analysis of Data with Primarily Sinusoidal Components: Fourier Spectrum Magnitude

The Fourier Transform of a sinusoid is a “spike” at the signal frequency. Its magnitude is equal to the zero-to-peak amplitude of the sine (Figure 30).

Figure 30: Fourier Spectrum of 3Hz, 1 Volt(peak) + 8 Hz, 0.5 Volt(peak) Signal.
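The Figure 30 example can be reproduced directly (the sample rate and window length are my choices; both tones have an integer number of cycles in the window, so the spikes read true zero-to-peak volts):

```python
import numpy as np

fs, T = 100.0, 2.0                  # 2 s window: DF = 0.5 Hz
N = int(fs*T)
t = np.arange(N)/fs
x = 1.0*np.sin(2*np.pi*3*t) + 0.5*np.sin(2*np.pi*8*t)   # the Figure 30 signal
mag = 2*np.abs(np.fft.rfft(x))/N    # scale so spikes read zero-to-peak volts
f = np.fft.rfftfreq(N, 1/fs)
# 3 Hz falls in bin 6 (3/0.5) and 8 Hz in bin 16 (8/0.5)
```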

Analysis of Data with Primarily Random Components: Power Spectral Density

Signals that are essentially continuous, but non-sinusoidal, are normally characterized with the PSD (Ref 8, 9):

PSD(fk) = Mk² / (2·DF)

where Mk is the zero-to-peak spectral magnitude in filter k and DF is the filter bandwidth.
This relationship calculates the power per unit bandwidth in the signal and is the accepted method to characterize potential damage from a random time history. Multiple blocks of the time history are normally averaged to improve the reliability of results (Figure 31).

Figure 31: Ensemble Averaging of a PSD Spectrum

As the number of averages rises, the estimate of the PSD (black curve) improves.
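A sketch of ensemble averaging using scipy's Welch implementation, which averages overlapping Hanning-windowed blocks (the block size and record length are my choices). For unit-variance white noise, the one-sided PSD should be flat at 2/fs, and averaging dramatically reduces the scatter around that level:

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0
rng = np.random.default_rng(1)
x = rng.standard_normal(int(60*fs))            # 60 s of unit-variance white noise
# a single 2048-point block vs. ~57 averaged (50%-overlapped) blocks
f1, p1   = welch(x[:2048], fs, nperseg=2048)   # one block: very ragged estimate
fm, pavg = welch(x,        fs, nperseg=2048)   # many averages: much smoother
scatter_1   = np.std(p1)  /np.mean(p1)         # relative scatter, single block
scatter_avg = np.std(pavg)/np.mean(pavg)       # relative scatter, averaged
```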

Characterization of signals with both random and sinusoidal components will require two separate analyses.


I have demonstrated some of the vagaries of the Fourier Transform and shown that it is perfect in concept but must be manipulated in most real cases because we don’t have an infinitely long time history. A variety of kluges have been suggested to overcome this issue and get the best approximation to the truth.


References

  1. Fourier Series - Wikipedia
  2. Fourier Transform - Wikipedia
  3. Fourier Transform, A Brief Introduction - Physics LibreTexts
  4. FFT: Equations and History - Shayan Ushani, EDN Magazine, 2016
  5. Spectral-Domain Time-Series Analysis: Tools That Improve Our View and Understanding of the Data (Strether Smith, Mide blog)
  6. Reducing Measurement System Distortions with Transfer Function Correction Strategies (Strether Smith, Mide blog)
  7. Window Function - Wikipedia
  8. Comparing the Fourier Transform, the Power Spectral Density, and the Aggregate FFT for Vibration Analysis (Steve Hanly, Mide blog)
  9. How to Calculate a Better FFT by Leveraging the PSD Method (Steve Hanly, Mide blog)

Strether Smith

I'm looking for new ways to create some dialog on modern/advanced methods for data acquisition and structural-dynamic system analysis. I have a lot of experience (good and bad) to share and think posting as a guest on the enDAQ blog is a perfect venue to help reach and educate engineers interested in these areas.
