## Saturday, February 12, 2011

### Quantized Redshifts. IV. The Fourier Transform, FFT & PSD

Continuing my series on Fourier analysis and its application to power spectra...

Fourier Series begat the Fourier Transform...
The Fourier series is a powerful technique, but had the limitation that it required waves whose wavelengths changed in integral fractions of your scale distance, L or 2L.  What happened if you needed wavelengths not subject to that limitation?  Some simple manipulations from the calculus enabled the Fourier series to be extended to the realm of continuous functions in the form of the Fourier Transform.

In this section, we'll make some notational changes to bring our conventions more in line with usage in the physics community.

1) We adopt the physics custom of presenting the frequency as the angular frequency, related to the traditional frequency, f, by

$$\omega = 2\pi f$$

Omega is a frequency measured in radians per unit time (there are 2*pi ~6.28 radians in a circle).

2) While studying the Fourier Series, we separated the sine and cosine terms.  Now we will combine them into an exponential term which is a complex number (it contains real and imaginary parts) using Euler's formula (Wikipedia),

$$e^{i\theta} = \cos\theta + i\sin\theta$$
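The identity that combines sine and cosine into a complex exponential, exp(i*theta) = cos(theta) + i*sin(theta), is easy to check numerically (a quick sketch of my own in Python, using the standard cmath module):

```python
import cmath
import math

# Check exp(i*theta) = cos(theta) + i*sin(theta) at an arbitrary angle.
theta = 0.75
lhs = cmath.exp(1j * theta)                      # complex exponential
rhs = complex(math.cos(theta), math.sin(theta))  # real + imaginary parts
assert abs(lhs - rhs) < 1e-12
```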

With these revisions, we can then write the Fourier transform as

$$F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt$$

The inverse transform, which converts the transform back into the original function, f(t), becomes

$$f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)\, e^{i\omega t}\, d\omega$$

3) We can also do the Fourier transform in a spatial (position) coordinate.  For this, we will choose x (it could just as easily be y or z).  In this case, the 'conjugate coordinate' is called the wave number, usually designated by the letter k.  The wave number is related to the wavelength, lambda, of the wave by

$$k = \frac{2\pi}{\lambda}$$

The wave number is the number of waves (or radians of a wave) per unit of distance.  Note that the product of frequency and wavelength is the speed of the wave.  Similarly, if we use the angular frequency and wave number, it is basic algebra, using the definitions above, to demonstrate the relationship

$$v = f\lambda = \frac{\omega}{k}$$

In the spatial dimension, the Fourier transform and its inverse can be written

$$F(k) = \int_{-\infty}^{\infty} f(x)\, e^{-ikx}\, dx, \qquad f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(k)\, e^{ikx}\, dk$$

As with the Fourier series, the normalization of these equations can change slightly based on the conventions chosen.  This is generally not a problem so long as the conventions are used consistently.

The Discrete Fourier Transform

But the Fourier transform has a problem when applied to real data, because real data generally do not form a truly continuous function.  When a measurement is made, it is determined at a specific time, t, and a position, (x,y,z), in a given coordinate system.  A full dataset consists of many such measurements at discrete times and/or positions.

If we take measurements uniformly sampled in time, at intervals T, then each measurement takes place at a time

$$t_k = kT$$

where k is an integer.  We would say that T defines our bin size.  From this, we can compute the Discrete Fourier Transform (DFT)

$$F_n = \sum_{k=0}^{N-1} f_k\, e^{-2\pi i nk/N}$$

where the value of the transform, F_n, is computed in bins corresponding to the discrete frequencies

$$f_n = \frac{n}{NT}$$

We can also define the inverse transformation,

$$f_k = \frac{1}{N}\sum_{n=0}^{N-1} F_n\, e^{2\pi i nk/N}$$
Because of the discrete time sampling and the finite size of the dataset, the DFT has a finite frequency resolution, 1/(NT).  This is the smallest frequency separation that can be resolved for a given dataset.  You can make this frequency resolution finer by increasing the total duration, NT, of the data.
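To make the DFT concrete, here is a short sketch of my own (using NumPy; not from the original post) that evaluates the DFT sum directly and compares it against a library FFT routine:

```python
import numpy as np

# Direct O(N^2) evaluation of the DFT sum, F_n = sum_k f_k exp(-2*pi*i*n*k/N),
# compared against NumPy's FFT routine (which computes the same quantity).
def dft(f):
    N = len(f)
    n = np.arange(N)
    k = n.reshape((N, 1))
    return np.exp(-2j * np.pi * k * n / N) @ f

f = np.random.default_rng(0).standard_normal(16)  # 16 arbitrary samples
F_direct = dft(f)
F_fft = np.fft.fft(f)
assert np.allclose(F_direct, F_fft)  # same transform, different algorithms
```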

The DFT also has an upper limit to the frequency which can be identified in a given dataset, called the Nyquist frequency, 1/(2T).  This frequency can be increased by increasing the number of samples, N, in a fixed duration of time.
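The relationships between sample interval, frequency resolution, and Nyquist frequency can be checked numerically (a NumPy sketch of my own; the sample count and interval are illustrative choices):

```python
import numpy as np

# For N samples at interval T: frequency resolution df = 1/(N*T),
# Nyquist frequency f_Ny = 1/(2*T).
N, T = 1000, 0.01                  # illustrative: 1000 samples, 10 ms apart
freqs = np.fft.fftfreq(N, d=T)     # the DFT bin frequencies
df = freqs[1] - freqs[0]           # resolution: 1/(N*T) = 0.1 Hz
f_max = freqs.max()                # highest positive bin, just below 1/(2*T) = 50 Hz
```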

In one-dimensional cases, we can generally freely interchange the pairs of time/frequency and position/wave number.

...begat the Fast-Fourier Transform (FFT)...

With the advent of digital computers, the hunt was on for algorithms that could efficiently use the computational power.  While the Fourier transforms were computationally simple, researchers wanted to apply these transforms to large amounts of data.  Using the standard recipe above, the computational time required grew as N^2, where N is the number of samples.  If you wanted to compute a transform for four times as many data points, the computation generally took 4^2 = 16 times longer.  This quickly becomes computationally expensive as datasets grow larger.

Enter the Fast-Fourier Transform (FFT), the best known of the algorithms developed by Cooley & Tukey (Wikipedia) in the mid-1960s.  This technique sped up the computation of the DFT, but with a couple of trade-offs.  The speedup, from N^2 to N*log(N) operations, was achieved by recursively splitting the transform into smaller transforms over the even- and odd-numbered samples and efficiently recombining the results.  The most notable trade-off was that the algorithm is most efficient for data samples with sizes that are integral powers of two, such as 2, 4, 8, 16,...1024, 2048, ...16384,... 2^n.
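To illustrate the divide-and-conquer idea, here is a minimal radix-2 Cooley-Tukey FFT (my own sketch in Python/NumPy, not production code), which only works when the number of samples is a power of two:

```python
import numpy as np

# Minimal radix-2 Cooley-Tukey FFT: split the samples into even- and
# odd-indexed halves, transform each half recursively, then recombine
# with 'twiddle factors'.  This recursion is what reduces the cost from
# N^2 to N*log2(N), and it requires N to be a power of two.
def fft_radix2(f):
    f = np.asarray(f, dtype=complex)
    N = len(f)
    if N == 1:
        return f
    even = fft_radix2(f[0::2])
    odd = fft_radix2(f[1::2])
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

x = np.random.default_rng(1).standard_normal(1024)  # 1024 = 2**10 samples
assert np.allclose(fft_radix2(x), np.fft.fft(x))    # matches the library FFT
```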

The major implication of this is that the DFT (and therefore the FFT) is not identically equal to the continuous Fourier transform of the underlying function.

...and Power Spectral Density (PSD)

The Fourier transform also has an analog to the Parseval Relation (Wikipedia), as noted in Quantized Redshifts. III.  How is Fourier Series Related to the PSD?  For the Fourier Transform, the relation is

$$\int_{-\infty}^{\infty} |f(t)|^2\, dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} |F(\omega)|^2\, d\omega$$

For the Discrete Fourier Transform (DFT) and the Fast-Fourier Transform (FFT), this relationship can be written:

$$\sum_{k=0}^{N-1} |f_k|^2 = \frac{1}{N}\sum_{n=0}^{N-1} |F_n|^2$$

By this analogy, we define the Power Spectral Density (PSD), or Power Density Spectrum (PDS), or Power Spectrum (PS) as

$$P_n = |F_n|^2$$

By these definitions, the PSD is never negative (negative power doesn't really make sense).  Phase information about the waves in the dataset, which is determined by comparing the real and imaginary components, is lost in the PSD.
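A short sketch of my own (in NumPy; the test signal is an arbitrary choice) showing the PSD and the discrete Parseval relation in action:

```python
import numpy as np

# PSD as |F_n|^2, and a check of the discrete Parseval relation:
# sum_k |f_k|^2 = (1/N) * sum_n |F_n|^2.
N, T = 512, 0.1
t = np.arange(N) * T
f = np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.cos(2 * np.pi * 3.0 * t)

F = np.fft.fft(f)
psd = np.abs(F) ** 2                                # never negative
assert np.allclose(np.sum(f ** 2), np.sum(psd) / N)  # Parseval holds
# Taking |F|^2 discards the complex phase of F -- this is the
# phase information lost in forming the PSD.
```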

General Properties and Insights
• For real-valued input data, the DFT and FFT are conjugate-symmetric, so the power at frequencies greater than zero is equal to the power at the corresponding frequencies less than zero.  For this reason, we generally only plot the positive-frequency side.  In the case of the FFT, the array bins corresponding to the negative frequencies are actually stored in the 'upper' half of the array.  Often, researchers are only interested in the location of any structures, but in cases where we want to measure the relative power, the researcher may multiply the amplitude of the positive frequencies by two to account for the power in the negative frequencies (and satisfy the Parseval relation).
• In the DFT and the FFT, the Fourier-transformed term in bin zero (corresponding to zero frequency, or a constant) is simply the sum of all the data values, i.e., N times the mean.  If the term is zero, it means the function contains equal positive and negative amplitudes.  For most real datasets, where the data are all positive measurements, this quantity carries no frequency information and is often very large compared to the frequency structure the researcher is interested in.  Therefore it is sometimes the custom not to plot the zero-frequency bin.
• Another problem one encounters in the transition from Fourier transforms on functions to DFTs and FFTs on discrete data points is aliasing.  The Fourier transform of a function can have non-zero amplitude at frequencies ranging from negative infinity to positive infinity, even if the original function is non-zero over only a finite range of time.  But what happens in the DFT and FFT, where there is a maximum frequency (the Nyquist frequency) that can be analyzed?  The answer is that the power from frequencies above the Nyquist frequency gets aliased into frequencies below the Nyquist frequency.  This means that some of the power measured at a given frequency is not necessarily from that frequency, and care must be exercised in the interpretation.
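Aliasing is easy to demonstrate numerically (my own NumPy sketch; the 7 Hz signal and 0.1 s sampling interval are illustrative choices):

```python
import numpy as np

# Aliasing: with T = 0.1 s the Nyquist frequency is 1/(2*T) = 5 Hz.
# A 7 Hz sine sampled this way is indistinguishable from a 3 Hz sine,
# because 7 Hz folds back to |7 - 1/T| = 3 Hz.
N, T = 100, 0.1
t = np.arange(N) * T
signal = np.sin(2 * np.pi * 7.0 * t)     # 7 Hz, above the 5 Hz Nyquist limit

psd = np.abs(np.fft.fft(signal)) ** 2
freqs = np.fft.fftfreq(N, d=T)
peak = freqs[np.argmax(psd[: N // 2])]   # strongest positive-frequency bin
assert abs(peak - 3.0) < 1e-6            # the 7 Hz power appears at 3 Hz
```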

Extending the Fourier Transform to Three Dimensions

We examined the one-dimensional Fourier transform above.  However, we can expand the transform from 1 dimension to 3 dimensions.  This is easiest when done using the wave number, k, because it acts as a 3-dimensional vector.

Using the wave number, we can represent the direction of the wave by making k a vector of magnitude 2*pi/lambda pointing along the direction of propagation,

$$\mathbf{k} = \frac{2\pi}{\lambda}\,\hat{\mathbf{n}}$$

With a 3-dimensional form of the wave number vector, we can define the phase as the projection of the wave number vector on the position vector,

$$\phi = \mathbf{k}\cdot\mathbf{x} = k_x x + k_y y + k_z z$$

With this definition of the phase, the Fourier Transform can then be extended to three dimensions as

$$F(\mathbf{k}) = \int f(\mathbf{x})\, e^{-i\mathbf{k}\cdot\mathbf{x}}\, d^3x$$

with the inverse transformation defined as

$$f(\mathbf{x}) = \frac{1}{(2\pi)^3}\int F(\mathbf{k})\, e^{i\mathbf{k}\cdot\mathbf{x}}\, d^3k$$

where the integration is performed over the volume of the system.
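A quick 3-dimensional check (my own sketch using NumPy's n-dimensional FFT; the grid size and wave vector are arbitrary choices): the transform of a plane wave should concentrate all of its power in the single bin matching the wave vector.

```python
import numpy as np

# 3-D FFT of a plane wave exp(i k.x): all the power should land in the
# single bin corresponding to the wave vector k.
n = 16                                    # arbitrary grid size
x = np.arange(n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
kx, ky, kz = 3, 5, 2                      # wave numbers, in units of 2*pi/n
wave = np.exp(2j * np.pi * (kx * X + ky * Y + kz * Z) / n)

F = np.fft.fftn(wave)
peak = np.unravel_index(np.argmax(np.abs(F)), F.shape)
assert peak == (3, 5, 2)                  # the transform recovers the wave vector
```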

This is the proper 3-dimensional form of the Fourier transform that preserves the orthogonality characteristics (that the 'waves' identified are linearly independent).  I shall show in a future post that the radial form used by many quantized redshift supporters is NOT equivalent to this form and therefore cannot reliably identify periodicities in a 3-dimensional space.

The Fourier transform can also be extended to 4 dimensions (3 spatial + 1 temporal), where the function is integrated over a volume and time.  The argument of the complex exponential becomes omega*t - k·x, which is an invariant in relativity.  This form is used heavily in electromagnetism, in the design of antennas and waveguides.

Next, we explore the power spectral density of some known datasets.

Next Weekend: Playing with the PSD