Marco Frasca and Alfonso Farina

From rainbow, to prism, to harmonic
analyzer, to PC


With Isaac Newton’s discovery that white light decomposes into different colors, it became widely accepted that some phenomena that happen in Nature, and appear complex at first sight, can be seen as the composite combination of simpler effects. The experiment of Newton’s disc gave a first, deeper understanding of the nature of light, largely inspired by a common effect seen in Nature: the rainbow [1].



Figure 1 - Newton's original paper on the spectrum of the white light.

Later, the realization of an optical prism made it possible to repeat Newton’s experiment, but this had far-reaching consequences: it opened the possibility of understanding what stars are made of and, ultimately, the matter content of the Sun. Indeed, this lies at the foundations of spectroscopy.

Figure 2 - Effect of sunlight through a prism as devised by Newton.

Spectroscopy was used to study the emission spectra of common gases such as hydrogen and helium, and some strange absorption lines were observed that are well described by Rydberg’s formula (Wikipedia, Rydberg formula, n.d.). Along a similar line, black-body radiation was an open problem that caused a major revolution in our understanding of Nature. Max Planck laid the basis for quantum mechanics in 1900 with his black-body radiation formula where, again, all the different colors of light enter into the description of a furnace, but with different weights.

Quantum mechanics lies at the foundations of modern Personal Computers (PCs), as the chips inside their circuits rely on a deeper comprehension of the behavior of conducting solid matter due to Enrico Fermi and Paul Dirac. In this way, people learnt how semiconductors work and invented the main building block of our chips: the transistor (Bardeen, Brattain, and Shockley, 1947).


Figure 3 - Harmonic analyzer

In 1894, the German mathematician Olaus Henrici designed a harmonic analyzer for determining the harmonic components of complex sound waves such as those from musical instruments. The device employed several pulleys and glass spheres connected to measuring dials that gave the phases and amplitudes of 10 Fourier harmonic components.

What does all this have in common with Newton’s initial observation? That complex signals in Nature can be reduced to a composition of simpler ones. This motivated a deeper breakthrough due to Joseph Fourier with the invention of his series, which is essential both for understanding the rainbow and for any mathematical treatment in quantum mechanics, as well as in a great part of modern technologies such as radar, cellphones and TV broadcasting. This is the main purpose of this article. Figure 4 shows a PC, with suitable software inside to process data gathered from the operator, as a prism realized with state-of-the-art technology.




Figure 4 - PC as a modern prism.

 Fourier transform made easy

Fourier: A genius of humanity

Joseph Fourier (Farina, 2019) was born on March 21, 1768 at Auxerre in France, the son of a tailor. Orphaned at the age of 9, he was educated by the Benedictine order at the convent of St. Mark. He promoted the French Revolution and, imprisoned during the Terror, risked his life. Napoleon Bonaparte took him, as scientific advisor, on his campaign in Egypt. His fundamental work was the book “Théorie analytique de la chaleur” (The Analytical Theory of Heat), published in 1822, where he laid down the foundations of the trigonometric series and postulated his most famous equation describing heat diffusion. In the effort to solve the heat equation, which lies at the foundations of stochastic processes, Fourier came up with his series. This work was opposed mainly by Laplace and Lagrange, and the label of “false” theorem was attached to the Fourier series, the main reason being the unclear conditions for its convergence. This problem was finally addressed by Dirichlet in 1829, just a year before Fourier’s death.

Fourier transform definition and properties

A generic periodic function f(t), such that f(t + T) = f(t) (with T being the period of the function), has the property that it can be described by a combination of sine and cosine functions. Fourier discovered this property at the beginning of the XIX century and used it to solve the equation, found by himself, that describes heat diffusion.
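As a concrete illustration (a minimal Python sketch, not taken from the original text), the partial sums of the Fourier series of a square wave show how adding more sine terms sharpens the approximation of a periodic function:

```python
import numpy as np

def square_wave_partial_sum(t, n_terms):
    """Partial Fourier sum of a square wave of period 2*pi:
    f(t) = (4/pi) * [sin(t) + sin(3t)/3 + sin(5t)/5 + ...]."""
    result = np.zeros_like(t, dtype=float)
    for k in range(1, 2 * n_terms, 2):   # odd harmonics 1, 3, 5, ...
        result += (4.0 / np.pi) * np.sin(k * t) / k
    return result

t = np.linspace(0.0, 2.0 * np.pi, 1000)
approx = square_wave_partial_sum(t, 50)   # 50 odd harmonics
# Away from the jumps, the sum already hugs the ideal square wave sign(sin t).
```

With only a handful of terms the corners are rounded; with 50 harmonics the flat parts are reproduced to about one percent.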

The reason why such a discovery is fundamental for all humankind is that Nature is strongly based on periodic phenomena; for example, this gives a possible way to measure time. A flood of applications of the Fourier series was therefore found in the course of the following centuries, radically changing our history and improving our way of life.

A typical example of a periodic function is the one that describes the motion of the Earth around the Sun, as happens also for the other planets in the solar system. This explains why the Ptolemaic model worked so well for such a long time. Indeed, one can approximate such a motion by adding sines and cosines with different periods. By adding even more, we can fit such motions quite satisfactorily, even while wrongly keeping the Earth at the center of the Universe. Of course, when measurements improved, thanks to Tycho Brahe, the Ptolemaic system was doomed. But, in principle, with such epicycles we can represent whatever curve we like (Mathologer, 2018).

Such a powerful technique can be put to better uses, e.g., solving difficult equations that describe phenomena of our everyday life, as for cell phones, TV broadcasting and radar. The idea to start with is given by an ordinary vector in a Cartesian system of coordinates, representing our commonsense view of space with length, height and depth. A vector is a mathematical entity with an orientation and thus a prefixed direction (you can imagine it as an arrow placed in our space). The length of a vector is an intrinsic property (you can move the arrow as you like, but its length will remain unchanged). This is only possible if we combine contributions coming from the coordinate axes. These are called vector components and can change if we rotate our system of coordinates (messing around with the arrow will change the components). The invariant part, as already said, is the length. So, a vector in our ordinary space can be represented with 3 numbers, because 3 are the dimensions of our space.

For functions, 3 dimensions are not enough. We learnt at the very beginning of the last century, thanks to mathematicians like David Hilbert and Stefan Banach, that in this case an infinite number of dimensions is needed. Therefore, our “vector”, which in this case is a function, is decomposed along infinitely many axes, each of which corresponds to a function itself!

The meaning of the Fourier transform

Fourier, looking for a solution describing the propagation of heat, proposed a series of sines and cosines. This series is a good representation of periodic functions. The formula is


Figure 5 - The Fourier Transform in detail.
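In modern notation, for a function f(t) of period T, the series sketched in Figure 5 can be written in the standard form (the exact symbols in the figure may differ):

```latex
f(t) = \frac{a_0}{2}
     + \sum_{n=1}^{\infty}\left[a_n \cos\!\left(\frac{2\pi n t}{T}\right)
     + b_n \sin\!\left(\frac{2\pi n t}{T}\right)\right],
\qquad
a_n = \frac{2}{T}\int_0^T f(t)\cos\!\left(\frac{2\pi n t}{T}\right)dt,
\qquad
b_n = \frac{2}{T}\int_0^T f(t)\sin\!\left(\frac{2\pi n t}{T}\right)dt .
```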

The meaning of this deep equation is that most of the patterns that appear in Nature can be thought of as a superposition of sinusoidal components at different frequencies. Therefore, such components can be used to analyze patterns, create them, extract essential features, and remove random noise (where is the signal?).
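A small Python sketch (with made-up frequencies and noise level) illustrates this use of the components: decompose a noisy signal with the FFT, keep only the strong sinusoidal components, and transform back.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                    # sampling rate in Hz (illustrative value)
t = np.arange(fs) / fs       # one second of samples
# Clean signal: two sinusoidal components, at 50 Hz and 120 Hz.
clean = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
noisy = clean + rng.normal(0.0, 0.8, fs)

# Decompose with the FFT, zero out the weak (noise) components, invert.
spectrum = np.fft.rfft(noisy)
threshold = 0.3 * np.abs(spectrum).max()
spectrum[np.abs(spectrum) < threshold] = 0
denoised = np.fft.irfft(spectrum, n=fs)

# The reconstruction is far closer to the clean signal than the raw data.
err_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
err_denoised = np.sqrt(np.mean((denoised - clean) ** 2))
```

The two sinusoidal components stand far above the noise floor in the frequency domain, so a simple amplitude threshold recovers them almost exactly.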

The Fourier technique is very widely used, for example in signal processing for telecommunications, radar, sonar and image processing (for synthetic aperture radar, sonar and medicine), and in quantum mechanics. It is used to find the structure of large biological molecules like DNA, to compress image data in digital photography, to clean up old or damaged audio recordings, and to analyze earthquakes. Modern variants are used to store fingerprint data efficiently and to improve medical scanners.

The Fourier transform is like an exchange bureau where you present a pile of mixed-up currencies and need them turned back into well-ordered stacks of money to be cashed in. You have an unreadable function varying in time and you ask how to decode it into the different frequencies that appear mixed up within it. The Fourier transform returns to you a readable, ordered account of the content of that pile of frequencies. In some way, an indistinguishable, confusing behavior can be simplified into a sum of simpler functions like sines and cosines.


Fast Fourier Transform

The idea of evaluating the Fourier transform in a fast way can be traced back to Carl Friedrich Gauss (1777-1855) in an unpublished work of 1805, reported in his collected works. Gauss conceived it to interpolate the orbits of the asteroids Juno and Pallas from observations. Indeed, this approach was remarkably similar to the one devised by James Cooley and John Tukey in 1965 that is used today and that credited them with the invention of the modern FFT algorithm. What may seem even more unbelievable is that Gauss, in some way, also predated Fourier’s work.



Figure 6 - Portrait of Carl Friedrich Gauss by Christian Albrecht Jensen (about 1840).

Since then, other authors proposed algorithms for the FFT and, in some way, Gauss’ work went unnoticed. Many of these methods just reduced the constant factor in the O(N²) computational time by exploiting symmetries. In 1942, G. C. Danielson and Cornelius Lanczos published a version that used periodicity, doubling the number of elements at each step, to reduce the computational time to O(N log N).

The algorithm for the FFT commonly used today was devised by James Cooley and John Tukey in a paper published in 1965. It applies even when N is not a power of 2. The idea was originally attributed to Tukey after a meeting of President Kennedy’s Science Advisory Committee, in a discussion on how to detect nuclear tests by the Soviet Union with an extended set of sensors surrounding the country from outside. The FFT was needed to analyze the output of such sensors. Richard Garwin, discussing with Tukey, recognized that the original idea could have wider applications, not limited to the problem of the sensors. Cooley was working at IBM’s T.J. Watson Labs at the time, so Garwin passed the idea to him to implement. Cooley and Tukey published the paper within six months. Due to the unclear situation about a patent (Tukey did not work at IBM), the algorithm went into the public domain and established itself as an indispensable algorithm in digital signal processing. The story of the FFT is reported in Figure 7, which shows the relevance of the contribution by the Italian Francesco Carlini (Figure 8).
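The core of the radix-2 Cooley-Tukey scheme (essentially the Danielson-Lanczos doubling step) fits in a few lines of Python; this is an illustrative sketch for lengths that are powers of 2, not production code:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])            # DFT of the even-indexed samples
    odd = fft(x[1::2])             # DFT of the odd-indexed samples
    result = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        result[k] = even[k] + twiddle            # combine the two halves
        result[k + n // 2] = even[k] - twiddle
    return result

def dft(x):
    """Naive O(N^2) DFT, for comparison."""
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * m * k / n) for m in range(n))
            for k in range(n)]

x = [1.0, 2.0, 3.0, 4.0, 0.0, -1.0, -2.0, -3.0]
X_fast = fft(x)
X_slow = dft(x)
# The two spectra agree to numerical precision; the recursion costs
# O(N log N) operations instead of the O(N^2) of the naive double sum.
```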


Figure 7 - Table taken from M. T. Heideman, D. H. Johnson and C. S. Burrus, "Gauss and the history of the fast Fourier transform", IEEE ASSP Magazine, October 1984.


Figure 8 - Francesco Carlini

Further mathematical developments have recently been made with the conception of the sparse FFT, which has a computational load of order k log N for signals with only k significant frequency components, k being much smaller than N.

 Implementation in the 1970s

One of the authors (A.F.) remembers, from his first period of employment in the glorious Selenia S.p.A., the first implementation of an FFT board, reported in (Teodori, 1973). The first page is shown in Figure 9.


Figure 9 - First page of Teodori's article of 1973.


 Application to optical real-time SAR processing

The Fourier transform has been pivotal for image formation from Synthetic Aperture Radar (SAR) (McCandless, 2004). SAR history is summarized in Figure 10.


Figure 10 - SAR history in a glance.

SAR allows us to have a high-resolution map of the EM backscatter from an observed scene. More precisely, the radar data are acquired in polar coordinates, i.e., slant range and azimuth, while a two-dimensional image in rectangular coordinates (x, y) is provided. High resolution in slant range is obtained by transmitting a coded waveform, with a large value of the time-bandwidth product, and coherently processing the echo signals in a filter matched to the waveform. High resolution along the transversal direction is achieved by forming a synthetic aperture. This requires (i) putting the radar onboard a moving platform, e.g., an aircraft or a satellite; (ii) recording the EM signals from each scatterer that is illuminated by the moving antenna beam in successive instants of time; and (iii) coherently combining the signals, via a suitable azimuthal matched filter, thus focusing the sliding antenna pattern into a narrower synthetic beam.
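The slant-range pulse-compression step can be sketched in Python: a linear-FM (chirp) pulse is correlated with its replica via the FFT, compressing the long pulse into a narrow peak at the target delay (all parameter values below are illustrative, not from any real system):

```python
import numpy as np

fs = 1e6                 # sampling rate, Hz
T = 100e-6               # pulse duration: 100 microseconds
B = 200e3                # swept bandwidth: 200 kHz (time-bandwidth product 20)
t = np.arange(int(fs * T)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)   # linear-FM coded waveform

# Received echo: the chirp delayed inside a longer window, plus noise.
rng = np.random.default_rng(1)
n_rx = 1024
delay = 300              # true target position, in samples
rx = rng.normal(0, 0.1, n_rx) + 1j * rng.normal(0, 0.1, n_rx)
rx[delay:delay + len(chirp)] += chirp

# Matched filtering via the FFT: correlate rx with the chirp replica.
n_fft = n_rx + len(chirp) - 1
compressed = np.fft.ifft(np.fft.fft(rx, n_fft) * np.conj(np.fft.fft(chirp, n_fft)))
peak = int(np.argmax(np.abs(compressed)))
# The long pulse collapses into a sharp peak at the target delay.
```

The matched filter trades a long, low-peak-power transmission for a short, high-amplitude response, which is exactly why a large time-bandwidth product gives fine slant-range resolution.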

Radiometric resolution, another key parameter, is related to the capability of SAR to distinguish different objects in the scene on the basis of their EM reflectivity. Radiometric resolution determines how finely a sensor can distinguish between objects with similar EM reflection properties. It is a parameter of great importance, especially for those applications oriented to extended-target exploitation like polarimetry and classification. Thus, the radiometric resolution should be optimized mainly for good extended-target interpretation, accounting for all kinds of back-scatterers. Multi-look processing is commonly used in SAR image formation in order to reduce the speckle noise. Traditional digital multi-look processing consists of an incoherent addition of independent images (looks) of the same scene. The looks can be obtained by partitioning the available signal bandwidth (range and/or azimuth) and processing each look independently.

The final image is produced by adding the looks incoherently, pixel by pixel. The direct trade-off between geometric and radiometric resolution must be considered when choosing the number of looks for processing. One-look processing means a fully coherent use of the bandwidth (best geometric resolution), and in this case, the speckle noise will obey an exponential distribution where the standard deviation is equal to the mean value in the intensity image (multiplicative characteristic). For multi-look processing, the geometric resolution will degrade as the number of looks increases and the speckle statistics of the intensity image obey a gamma distribution, where the standard deviation decreases with the square root of the number of independent looks. 
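The statistics quoted above are easy to verify numerically; this Python sketch (on a synthetic scene of constant unit reflectivity, an assumption made here for illustration) checks that the std/mean ratio of the intensity image falls as the square root of the number of looks:

```python
import numpy as np

rng = np.random.default_rng(42)
n_pixels = 200_000   # synthetic "image" pixels, all with unit mean reflectivity

ratios = []
for n_looks in (1, 4, 16):
    # One-look intensity speckle is exponentially distributed (std = mean);
    # averaging n_looks independent looks yields a gamma distribution.
    looks = rng.exponential(1.0, size=(n_looks, n_pixels))
    image = looks.mean(axis=0)            # incoherent multi-look average
    ratios.append(image.std() / image.mean())
# The ratios fall as 1/sqrt(n_looks): about 1.0, 0.5 and 0.25.
```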

Figure 11 shows an optical real-time SAR processor.


Figure 11 - From 1970, optical real-time SAR processor (photo kindly provided by Dr. Dale Ausherman of General Dynamics Advanced Information Systems).

Application to digital real-time SAR processing

A bit of personal history through the experience of LABSAR. Across the years 1988-1992, one of us (A.F.) conceived, organized, and was appointed director of the SAR Laboratory (LABSAR) (Figure 12), a cross-company laboratory collecting the efforts of colleagues from Alenia and Alenia Space to build up technical competences in the field of SAR for remote sensing, for both civilian and defense application systems. Further achievements were obtained in SAR interferometry (Costantini, 1999) and fusion (Costantini, 1997) and (Simone, 2002).


 Figure 12 - A. Farina, F. Vinelli, “LABSAR: a center of excellence of SAR signal processing”. Alenia Technical Journal, 1993.

A key SAR processing algorithm today is the so-called ω-k algorithm, reported in Figure 13.



Figure 13 – Omega-K SAR Processing (Cafforio, 1991), (Cumming, 2003).

 Butler microwave circuit and application to IFF conformal array with multi-beams

The FFT has also played a key role at the radar antenna level. The legacy of radar systems runs across Selenia, Alenia, AMS, Selex and Leonardo. Achievements are in ATC (Air Traffic Control) systems, with products installed in 150 countries around the world, and in defense systems.

In microwave antennas, the embodiment of the FFT is the Butler matrix (Wikipedia, Butler matrix, n.d.).
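The correspondence can be illustrated with a few lines of Python (a sketch with illustrative numbers, not the circuit design): for a uniform linear array, a single FFT across the element signals produces the full set of simultaneous orthogonal beams that an n×n Butler matrix forms in hardware:

```python
import numpy as np

n = 8   # 8-element uniform linear array with half-wavelength spacing

# A plane wave impinging from angle theta0 off broadside produces an
# inter-element phase shift of pi*sin(theta0) at half-wavelength spacing.
theta0 = np.deg2rad(14.5)     # chosen near the port-1 beam, sin(theta0) ~ 0.25
element_signals = np.exp(1j * np.pi * np.arange(n) * np.sin(theta0))

# An n-port Butler matrix implements, up to phase conventions, the DFT
# across the array elements: one FFT yields n simultaneous beams.
beam_outputs = np.fft.fft(element_signals)
strongest_port = int(np.argmax(np.abs(beam_outputs)))
# Port m points where sin(theta) = 2*m/n, so this wave lights up port 1.
```

Each output port thus behaves like a fixed beam steered to one of n directions, which is exactly the multiple-beamforming behavior shown in Figure 14.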



Figure 14 - Multiple beamforming with Butler matrix.

Recent achievements on operational IFF (Identification Friend or Foe) systems are described in (Angelilli, 2017).




Figure 15 - Conformal antenna array exploiting the Butler matrix, implemented with state-of-the-art technology (courtesy of M. Angelilli, L. Infante and P. Pacifici).


Born of necessity during the development of the physics of heat and waves, the Fourier transform now lies close to the heart of calculus.

We have shown its link to the heat equation, to the Schrödinger equation, and to image processing (including digital photography, data compression and wavelets for blip signals, not discussed in this short overview) (Cochrane, 2016).



[1] In Latin, spectrum means “image” or “apparition”. In the 17th century, the word spectrum was introduced into optics by Isaac Newton, referring to the range of colours observed when white light was dispersed through a prism. Newton I. “A letter of Mr. Isaac Newton … containing his new theory about light and colours …”. Philosophical Transactions of the Royal Society of London. 1671;6(80):3075–3087. doi: 10.1098/rstl.1671.0072
The word “spectrum” to describe a band of colours that has been produced, by refraction or diffraction, from a beam of light first appears on p. 3076.




Angelilli, M., Infante, L., & Pacifici, P. (2017). A Family of Secondary Surveillance Radars based on Conformal Antenna Array Geometries. EDICON.

Cafforio, C., Prati, C., & Rocca, F. (1991). SAR Data Focusing Using Seismic Migration Techniques. IEEE Transactions on Aerospace and Electronic Systems, 27(2), 194-207.

Cochrane, R. (2016). The secret life of equations. The 50 greatest equations and how they work. Cassell Illustrated.

Costantini, M., Zirilli, F., & Farina, A. (1997). The fusion of different resolution SAR images. Proceedings of the IEEE, 85(1), 139-146.

Costantini, M., Zirilli, F., & Farina, A. (1999). A fast phase unwrapping algorithm for SAR interferometry. IEEE Transactions on Geoscience and Remote Sensing, 37(1), 452-460.

Cumming, G., Neo, Y., & Wong, F. (2003). Interpretations of the Omega-K Algorithm and Comparisons with Other Algorithms. In IGARSS 2003. 2003 IEEE International Geoscience and Remote Sensing Symposium. Proceedings (IEEE Cat. No.03CH37477) (pp. 1455-1458).

Farina, A., & Frasca, M. (2019). J.B. Fourier: The endless relevance of his scientific findings after 250 years from his birth. The Newsletter of the Department of Electronics Engineering, 1(1), 4-5.

Goertzel, G. (1958). An Algorithm for the Evaluation of Finite Trigonometric Series. American Mathematical Monthly, 65(1), 34–35.

Mathologer. (2018, July 6). Epicycles, complex Fourier series and Homer Simpson's orbit. Retrieved from YouTube:

McCandless, S., & Huxtable, B. (2004). Fundamentals of Synthetic Aperture Radar. Applied Technology Institute.

Simone, G., Farina, A., Morabito, F., Serpico, S., & Bruzzone, L. (2002). Image fusion techniques for remote sensing applications. Information Fusion, 3(1), 3-15.

Teodori, E. (1973). Analisi di Segnali con la FFT. Rivista Tecnica Selenia, 1(3), 1-5.

Wikipedia. (n.d.). Butler matrix. Retrieved from Wikipedia:

Wikipedia. (n.d.). Rydberg formula. Retrieved from Wikipedia: