By Yuseong Hong
This article features the Fourier transform, a crucial mathematical tool that processes signals by decomposing them into their separate component waves and, in its fast algorithmic forms, saves enormous amounts of computing time. The subject can be mathematically challenging, so this article offers a simplified explanation of how the concept was derived, along with its applications in real life and in the digital world as algorithms such as the Fast Fourier Transform and the Schönhage-Strassen Algorithm (abbreviated below as FFT and SSA).
But what is an algorithm? An algorithm is a sequence of mathematical instructions for carrying out a task, and a major goal in designing one is speed. Speed is key to computation when it comes to large amounts of data: faster methods win in tasks such as multiplying big matrices or compressing large files. One more prerequisite: since the Fourier transform deals with mathematical functions, the terms domain (the set of all possible input values, here usually time or frequency) and range (the set of all possible output values) are important for understanding what follows.
Now it is time to delve into the Fourier transform. In daily life, signals such as audio are sums of separate waves of many different frequencies. But how can the components of the mix be determined? The Fourier transform is a mathematical tool made to do exactly that.
Start by drawing the graph of a sound. The graph's domain is time, and its range is the 'strength' (amplitude) of the sound. Then the graph is wrapped around a circle in the complex plane. As the graph is wound around the origin, there is a specific winding frequency: how many times the graph wraps around per second. Draw a point G representing the center of mass of the wound-up graph. As the winding frequency changes, a quantity representing the 'energy' of that frequency (such as how far G is from the origin) spikes at specific values, and those values are exactly the frequencies of the components. When this relationship is plotted on a frequency-energy graph, the frequencies of the component waves become easy to read off.
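As a minimal sketch of this winding idea (the function name here is illustrative, not from any particular library), the center of mass for one winding frequency can be computed numerically:

```python
import numpy as np

def center_of_mass(signal, times, winding_freq):
    """Wind the signal around the origin of the complex plane at the given
    winding frequency, and return the center of mass G of the wound-up graph."""
    wound = signal * np.exp(-2j * np.pi * winding_freq * times)
    return wound.mean()  # |G| spikes when winding_freq matches a component

# A pure 3 Hz cosine: |G| is near 0.5 at 3 Hz and near 0 elsewhere.
t = np.linspace(0, 2, 4000, endpoint=False)
wave = np.cos(2 * np.pi * 3 * t)
print(abs(center_of_mass(wave, t, 3.0)), abs(center_of_mass(wave, t, 5.0)))
```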
With integration and the properties of complex numbers, and with frequency as the variable, it is possible to plot the result of the Fourier transform and deconstruct the signal into its separate waves.
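For reference, one standard convention writes the Fourier transform of a signal f(t) as:

$$\hat{f}(\xi) = \int_{-\infty}^{\infty} f(t)\, e^{-2\pi i \xi t}\, dt$$

Here the complex exponential e^{-2πiξt} performs the winding at frequency ξ, and the integral plays the role of the center of mass accumulated over the whole signal.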
This is the general Fourier transform. However, computers handle discrete data rather than continuous data: continuous signals such as sound are recorded as a sequence of separate samples. The DFT (Discrete Fourier Transform), the digitized form of the Fourier transform, converts discrete time- or space-domain data into frequency-domain data, and the IDFT (Inverse DFT) does the reverse. The FFT (Fast Fourier Transform) is an algorithm that makes the DFT and IDFT faster.
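Concretely, for N samples x₀, …, x_{N−1}, the standard definitions of the DFT and IDFT are:

$$X_k = \sum_{n=0}^{N-1} x_n\, e^{-2\pi i kn/N}, \qquad x_n = \frac{1}{N}\sum_{k=0}^{N-1} X_k\, e^{2\pi i kn/N}$$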
The 'strength' of each signal frequency is a sum of the products of the discrete samples with complex numbers (roots of unity) related to that frequency. A computer can perform this as a matrix operation to calculate the strength value for each frequency: the DFT arranges the samples and the roots of unity into matrices and computes their product. (If N is the length of the data, the samples form a 1×N row vector, the roots of unity an N×N matrix, and the product a 1×N vector of strengths.)
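A minimal sketch of this matrix form, using NumPy (the function name is illustrative):

```python
import numpy as np

def dft_by_matrix(x):
    """Naive O(N^2) DFT: build the NxN matrix of roots of unity and
    multiply it by the sample vector."""
    n = len(x)
    idx = np.arange(n)
    w = np.exp(-2j * np.pi * np.outer(idx, idx) / n)  # W[k, m] = e^{-2πi km/N}
    return w @ np.asarray(x, dtype=complex)           # N 'strength' values

# Matches NumPy's built-in FFT on the same data:
x = np.random.rand(8)
print(np.allclose(dft_by_matrix(x), np.fft.fft(x)))   # True
```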
Computing this by brute force becomes costly as the data grows longer. This is where the FFT steps in: there are arithmetic symmetries in the roots of unity that let a DFT be split into smaller DFTs, so in a recursive loop each step of the operation becomes simpler. This lowers the computational complexity from O(n²) to O(n log n). For example, with a billion samples, the FFT cuts the running time from roughly three decades to a few dozen seconds on a typical processor. In short, the recursion of the FFT speeds up the DFT, and its applications reach far beyond signal processing.
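A minimal sketch of that recursion (assuming, as the classic radix-2 Cooley-Tukey version does, that the length is a power of two):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT. len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])                 # DFT of even-indexed samples
    odd = fft(x[1::2])                  # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t            # symmetry: X[k] and X[k + n/2]
        out[k + n // 2] = even[k] - t   # reuse the same half-size results
    return out
```

The key saving is visible in the loop: one pair of half-size DFTs serves two output entries at once, which is exactly the symmetry that brings the cost down to O(n log n).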
Multiplying small numbers directly is easy for computers, but as the numbers get larger, schoolbook multiplication gets costly: for two n-digit numbers it takes O(n²) digit operations. The Schönhage-Strassen Algorithm shortens the time needed to multiply large numbers, with a complexity of O(n log n log log n).
Earlier algorithms used algebraic tricks to reduce the number of sub-multiplications, such as the Toom-2 (Karatsuba) algorithm, whose complexity is about O(n^1.58). SSA, however, recursively uses the FFT to 'skip' calculations by exploiting numerical symmetries: multiplying two numbers amounts to convolving their digit sequences, and convolution becomes cheap pointwise multiplication in the frequency domain.
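As a toy illustration of that idea (not SSA itself, which uses exact modular arithmetic rather than floating-point FFTs), two integers can be multiplied by convolving their digit sequences in the frequency domain:

```python
import numpy as np

def multiply_by_fft(a, b):
    """Multiply nonnegative integers via FFT convolution of their base-10
    digits. Floating-point rounding limits this toy to moderate sizes;
    SSA uses exact number-theoretic transforms instead."""
    da = [int(d) for d in str(a)][::-1]   # least-significant digit first
    db = [int(d) for d in str(b)][::-1]
    n = 1
    while n < len(da) + len(db):
        n *= 2                            # pad to a power of two for the FFT
    coeffs = np.rint(np.fft.ifft(np.fft.fft(da, n) * np.fft.fft(db, n)).real)
    total = 0
    for i, c in enumerate(coeffs.astype(np.int64)):
        total += int(c) * 10 ** i         # carry handling via Python integers
    return total

print(multiply_by_fft(123456789, 987654321) == 123456789 * 987654321)  # True
```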
Published in 1971 in the paper Schnelle Multiplikation großer Zahlen (Fast Multiplication of Large Numbers), SSA was the fastest known multiplication algorithm for around four decades, overtaking other algorithms' speeds somewhere beyond roughly 10,000 to 100,000 digits. (For smaller numbers, other algorithms such as Toom-Cook multiplication remain better.) More recently, an asymptotically faster O(n log n) algorithm by Harvey and van der Hoeven (2019) surpassed SSA, but it excels only for impractically large numbers.
Starting from the Fourier Transform, we’ve taken a look at mathematical concepts that may, at first glance, seem complicated and confusing.
But they all have in common the ability to handle signals, whether analog or digital, and with these algorithms, computers expedite the processing of many types of data.
Quantum computers are said to be fast, but what daily life practically needs is not those machines; it is your silicon processor working efficiently by following the instructions of well-designed algorithms.
From the Fourier Transform to the Schönhage-Strassen Algorithm, we've explored how these seemingly complex mathematical concepts make things work in the digital world. Thanks to the FFT, we enjoy enhanced audio quality, image compression, and faster encoding and decoding of information. And the SSA enables calculations with gigantic numbers and faster key generation for RSA (a method of cryptography).
Learn more about the Fourier Transform and other algorithms from interactive signal-processing simulations online, as they provide visual explanations that boost understanding. Those who want to read further documentation can follow the links in the 'Sources' tab.