Parabolic Peak Interpolation

Python. I applied parabolic interpolation to the spectral peak of the fundamental frequency after taking the FFT. The goal is to get a better estimate of the peak location by using the maximum spectral bin (of the fundamental) together with its left and right neighbour bins and fitting a parabola through those three frequency bins; the vertex of that parabola gives the refined frequency.
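As a quick sketch of the formula itself (illustrative only, separate from the full function below): if left, peak and right are the dB magnitudes of the three bins, the vertex of the fitted parabola sits at a fractional offset of 0.5 * (left - right) / (left - 2 * peak + right) bins from the centre bin, and the refined frequency is (peakBin + offset) * fs / N for an N-point FFT at sampling rate fs.

def parabolicOffset(leftDb, peakDb, rightDb):
    #Fractional bin offset (between -0.5 and 0.5) of the vertex of the
    #parabola through three equally spaced magnitude samples
    return 0.5 * (leftDb - rightDb) / (leftDb - 2 * peakDb + rightDb)

#Example: refining bin k of an N-point FFT sampled at fs Hz
#refinedFreq = (k + parabolicOffset(mXdb[k - 1], mXdb[k], mXdb[k + 1])) * fs / N

The full function below applies this to the fundamental of a WAV file.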

import os
import numpy as np
from scipy.io.wavfile import read

def CalculatePeakInterpolation(fileName, startTime=0.4, threshold=0.7):

    #Reading a wav file
    #fs: sampling rate of the wav file, x: integer sample array

    if not os.path.isfile(fileName):
        raise ValueError("Input file is not valid")

    fs, x = read(fileName)

    #We only accept mono and 44.1 kHz audio files for this implementation

    if len(x.shape) != 1:
        raise ValueError("Audio file is not mono")
    if fs != 44100:
        raise ValueError("Sampling rate of input sound is not 44100")
    
    xSegment = x[int(startTime * fs):]
    Xf = np.fft.fft(xSegment) / len(xSegment)             #frequency domain after fft
    mX = abs(Xf)
    mX = mX[:len(Xf) // 2]                                #magnitude of the first half
    mXdb = 20 * np.log10(mX + np.finfo(float).eps)        #magnitude in dB (eps avoids log(0))
    
    #FINDING FUNDAMENTAL FREQUENCY 

    thresholdValue = threshold * max(mXdb)

    bandwidth = 2                              #fallback if no bin reaches the threshold
    for i in range(len(mXdb)):
        if mXdb[i] >= thresholdValue:
            bandwidth = max(i, 2)              #rough bin index of the fundamental, first hit
            break

    start = int(bandwidth / 2)                 #start of the fundamental band
    finish = int(3 * bandwidth / 2)            #end of the fundamental band
    peakFreqBin = start + np.argmax(mX[start:finish])     #absolute bin index of the peak
    freq = fs * peakFreqBin / len(xSegment)    #bin spacing is fs / FFT size

    #FINDING INTERPOLATED FREQUENCY  
    
    lv = mXdb[peakFreqBin - 1]                 #dB magnitude of left bin
    pv = mXdb[peakFreqBin]                     #dB magnitude of peak bin
    rv = mXdb[peakFreqBin + 1]                 #dB magnitude of right bin
    p = 0.5 * (lv - rv) / (lv - 2 * pv + rv)   #fractional bin offset of the parabola vertex
    ifreq = (peakFreqBin + p) * fs / len(xSegment)

    print("Fundamental frequency:", freq, "Hz")
    print("Interpolated frequency:", ifreq, "Hz")

    return freq, ifreq
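A minimal way to call the function (strings.wav is only a placeholder name for the kind of file I tested with; any mono 44.1 kHz WAV should work):

if __name__ == "__main__":
    #Placeholder file name; replace with any mono, 44.1 kHz recording
    fundamental, interpolated = CalculatePeakInterpolation("strings.wav", startTime=0.4, threshold=0.7)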

I used a short strings fragment to test. The second and/or third harmonic can sometimes be louder than the fundamental, for example in some trumpet notes or guitar string effects; that is why I implemented the fundamental-frequency-finding stage first instead of simply taking the loudest bin in the whole spectrum. In this audio file the fundamental comes out at around 1180.3144 Hz, and after interpolation it shows 1180.3156 Hz. However, this implementation only works for a single sustained (pedal) note, such as the instrument sound I used. In conclusion, this is a rather weak implementation of the idea, and it would be good to apply the interpolation technique in other kinds of spectral manipulations.
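To get a feel for how much the interpolation helps, a rough sanity check (just a sketch, not part of the test above) is to synthesize a sine whose frequency falls between two FFT bins and compare the raw bin estimate with the interpolated one:

import numpy as np

fs = 44100
N = 44100                                      #one-second analysis window, 1 Hz bin spacing
trueFreq = 440.25                              #deliberately off the bin grid
t = np.arange(N) / fs
x = np.sin(2 * np.pi * trueFreq * t)

mX = abs(np.fft.fft(x) / N)[:N // 2]
mXdb = 20 * np.log10(mX + np.finfo(float).eps)

k = np.argmax(mX)                              #coarse estimate: the loudest bin
p = 0.5 * (mXdb[k - 1] - mXdb[k + 1]) / (mXdb[k - 1] - 2 * mXdb[k] + mXdb[k + 1])

print("Bin estimate:         ", k * fs / N, "Hz")
print("Interpolated estimate:", (k + p) * fs / N, "Hz")

With a rectangular window the parabola only roughly matches the spectral lobe, so the refinement is modest rather than exact.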

References:

[1] Mathematics of the Discrete Fourier Transform, Julius O. Smith III

[2] Improving FFT Frequency Measurement Resolution by Parabolic and Gaussian Interpolation, M. Gasior, J.L. Gonzalez

[3] Is It Possible for a Harmonic to Be Louder than the Fundamental Frequency?