Final 4,5,6
M = 4034                                     # filter length
w = signal.windows.kaiser(M, beta=4.088)     # Kaiser window
# Windowed-sinc lowpass impulse response with cutoff pi/40 (for decimation by 40)
h = (np.sin(np.pi*(np.arange(M) - int(M/2))/40))/(np.pi*(np.arange(M) - int(M/2)))
h[int(M/2)] = 1/40                           # fix the 0/0 sample at the centre
plt.plot(h)
plt.title("Filter Impulse Response: Kaiser Window (beta=4.088)")
plt.ylabel("Amplitude")
plt.xlabel("Time (Sample Index)")
plt.show()
h_padded = np.zeros(fs)                      # fs: sampling rate defined earlier (8000 Hz in Part B)
h_filtered = h * w                           # apply the Kaiser window to the sinc
h_padded[0:M] = h_filtered
H = np.fft.fft(h_padded)
plt.stem(np.abs(H))
plt.title("Magnitude Response of Decimator Filter")
plt.ylabel("Magnitude")
plt.xlabel("Frequency (Hz)")
plt.show()
Output:
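For context (these design formulas are not part of the report, but the chosen values are consistent with them): Kaiser window parameters are normally obtained from the empirical rules beta = 0.5842(A - 21)^0.4 + 0.07886(A - 21) for 21 dB < A < 50 dB, and M ≈ (A - 8)/(2.285·Δω), where A is the stopband attenuation in dB and Δω the transition width in rad/sample; beta = 4.088 corresponds to A of roughly 46 dB.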
Code:
y = np.convolve(x, h_filtered, mode='same')   # x: the input signal defined earlier in the report
plt.plot(y)
Code:
M_s = M // 2                                  # first-stage (prototype) filter length
w_s = signal.windows.kaiser(M_s, beta=4.088)
# Windowed-sinc lowpass with cutoff pi/20 (twice the final cutoff; it is stretched by 2 below)
s = (np.sin(np.pi*(np.arange(M_s) - int(M_s/2))/20))/(np.pi*(np.arange(M_s) - int(M_s/2)))
s[int(M_s/2)] = 1/20                          # fix the 0/0 sample at the centre
plt.plot(s)
plt. tle("Impulse Response of First Stage Filter: Kaiser Window (beta=4.088)")
plt.ylabel("Amplitude")
plt.xlabel("Sample Index")
plt.show()
s_filtered = s * w_s
S = np.fft.fft(s_filtered)
plt.stem(np.abs(S))
plt.title("Magnitude Response of First Stage Filter")
plt.ylabel("Magnitude")
plt.xlabel("Frequency (Hz)")
plt.show()
s_upsampled = np.zeros(fs)
for i in range(int(M_s)):
    s_upsampled[2*i] = s_filtered[i]          # zero-stuff by 2: stretches the prototype to cutoff pi/40
S_upsampled = np.fft.fft(s_upsampled)
plt.stem(np.abs(S_upsampled)[:150])
plt. tle("Magnitude Response of Upsampled First Stage Filter")
plt.ylabel("Magnitude")
plt.xlabel("Frequency (in Hz)")
plt.show()
H_stage2 = S
plt.stem(np.abs(H_stage2))
plt. tle("Magnitude Response of Second Stage Filter")
plt.ylabel("Magnitude")
plt.xlabel("Frequency (in Hz)")
plt.show()
Outputs:
F) Process the input signal by the two-stage IFIR decimator. Plot the output of the
decimator and compare it with that of part (2).
Code:
y_stage1 = np.convolve(x, s_filtered, mode='same')            # stage 1: prototype filter (cutoff pi/20)
y_stage2 = np.convolve(y_stage1, s_upsampled, mode='same')    # stage 2: zero-stuffed (stretched) filter
plt.plot(y_stage2)
plt.title("Output of the Two-Stage IFIR Decimator")
plt.ylabel("Amplitude")
plt.xlabel("Sample Index")
plt.show()
Output:
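The convolutions above apply both filter stages but do not include the sample-rate reduction itself; a minimal sketch of that final step, assuming the decimation factor of 40 implied by the pi/40 cutoff, would be:
# Keep every 40th sample of the two-stage output (decimation factor assumed to be 40)
y_decimated = y_stage2[::40]
plt.plot(y_decimated)
plt.title("Decimated Output (Two-Stage IFIR)")
plt.show()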
Part – B:
Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
sampling_freq = 8000
freq1 = 50
freq2 = 500
freq3 = 1000
time_samples = np.arange(0, sampling_freq)
input_signal = np.sin(2*np.pi*freq1*time_samples/sampling_freq) + np.sin(2*np.pi*freq2*time_samples/sampling_freq) + np.sin(2*np.pi*freq3*time_samples/sampling_freq)
interpolation_factor = 40
filter_length = 4034
kaiser_window = signal.windows.kaiser(filter_length, beta=4.088)
# Windowed-sinc lowpass with cutoff pi/interpolation_factor
filter_coeffs = (np.sin(np.pi*(np.arange(filter_length) - int(filter_length/2))/interpolation_factor))/(np.pi*(np.arange(filter_length) - int(filter_length/2)))
filter_coeffs[int(filter_length/2)] = 1/interpolation_factor
padded_filter = np.zeros(sampling_freq)
windowed_filter = filter_coeffs * kaiser_window
padded_filter[0:filter_length] = windowed_filter
frequency_response = np.fft.fft(padded_filter)
plt.plot(np.real(upsampled_filtered))         # upsampled_filtered: presumably the zero-stuffed, filtered signal (its construction is not shown here)
plt.title("Output of the Standard Interpolator")
plt.ylabel("Amplitude")
plt.xlabel("Sample Index")
plt.show()
return padded_filter                          # belongs to the design_filter helper called below
stretch_factor = 20
filter_coeffs = design_filter(sampling_freq, filter_order, stretch_factor, window_coeffs)
frequency_response = np.fft.fft(filter_coeffs)
plt.plot(np.real(upsampled_filtered))
plt.title("Output of the Two-Stage IFIR Interpolator")
plt.ylabel("Amplitude")
plt.xlabel("Sample Index")
plt.show()
Output:
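Because the design_filter helper and the code that produces upsampled_filtered are not reproduced above, the following is only a rough sketch of how the standard (single-stage) interpolation step is usually written; the gain of L is an assumption, and it reuses input_signal and windowed_filter from the code above:
L = interpolation_factor                               # 40, as set above
upsampled = np.zeros(L * len(input_signal))
upsampled[::L] = input_signal                          # insert L-1 zeros between input samples
upsampled_filtered = L * np.convolve(upsampled, windowed_filter, mode='same')   # lowpass at pi/L removes the spectral images
plt.plot(upsampled_filtered)
plt.show()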
Conclusion:
The results obtained from the conventional decimation method and the IFIR (Interpolated Finite Impulse Response) decimation technique are comparable. However, the IFIR approach requires fewer computational resources than the traditional method. Consequently, IFIR decimation offers a more efficient alternative for decimation operations.
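As a rough, generic illustration of why (not a measurement from this experiment): a direct FIR lowpass meeting a given specification needs about N multiplications per output sample, whereas the IFIR factorization H(z) ≈ G(z^L)·I(z) designs the prototype G(z) for an L-times wider transition band, so it has roughly N/L taps, and its stretched form G(z^L) still has only N/L non-zero coefficients; the total cost therefore falls to roughly N/L plus the length of the image-suppressing filter I(z).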
Experiment – 5
ALLAMSETTI JAYARAM BT21ECE052
signal_length = 4000
time_vector = np.arange(0, 1, 4/signal_length)
synthetic_signal = np.zeros(signal_length)
plt.plot(synthetic_signal)
plt.plot(np.abs(np.fft.fft(synthetic_signal)))
block_size = 400
gaussian_param = 0.01
# fragment of the stft_jaya helper (its full definition is not shown here)
index += block_size
return stft_matrix
num_freqs = 800
stft_result = stft_jaya(synthetic_signal, block_size, num_freqs, window_func)
fig = plt.figure()
plot_axis = fig.add_subplot(111, projection='3d')
plot_axis.plot_surface(freq_indices, time_indices, np.abs(stft_result), cmap='viridis')
plot_axis.set_xlabel('Frequency')
plot_axis.set_ylabel('Time')
plot_axis.set_zlabel('Magnitude')
plt.show()
speech_block_size = 3000
speech_max_freq = 4000
speech_window = gaussian_window(gaussian_param, speech_block_size)
stft_speech = stft_jaya(audio_data, speech_block_size, speech_max_freq, speech_window)
plot_axis.set_ylabel('Time')
plot_axis.set_zlabel('Magnitude')
Outputs:
Conclusion:
The Short-Time Fourier Transform (STFT) is employed for analyzing non-stationary signals, as it enables the identification of time-varying frequency components that cannot be discerned through the traditional Fourier Transform alone. The selection of the block size and window function can significantly influence the visual representation of the STFT plot.
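Since neither the gaussian_window helper nor the body of stft_jaya survives in the code above, the following self-contained sketch shows one way such a block-wise STFT is commonly written; the window definition, the non-overlapping blocking, and the meshgrid step are assumptions rather than the report's exact code:
import numpy as np

def gaussian_window(alpha, length):
    # Gaussian window centred on the block; alpha controls how fast it decays (assumed form)
    n = np.arange(length) - (length - 1) / 2
    return np.exp(-alpha * n**2)

def stft_blockwise(x, block_size, num_freqs, window):
    # Non-overlapping blocks: window each block, zero-pad to num_freqs, take the FFT
    num_blocks = len(x) // block_size
    stft_matrix = np.zeros((num_blocks, num_freqs), dtype=complex)
    index = 0
    for b in range(num_blocks):
        block = x[index:index + block_size] * window
        stft_matrix[b, :] = np.fft.fft(block, n=num_freqs)
        index += block_size
    return stft_matrix

# Example usage with the synthetic-signal parameters used above
window_func = gaussian_window(0.01, 400)
freq_indices, time_indices = np.meshgrid(np.arange(800), np.arange(4000 // 400))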
Experiment – 6
ALLAMSETTI JAYARAM BT21ECE052
import numpy as np
import librosa
from IPython.display import Audio             # needed for the Audio(...) playback calls below
Audio(data=audio_data, rate=sampling_rate)    # audio_data and sampling_rate come from librosa.load (call not shown)
audio_data = np.array(audio_data)
signal_length = len(audio_data)
if signal_length % 2 == 1:
    audio_data = audio_data[:-1]              # assumed: drop the last sample so the length is even for the Haar transform
    signal_length = len(audio_data)
# Add zero-mean Gaussian noise
plt.plot(noisy_audio)
Audio(data=noisy_audio, rate=sampling_rate)
wavelet_coeffs = np.zeros(signal_length)
plt.plot(wavelet_coeffs)
Output:
Code:
plt.plot(wavelet_coeffs)
reconstructed_signal = np.zeros(signal_length)
idx = 0
idx += 1
plt.plot(reconstructed_signal)
Audio(data=reconstructed_signal, rate=sampling_rate)
Output:
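Only fragments of the noise-addition, Haar decomposition, thresholding, and reconstruction code survive above; the following is a minimal sketch of a single-level Haar analysis/synthesis with soft thresholding, a plausible reading of the procedure rather than the report's exact code (the threshold value is illustrative):
import numpy as np

def haar_dwt_level1(x):
    # Single-level Haar DWT: pairwise averages (approximation) and differences (detail)
    x = x[: len(x) - (len(x) % 2)]                 # make the length even
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt_level1(approx, detail):
    # Inverse of the transform above
    x = np.zeros(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def soft_threshold(c, t):
    # Shrink coefficients toward zero by t; small (noisy) coefficients become exactly zero
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

# Denoise by thresholding only the high-frequency (detail) coefficients:
# approx, detail = haar_dwt_level1(noisy_audio)
# reconstructed_signal = haar_idwt_level1(approx, soft_threshold(detail, t=0.05))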
Image data:
a. Import an image (convert to grayscale if it is a colour image) and add zero-mean Gaussian noise with a standard deviation of 20.
b. Perform level-1 DWT decomposition using the Haar wavelet.
c. Suppress noise using a hard/soft threshold and reconstruct the image.
d. Compute the following: PSNR (original_grayscale_image, noisy_image) and
e. PSNR (original_grayscale_image, denoised_image)
Code:
import cv2
import numpy as np
import pywt
# image to grayscale
img = cv2.imread('image.jpg')
# To resize the denoised image to match the original grayscale image size
cv2.imshow('Original', gray)
cv2.imshow('Noisy', noisy_img.astype(np.uint8))
cv2.imshow('Denoised', denoised_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Outputs:
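The noise-addition, DWT, thresholding, and PSNR steps are not visible in the extract above; using the same cv2, numpy, and pywt imports, steps (a)-(e) could be sketched as follows (the threshold value is illustrative):
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float64)

# (a) zero-mean Gaussian noise, standard deviation 20
noisy_img = gray + np.random.normal(0, 20, gray.shape)

# (b) level-1 Haar DWT, (c) soft-threshold the detail subbands and reconstruct
cA, (cH, cV, cD) = pywt.dwt2(noisy_img, 'haar')
thr = 30                                               # illustrative threshold
cH, cV, cD = (pywt.threshold(c, thr, mode='soft') for c in (cH, cV, cD))
denoised_img = pywt.idwt2((cA, (cH, cV, cD)), 'haar')
denoised_img = cv2.resize(denoised_img, (gray.shape[1], gray.shape[0]))   # match the original size

# (d), (e) peak signal-to-noise ratio against the clean grayscale image
def psnr(reference, test, peak=255.0):
    mse = np.mean((reference - test) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

print("PSNR noisy   :", psnr(gray, noisy_img))
print("PSNR denoised:", psnr(gray, denoised_img))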
Conclusion:
The application of the Haar transform to audio signals decomposes the signal into low- and high-frequency components, enabling the mitigation of high-frequency noise through the implementation of hard or soft thresholding techniques.