Digital Signal Processing Operations

                                Digital Signal Processing (DSP)



Digital Signal Processing is concerned with the representation of signals by sequences of numbers or symbols and with the processing of those signals. Digital and analog signal processing are subfields of signal processing. DSP includes subfields such as audio and speech signal processing, sonar and radar signal processing, sensor array processing, spectral estimation, statistical signal processing, digital image processing, signal processing for communications, control of systems, biomedical signal processing, and seismic data processing.


The aim of DSP is generally to measure, filter, and/or compress continuous real-world analog signals. Sampling an analog signal with an analog-to-digital converter (ADC) converts it to digital form, turning the analog signal into a stream of numbers. Often, however, the required output is another analog signal, which requires a digital-to-analog converter (DAC). Although this process is more complicated than analog processing and works with a discrete value range, the application of computational power to digital signal processing offers advantages over analog processing in many applications, such as error detection and correction in transmission as well as data compression.


 Signal Statistics                  

This function calculates the following parameters:     

Arithmetic Mean- The sum of the values of a variable divided by the number of values is called the arithmetic mean:

mean = (x1 + x2 + … + xn) / n


Variance- Variance measures how far the values lie from the mean in any distribution:

Var(X) = E[(X − E[X])²]

(Where X is any random variable, E[X] is its mean, and Var(X) its variance.)

Mode- The value that occurs most frequently in any distribution curve is called its mode.

RMS- The root mean square, also known as the quadratic mean, is a statistical measure of the magnitude of a varying quantity:

RMS = √((x1² + x2² + … + xn²) / n)

Standard deviation- Standard deviation measures how much a distribution is spread. It is the square root of the variance:

σ = √(E[(x − µ)²])

(Where x is any random variable, µ or E(x) is its mean, and σ its standard deviation.)

Skewness- Skewness is a measure of the asymmetry of the probability distribution of a random variable:

γ1 = E[((x − µ)/σ)³]

(Where x is any random variable, µ or E(x) is its mean, σ its standard deviation, and γ1 is the third standardized moment, or skewness.)

Kurtosis- Kurtosis is a measure of the peakedness of a curve and is the normalized form of the fourth standardized moment:

Kurt(x) = E[((x − µ)/σ)⁴]

(Where x is any random variable, µ or E(x) is its mean, σ its standard deviation, and Kurt(x) is the fourth standardized moment.)

Median- The median is the numeric value separating the higher half of a sample from the lower half.
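All of these statistics can be computed directly from their definitions. The sketch below uses NumPy and the standard library; the data values are made up for illustration:

```python
import numpy as np
from statistics import mode

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # illustrative data

mean = np.mean(x)                         # arithmetic mean: 5.0
var = np.var(x)                           # population variance: 4.0
std = np.std(x)                           # standard deviation: 2.0
rms = np.sqrt(np.mean(x**2))              # root mean square
med = np.median(x)                        # median: 4.5
skew = np.mean(((x - mean) / std) ** 3)   # third standardized moment
kurt = np.mean(((x - mean) / std) ** 4)   # fourth standardized moment

assert mean == 5.0 and var == 4.0 and std == 2.0
assert mode(x) == 4.0                     # most frequent value
```

Note that NumPy's `var` and `std` compute the population (not sample) forms by default.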



Convolution is a mathematical way of combining two signals to form a third signal, defined as the integral of the product of the two functions after one is reversed and shifted.

Convolution of two signals f and g is given as:

(f * g)(t) = ∫ f(τ) g(t − τ) dτ, integrated over all τ
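In discrete time the same operation can be computed directly, for example with NumPy (the two short sequences are made up for illustration):

```python
import numpy as np

# Discrete convolution: (f * g)[n] = sum over k of f[k] * g[n - k]
f = np.array([1.0, 2.0, 3.0])
g = np.array([0.0, 1.0, 0.5])
y = np.convolve(f, g)          # full convolution, length len(f) + len(g) - 1

assert np.allclose(y, [0.0, 1.0, 2.5, 4.0, 1.5])
```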


Cross Correlation- In signal processing, cross-correlation is a measure of how similar two waveforms are as a function of a time lag applied to one of them. For continuous functions f and g, the cross-correlation is defined as:

(f ⋆ g)(τ) = ∫ f*(t) g(t + τ) dt, integrated over all t

(Where f* denotes the complex conjugate of f.)

Autocorrelation is the cross-correlation of a signal with itself. Given a signal f(t), the continuous autocorrelation Rff(τ) is most often defined as the continuous cross-correlation integral of f(t) with itself, at lag τ:

Rff(τ) = ∫ f*(t) f(t + τ) dt, integrated over all t
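A common use of cross-correlation is estimating the delay between a signal and a shifted copy of it: the lag at which the correlation peaks is the delay. A NumPy sketch (the test pulse is made up for illustration):

```python
import numpy as np

x = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0])  # a short pulse
y = np.roll(x, 2)                          # the same pulse delayed by 2 samples

# Full cross-correlation; output lags run from -(N-1) to +(N-1)
corr = np.correlate(y, x, mode='full')
lag = np.argmax(corr) - (len(x) - 1)       # lag of the correlation peak
assert lag == 2                            # recovers the 2-sample delay

# Autocorrelation (a signal correlated with itself) peaks at zero lag
auto = np.correlate(x, x, mode='full')
assert np.argmax(auto) == len(x) - 1
```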


 Power Spectrum

The power spectrum shows which frequencies contain the signal's power, by plotting the distribution of power values as a function of frequency, where "power" is the mean-squared value of the signal. For a given signal, the power spectrum gives a plot of the portion of the signal's power (energy per unit time) falling within given frequency bins. The most common way of generating a power spectrum is with a discrete Fourier transform, but other techniques, such as the maximum entropy method, can also be used.
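A minimal periodogram sketch with NumPy (the 50 Hz test tone and the 1000 Hz sampling rate are assumptions for illustration):

```python
import numpy as np

fs = 1000                                   # sampling rate in Hz (assumed)
t = np.arange(fs) / fs                      # one second of samples
x = 3 * np.sin(2 * np.pi * 50 * t)          # 50 Hz tone

X = np.fft.rfft(x)                          # DFT of the real signal
power = np.abs(X)**2 / len(x)               # power in each frequency bin
freqs = np.fft.rfftfreq(len(x), 1 / fs)     # bin centre frequencies

peak = freqs[np.argmax(power)]              # frequency of the strongest bin
assert peak == 50.0
```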




Histograms are used to plot the density of data, and also for density estimation, i.e., estimating the probability density function of the underlying variable. The total area of a histogram used for probability density is always normalized to 1. If the intervals on the x-axis all have length 1, then a histogram is identical to a relative frequency plot.
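The normalization property is easy to check with NumPy's histogram routine (the Gaussian sample data are generated just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal(10_000)          # illustrative sample

# density=True normalizes so the total area under the histogram is 1
counts, edges = np.histogram(data, bins=50, density=True)
widths = np.diff(edges)                     # width of each bin
area = np.sum(counts * widths)

assert abs(area - 1.0) < 1e-9
```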

 FFT (Fast Fourier Transform)  

An FFT computes the DFT and gives exactly the same results as evaluating the DFT definition directly; the only difference is that an FFT is much faster. The basic idea is to break up a transform of length N into two transforms of length N/2 by using the identity

X(k) = E(k) + e^(−2πik/N) · O(k)

where E(k) and O(k) are the length-N/2 DFTs of the even- and odd-indexed samples.

Since the Fourier transform gives information about the frequency content of a signal, it is used to find which frequencies are present in a stationary signal, i.e., a signal whose frequency content does not change with time.
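The even/odd split described above can be sketched as a recursive radix-2 FFT in NumPy (illustrative only; production FFT libraries use optimized iterative implementations):

```python
import numpy as np

def fft_radix2(x):
    # Cooley-Tukey: split into even/odd halves, recombine with twiddle factors.
    # Requires len(x) to be a power of two.
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)
    even = fft_radix2(x[0::2])              # DFT of even-indexed samples
    odd = fft_radix2(x[1::2])               # DFT of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

x = np.random.default_rng(1).standard_normal(64)
assert np.allclose(fft_radix2(x), np.fft.fft(x))   # agrees with NumPy's FFT
```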


DCT (Discrete Cosine Transform)

A Discrete Cosine Transform (DCT) represents a sequence of finitely many data points in terms of a sum of cosine functions oscillating at different frequencies. DCTs are important for various applications in science and engineering, from  lossy compression of  audio and images (where small high-frequency components can be discarded), to  spectral methods for the numerical solution of partial differential equations. The use of cosine rather than sine functions is important in these applications: for compression, it turns out that cosine functions are much more efficient (as explained below, fewer are needed to approximate a typical  signal), whereas for differential equations the cosines express a particular choice of  boundary conditions.


The Discrete Cosine Transform DCT {X} of a sequence X is defined (in the common DCT-II form) by:

yk = Σ xn cos(π k (2n + 1) / (2N)), summed over n = 0, 1, …, N − 1, for k = 0, 1, …, N − 1

(Where N is the length of X, xn is the nth element of X, and yk is the kth element of DCT {X}.)

DST- The Discrete Sine Transform DST {X} of a sequence X is defined (in one common form) by:

yk = Σ xn sin(π (n + 1)(k + 1) / (N + 1)), summed over n = 0, 1, …, N − 1

(Where N is the length of the input sequence X, xn is the nth element of the input sequence X, and yk is the kth element of the output sequence DST {X}.)
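A naive DCT-II straight from the definition can be sketched in NumPy (O(N²), illustrative only; note that a constant sequence puts all of its energy in the k = 0, or DC, coefficient):

```python
import numpy as np

def dct_ii(x):
    # DCT-II as written above (assumed unnormalized form): y[k] = sum over n
    # of x[n] * cos(pi * k * (2n + 1) / (2N))
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * k * (2 * n + 1) / (2 * N)))
                     for k in range(N)])

x = np.ones(8)                      # constant sequence
y = dct_ii(x)
assert np.isclose(y[0], 8.0)        # all the energy lands in the DC term
assert np.allclose(y[1:], 0.0, atol=1e-12)
```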


 Hilbert Transform

In mathematics and in signal processing, the Hilbert transform is a linear operator that takes a function u(t) and produces a function H(u)(t) with the same domain. This transform is named after David Hilbert, who first introduced the operator to solve a special case of the Riemann–Hilbert problem for holomorphic functions. It is a basic tool in Fourier analysis, and provides a concrete means for determining the conjugate of a given function or Fourier series. Furthermore, in harmonic analysis, it is an example of a singular integral operator, and of a Fourier multiplier. The Hilbert transform is also important in the field of signal processing, where it is used to derive the analytic representation of a signal u(t).

The Hilbert transform can be thought of as the convolution of u(t) with the function h(t) = 1/(πt), and is given by:

H(u)(t) = (1/π) p.v. ∫ u(τ) / (t − τ) dτ, a Cauchy principal-value integral over all τ
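In practice the analytic signal is usually computed in the frequency domain by zeroing the negative frequencies. A NumPy sketch (this is the standard FFT-based construction; the 8-cycle cosine test signal is made up for illustration):

```python
import numpy as np

def analytic_signal(x):
    # Analytic representation via the FFT: keep DC (and Nyquist) as-is,
    # double the positive frequencies, zero the negative ones.
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1
    if N % 2 == 0:
        h[N // 2] = 1
        h[1:N // 2] = 2
    else:
        h[1:(N + 1) // 2] = 2
    return np.fft.ifft(X * h)

t = np.arange(256) / 256
x = np.cos(2 * np.pi * 8 * t)              # whole number of cycles
z = analytic_signal(x)
# The imaginary part is the Hilbert transform: cos maps to sin
assert np.allclose(z.imag, np.sin(2 * np.pi * 8 * t), atol=1e-9)
```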


 Wavelet Transform

In numerical and functional analysis, a Discrete Wavelet Transform (DWT) is any wavelet transform for which the wavelets are discretely sampled. As with other wavelet transforms, a key advantage it has over Fourier transforms is temporal resolution, i.e., it captures both frequency and location information (location in time).


The wavelet transform replaces the Fourier transform's sinusoidal waves with a family generated by translations and dilations of a window called a wavelet. Two types of wavelets are used: orthogonal (Haar, Daubechies (dbxx), Coiflets (coifx), Symmlets (symx)) and biorthogonal (Biorthogonal (biorx_x), including FBI (bior4_4 (FBI))), where x indicates the order of the wavelet. The higher the order, the smoother the wavelet.
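The simplest orthogonal wavelet listed above, the Haar wavelet, makes a good one-level DWT sketch: normalized pairwise sums give the approximation coefficients and pairwise differences give the details, with perfect reconstruction (the sample data are made up for illustration):

```python
import numpy as np

def haar_dwt(x):
    # One level of the Haar DWT: approximation = scaled pairwise sums,
    # detail = scaled pairwise differences. Assumes even length.
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    # Inverse: interleave the reconstructed pairs.
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt(x)
assert np.allclose(haar_idwt(a, d), x)     # perfect reconstruction
# Orthogonality preserves energy across the transform
assert np.isclose(np.sum(a**2) + np.sum(d**2), np.sum(x**2))
```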

 Chirp-Z Transform

Bluestein's FFT algorithm, commonly known as the chirp z-transform algorithm, is a fast Fourier transform (FFT) algorithm that computes the discrete Fourier transform (DFT) of arbitrary sizes (including prime sizes) by re-expressing the DFT as a convolution.
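A minimal sketch of Bluestein's re-expression in NumPy follows; the function name czt_dft is ours, and real implementations pad the convolution to a fast length instead of calling np.convolve directly. The identity nk = (n² + k² − (k − n)²)/2 turns the DFT sum into a convolution with a "chirp" sequence:

```python
import numpy as np

def czt_dft(x):
    # Bluestein: DFT of arbitrary length N via one convolution with a chirp.
    N = len(x)
    n = np.arange(N)
    w = np.exp(-1j * np.pi * n**2 / N)       # chirp e^(-i*pi*n^2/N)
    a = x * w                                 # pre-multiplied input
    m = np.arange(-(N - 1), N)                # indices -(N-1) .. (N-1)
    b = np.exp(1j * np.pi * m**2 / N)         # conjugate chirp kernel
    conv = np.convolve(a, b)                  # full convolution, length 3N-2
    # Output k corresponds to convolution index k + (N - 1)
    return w * conv[N - 1:2 * N - 1]

x = np.random.default_rng(0).standard_normal(7)   # prime length
assert np.allclose(czt_dft(x), np.fft.fft(x))
```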

 IIR (Infinite Impulse Response) Filters

Infinite Impulse Response (IIR) is a property of signal processing systems. Systems with this property are called IIR systems or, when working with filter systems, IIR filters. IIR systems have an impulse response that is non-zero over an infinite length of time, in contrast to Finite Impulse Response (FIR) filters, whose impulse responses have a fixed, finite duration. The simplest analog IIR filter is an RC filter, made up of a single resistor (R) feeding into a node shared with a single capacitor (C). This filter has an exponential impulse response characterized by an RC time constant.

IIR filters may be implemented as either analog or digital filters. Digital IIR filters are implemented in terms of the difference equation given below:

y[n] = (1/a0) · (b0 x[n] + b1 x[n−1] + … + bP x[n−P] − a1 y[n−1] − … − aQ y[n−Q])

where:

P is the feedforward filter order,                                                                                                        

bi are the feedforward filter coefficients,                                                                                              

Q is the feedback filter order,                                                                                                       

ai are the feedback filter coefficients,                                                                                                 

x[n] is the input signal and                                                                                                         

y[n] is the output signal
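The difference equation above can be implemented directly, as in this NumPy sketch (Direct Form I; the first-order low-pass coefficients are an arbitrary example, not part of the original text):

```python
import numpy as np

def iir_filter(b, a, x):
    # Direct-Form I evaluation of the IIR difference equation:
    # y[n] = (1/a[0]) * (sum b[i] x[n-i] - sum a[j] y[n-j], j >= 1)
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[i] * x[n - i] for i in range(len(b)) if n - i >= 0)
        acc -= sum(a[j] * y[n - j] for j in range(1, len(a)) if n - j >= 0)
        y[n] = acc / a[0]
    return y

# Example: first-order low-pass y[n] = 0.1 x[n] + 0.9 y[n-1]
b, a = [0.1], [1.0, -0.9]
step = np.ones(100)
y = iir_filter(b, a, step)
# The step response settles toward the DC gain 0.1 / (1 - 0.9) = 1.0
assert abs(y[-1] - 1.0) < 1e-3
```

Because of the feedback term, the impulse response (here 0.1 · 0.9ⁿ) never reaches exactly zero, which is what "infinite impulse response" means.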


 FIR (Finite Impulse Response) Filters

Finite Impulse Response (FIR) filters are one of the basic filter types used in Digital Signal Processing. They are called finite because, having no feedback, their impulse response settles to zero in a finite time. To study any filter we plot two graphs: the magnitude response curve (magnitude vs. frequency) and the phase response curve (phase vs. frequency).

The difference equation that defines the output of an FIR filter in terms of its input is:

y[n] = b0 x[n] + b1 x[n−1] + … + bN x[n−N]

where:

x[n] is the input signal,                                                                                                                                     

 y[n] is the output signal,                                                                                                                              

bi are the filter coefficients, also known as tap weights,                                                                           

and N is the filter order - an Nth-order filter has (N + 1) terms on the right-hand side.
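The FIR difference equation is just a convolution of the input with the tap weights, as this NumPy sketch shows (the three smoothing coefficients are an arbitrary example):

```python
import numpy as np

def fir_filter(b, x):
    # y[n] = sum over i of b[i] * x[n - i]: convolution with the tap weights
    return np.convolve(x, b)[:len(x)]

b = np.array([0.25, 0.5, 0.25])     # 2nd-order smoothing filter (N = 2)
impulse = np.zeros(8)
impulse[0] = 1.0

# The impulse response of an FIR filter is exactly its coefficient list,
# followed by zeros: finite by construction.
y = fir_filter(b, impulse)
assert np.allclose(y, [0.25, 0.5, 0.25, 0, 0, 0, 0, 0])
```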


Window Implementation 


Signal Filtering

Signal filtering involves adding noise to an input signal and then filtering the distorted signal with the previously discussed filters.


Median Filter

The median filter is a nonlinear digital filtering technique that is often used to remove noise. The median filter obtains the elements of Filtered X using the following equation:

yi = Median(Ji) for i = 0, 1, 2, …, n - 1,

Where Y represents the output sequence filtered X,

n is the number of elements in the input sequence X,

Ji is a subset of the input sequence X centered about the ith element of X,

and elements indexed outside the range of X are taken as zero.
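A NumPy sketch of this definition, with Ji a window centered on the ith element and zero padding outside the range of X (the window half-width and the impulse-noise example are ours):

```python
import numpy as np

def median_filter(x, half_width=1):
    # y[i] = median of the window J_i centred on x[i]; samples indexed
    # outside the range of x are taken as zero, per the definition above.
    x = np.asarray(x, dtype=float)
    padded = np.concatenate([np.zeros(half_width), x, np.zeros(half_width)])
    width = 2 * half_width + 1
    return np.array([np.median(padded[i:i + width]) for i in range(len(x))])

# A single impulse ("salt" noise) is removed completely, something a
# linear smoothing filter cannot do.
x = np.array([1.0, 1.0, 9.0, 1.0, 1.0])
assert np.allclose(median_filter(x), [1.0, 1.0, 1.0, 1.0, 1.0])
```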


Smoothing Filter

Smoothing filters are also known as low-pass filters because they pass low-frequency components and attenuate high-frequency components.

The impulse response of a typical low-pass filter implies that all the coefficients of the mask should be positive. Low-pass filtering in effect blurs the image and removes speckles of high-frequency noise. Larger masks result in more blurring. To avoid a general amplification or damping of the data, the sum of the filter coefficients should be 1.0.

 Many different types of algorithms are used in smoothing. One of the most commonly used algorithms is the "moving average", often used to try to capture important trends in repeated statistical surveys.

Given a series of numbers and a fixed subset size, the moving average can be obtained by first taking the average of the first subset. The fixed subset size is then shifted forward, creating a new subset of numbers that is averaged. This process is repeated over the entire data series. The plot line connecting all the (fixed) averages is the moving average. Thus, a moving average is not a single number, but it is a set of numbers, each of which is the average of the corresponding subset of a larger set of data points. A moving average may also use unequal weights for each data value in the subset to emphasize particular values in the subset.
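The moving average described above can be sketched as a convolution with a window of equal weights summing to 1.0 (the window length and data are made up for illustration):

```python
import numpy as np

def moving_average(x, window):
    # Average each length-`window` subset, sliding forward one sample at a
    # time. Equal weights that sum to 1.0 avoid amplification or damping.
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode='valid')

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = moving_average(x, 3)
assert np.allclose(y, [2.0, 3.0, 4.0])   # averages of each 3-point subset
```

Unequal weights (e.g. a triangular or Gaussian kernel) can be substituted for the `kernel` to emphasize particular values in the subset.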



In compression, we acquire a large number of data points and compress them into a smaller number of points by applying reduction methods to the input signal on the basis of the value contained in each segment.

Consider, for example, an input sine wave of frequency 30 Hz and amplitude 5 V given to a median filter. Let the reduction factor be 25. This means that 25 data points are taken at a time and their average is computed; this point is plotted, then the next 25 data points are considered and averaged. This process repeats to produce the compressed data.
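This block-averaging reduction can be sketched in NumPy; the 30 Hz, amplitude-5 sine and the reduction factor of 25 follow the example above, while the 1000 Hz sampling rate is an assumption of ours:

```python
import numpy as np

def reduce_by_mean(x, factor):
    # Compress: replace each block of `factor` consecutive samples by
    # its mean (dropping any ragged tail).
    n = len(x) // factor * factor
    return x[:n].reshape(-1, factor).mean(axis=1)

fs = 1000                                  # assumed sampling rate, Hz
t = np.arange(fs) / fs
x = 5 * np.sin(2 * np.pi * 30 * t)         # 30 Hz sine, amplitude 5

compressed = reduce_by_mean(x, 25)
assert len(compressed) == 40               # 1000 samples -> 40 points
```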


The displayed value (for example, -1.00978) is the instantaneous mean of the current reduction-factor-sized block.

Cite this Simulator:

Copyright @ 2017 Under the NME ICT initiative of MHRD (Licensing Terms)
 Powered by AmritaVirtual Lab Collaborative Platform [ Ver 00.11. ]