pfb-mod

Modifying the Polyphase Filter Bank to make it robust to quantization effects

The polyphase filter bank (PFB) is a widely used digital signal processing technique for channelizing the input from radio telescopes. Quantizing the channelized signal causes the error in the inverted signal to blow up. We present a practical method for inverting the PFB with minimal quantization-induced error that requires as little as 3% extra bandwidth.
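
As a rough illustration of what channelization involves (a sketch only, not the repository's exact implementation in pfb.py), a forward PFB with a sinc-Hanning prototype filter and the common 4-tap, 2048-sample-block configuration could be written as:

```python
import numpy as np

def sinc_hanning(ntap, lblock):
    """Prototype filter: a sinc multiplied by a Hanning window."""
    n = ntap * lblock
    return np.sinc(np.arange(n) / lblock - ntap / 2) * np.hanning(n)

def forward_pfb(timestream, ntap=4, lblock=2048):
    """Channelize a real timestream into nblock spectra of lblock//2 + 1 channels."""
    w = sinc_hanning(ntap, lblock).reshape(ntap, lblock)
    nblock = len(timestream) // lblock - (ntap - 1)
    spectra = np.empty((nblock, lblock // 2 + 1), dtype=complex)
    for i in range(nblock):
        # Each output spectrum sums ntap windowed blocks before the FFT.
        seg = timestream[i * lblock:(i + ntap) * lblock].reshape(ntap, lblock)
        spectra[i] = np.fft.rfft((w * seg).sum(axis=0))
    return spectra
```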

Outline of code

  • pfb.py contains functions to perform the forward and inverse PFB, and methods to quantize the inverse.
  • helper.py contains utility functions used to analyze the quantization errors induced in the quantized iPFB.
  • conjugate_gradient.py contains functions to optimize the chi-squared value of the iPFB.
  • matrix_operations is a helper module that wraps PFB-related operations so that they behave like the linear operators they really are. These are used in the conjugate gradient descent algorithm.
  • optimal_wiener_thresh.py finds the optimal Wiener threshold parameter.
  • plots/plotall.sh generates all plots (which can be found in plots/img).

Dependencies / libraries used in scripts:

  • JAX, for automatic differentiation in the custom gradient descent functions (see the sketch after this list)

The usual suspects

  • NumPy
  • SciPy (scipy.signal, scipy.optimize)
  • Matplotlib
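
JAX's automatic differentiation is what drives the custom gradient descent routines. As a toy illustration only (the quadratic loss and step size here are hypothetical placeholders, not the repository's actual objective):

```python
import jax
import jax.numpy as jnp

def loss(x, B, u):
    # Quadratic stand-in for the chi-squared objective described below.
    r = B @ x - u
    return jnp.dot(r, r)

grad_loss = jax.jit(jax.grad(loss))   # gradient w.r.t. x via autograd

def gradient_descent(x0, B, u, lr=1e-2, steps=500):
    x = x0
    for _ in range(steps):
        x = x - lr * grad_loss(x, B, u)
    return x

# Example: for B = 2*I and u = 1, the minimizer is x = 0.5 everywhere.
x_hat = gradient_descent(jnp.zeros(10), 2.0 * jnp.eye(10), jnp.ones(10))
```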

Optimal quantization

Let $X$ be a Gaussian random variable with $\mu=0$ and $\sigma=1$, and let $(x_n)$ be a sequence of i.i.d. realizations of $X$. If we would like to quantize this signal to four bits, what is the optimal quantization interval? This is not a mathematically precise question, because the answer depends on what you are optimizing. Let's say we wish to minimize the expected magnitude squared of the residual $R = X - \tilde X$, where $\tilde X$ is the quantized signal.
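
One way to answer this numerically (a Monte Carlo sketch, not the repository's optimal_wiener_thresh.py routine) is to sweep the step size of a uniform 4-bit quantizer and minimize the sample mean of the squared residual:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)          # i.i.d. draws of X ~ N(0, 1)

def quantize(x, delta, nbits=4):
    # Uniform mid-tread quantizer: round to the nearest multiple of delta,
    # then clip to the 2**nbits representable levels.
    half = 2 ** (nbits - 1)
    codes = np.clip(np.round(x / delta), -half, half - 1)
    return codes * delta

def mean_sq_residual(delta):
    return np.mean((x - quantize(x, delta)) ** 2)

res = minimize_scalar(mean_sq_residual, bounds=(0.05, 1.0), method="bounded")
print(f"optimal step = {res.x:.3f}, mean squared residual = {res.fun:.5f}")
```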

Let $(y_n)$ be the normalized FFT of $(x_n)$, defined as follows:

$$y_n = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} \exp(-2\pi i nk/N)\, x_k$$
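
In NumPy this corresponds to the "ortho" normalization convention; a quick check against the explicit sum:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(8)
N = len(x)

# Explicit sum from the definition above.
n, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
y_explicit = np.exp(-2j * np.pi * n * k / N) @ x / np.sqrt(N)

# np.fft.fft with norm="ortho" applies the same 1/sqrt(N) factor.
y = np.fft.fft(x, norm="ortho")
assert np.allclose(y, y_explicit)
```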

Optimizing the Inverse PFB using extra information

conjugate_gradient.py contains code that lets us optimize the inverse PFB using some additional information.

We perform conjugate gradient descent on a matrix equation of the form

$$B x = u$$

The quantity we are minimizing, the chi-squared, takes the form

$$\chi^2 = (d - Ax)^T N^{-1} (d - Ax)$$

Taking the derivative with respect to the model ($x$) and setting it to zero, we get

$$\frac{d\chi^2}{dx} = -2A^TN^{-1}(d - Ax) = 0 \Rightarrow A^TN^{-1}(d - Ax) = 0,$$

which rearranges into the normal equations $A^TN^{-1}A\,x = A^TN^{-1}d$, i.e. the system $Bx = u$ above with $B = A^TN^{-1}A$ and $u = A^TN^{-1}d$.
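
A minimal sketch of solving such a system with SciPy's conjugate gradient, applying $B = A^TN^{-1}A$ matrix-free in the spirit of matrix_operations (the dense $A$ and white-noise $N$ here are hypothetical stand-ins, not the repository's operators):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 100))     # stand-in for the forward operator
Ninv = np.eye(200)                      # white noise: N^{-1} is the identity
x_true = rng.standard_normal(100)
d = A @ x_true + 0.01 * rng.standard_normal(200)

# Apply B = A^T N^{-1} A without ever forming it explicitly.
B = LinearOperator((100, 100), matvec=lambda v: A.T @ (Ninv @ (A @ v)))
u = A.T @ (Ninv @ d)

x_hat, info = cg(B, u)                  # info == 0 signals convergence
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```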

Plots

Figures generated by plots/plotall.sh (found in plots/img):

  • eigenvalues_ntap4_lblock2048
  • four_segments_sinc_hanning
  • RMSE_analytic_lblock
  • RMSE_conjugate_gradient_descent_0percent
  • RMSE_conjugate_gradient_descent_1percent
  • RMSE_conjugate_gradient_descent_3percent
  • RMSE_conjugate_gradient_descent_5percent
  • RMSE_log_virgin_IPFB_residuals_wiener
  • rmse_wiener_eigenspec
  • RMSE_wiener_lblock
  • RMSE_wiener_long_time
  • sidelobes
  • sample_prior_algorithm
  • sw_extract_tikz
  • sw_matrix