
Vector Fitting: Inaccurate results for networks with nonlinear frequency? #846

Vinc0110 opened this issue Feb 13, 2023 · 12 comments

@Vinc0110 (Collaborator) commented Feb 13, 2023

There seems to be an issue with vector fitting of networks with nonlinear frequency vectors.

So far, testing has probably only been done with linearly spaced frequency data, so I never noticed. Is the algorithm even supposed to work with nonlinearly spaced frequency data?

Example with an RF inductor (Coilcraft 4310LC-132), whose frequency spacing is neither linear nor logarithmic:

import skrf
import matplotlib.pyplot as mplt

nw = skrf.Network('4310LC-132_series.S2P')
print(nw.frequency.sweep_type)      # output: 'unknown', should be 'log'
vf = skrf.VectorFitting(nw)
vf.vector_fit(n_poles_real=3, n_poles_cmplx=10, init_pole_spacing='lin')   # initial spacing not important

fig, ax = mplt.subplots(1, 2, figsize=(9, 6))
fig.suptitle(nw.name)
vf.plot_s_db(ax=ax[0])
ax[0].set_xscale('linear')
vf.plot_s_db(ax=ax[1])
ax[1].set_xscale('log')
mplt.show()

[plot: vf_ind_norm — vector fit result on linear and logarithmic frequency axes]

@Vinc0110 (Collaborator, Author)

The output of nw.frequency.sweep_type is actually misleading. The frequency spacing is logarithmic, but it is ever so slightly off the ideal curve, so the check falsely rejects it. We should probably increase the tolerance.
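
For illustration, a tolerance-based check could look something like this (just a sketch of the idea, not the actual implementation in skrf; the helper name and tolerance are made up):

# Sketch: detect logarithmic spacing by testing whether the steps of log(f)
# are constant within a relative tolerance, instead of requiring an exact match.
import numpy as np

def looks_logarithmic(f, rtol=1e-3):
    # f must be a strictly positive frequency vector
    steps = np.diff(np.log(f))
    return np.allclose(steps, steps[0], rtol=rtol, atol=0)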

@andree-sc

Hi Vincent,
I don't believe it's an error related to the frequency spacing. I've tried it with measurement data from an R&S ZVL VNA (the DUT is a Panasonic ELC11D220F) with log frequency spacing, and it looks just fine:
[plot: vector fit of the ELC11D220F measurement]

Your data seems weird, though. Are those the S-parameters of a coil? Because it should be symmetric, as far as I know. Do you also have the measurement data with linearly spaced frequency?

One hypothesis I can think of: I believe the fitted responses share a common pole set (as far as I understood), so that might cause you some trouble when trying to fit asymmetric networks?

@Vinc0110 (Collaborator, Author)

For the DUT, I randomly picked a Coilcraft 4310LC-132. On the Coilcraft website, they provide 2-port S-parameters measured with the inductor either in shunt or in series. In my test, I used the data with the inductor in series.

You are right, the network is not symmetric, but it should be, right? That could definitely be a problem.
[plots: 4310-132_series_2port and 4310-132_series_asymmetric — measured 2-port data showing the difference between S11 and S22]

@andree-sc

Alright, so I've given this some thought...
The difference between S11 and S22 is also present in the files from my DUTs. I think that is just measurement error. Inductors are quite tricky components, and especially the reflection-method measurements should be taken with a grain of salt. (It's good to see that S12 and S21 are quite similar, since that is what I mainly use to calculate the impedance.) By the way: yes, we also always use series measurements for coils (and shunt for caps).

However, I've tried your code snippet from above with several DUTs and I didn't run into a similar problem (even though they are slightly asymmetric). All of my data is log-spaced in frequency, so I don't think the problem lies here...
At first I thought it might be due to overfitting, so I lowered the number of poles (and also increased it afterwards) but nothing seems to make a difference here.
This leads me to conclude that there's some peculiar edge case here... but I don't believe it's connected to the frequency spacing of the data.

Here's the plot from above, recreated for the ELC[...]. Note the difference between S11 and S22 in the lin plot, yet the fit seems to work just fine:
[plot: vector fit of the ELC measurement on linear and logarithmic frequency axes]

All in all, I think you picked a very interesting coil at random :p

@Vinc0110 (Collaborator, Author)

Thanks for your tests!

I'm also starting to think that those Coilcraft measurements are somewhat "interesting". None of the data for the 4310LC is passive, none is symmetric, and none is reciprocal. Trying to vector fit these particular networks turns out to be quite a challenge. See plots below for n_poles_real=3, n_poles_cmplx=11.

Still, I don't understand why it does not work. It should not even be important to have passive, symmetric, or reciprocal data. And as you said, logarithmic frequency spacing should also not be an issue.

[plots: vector fit results for 4310-132_series, 4310-132_shunt, 4310-352_series and 4310-352_shunt]

@andree-sc

The data isn't passive? Hmm... I think it is, though. 0.5*(S+S*) seems to be bounded by unity everywhere... there are two small bumps where it dips below zero (is that allowed for an S-parameter representation?), but other than that it's < 1 over the whole frequency range.

[plot: 0.5*(S+S*) over frequency]

I was thinking maybe it has to do with the shared pole set, so I generated a network from only its S11 parameters, but the results are almost identical... I also extended the frequency range for the model a bit (just out of curiosity), and the model seems to dip below zero at around 10 GHz...
[plot: 1-port fit of S11 with extended frequency range]
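
For anyone who wants to reproduce the 1-port test, something along these lines should work (a sketch, assuming the touchstone file from the first post; the pole counts are just for illustration):

# Sketch: build a 1-port network from S11 only and fit it separately.
import skrf

nw = skrf.Network('4310LC-132_series.S2P')
nw_s11 = skrf.Network(frequency=nw.frequency, s=nw.s[:, :1, :1], name='S11 only')

vf_s11 = skrf.VectorFitting(nw_s11)
vf_s11.vector_fit(n_poles_real=3, n_poles_cmplx=10)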

Okay, so we can conclude: it doesn't seem to be correlated to either the log frequency spacing or the shared pole set. I think we can also exclude reciprocity (given the 1-port test).
You're absolutely right, passivity, reciprocity and symmetry shouldn't matter.

I'm honestly out of ideas here... As a last resort I thought, "there is usually a 'sweet spot' for the number of poles", so I tried n_poles_real=3, n_poles_cmplx=1, but it's still off by quite a bit (this is the 1-port again):
[plot: 1-port fit with n_poles_real=3, n_poles_cmplx=1]

Is there some sort of weighting in the algorithm that weights higher frequencies more heavily, or something? Or is it some idiosyncrasy of the algorithm? I truly have nothing more to go on here...

@Vinc0110 (Collaborator, Author)

Your definition of symmetry is new to me, or maybe I just don't get it. For my statement I trusted the output from Network.is_symmetric(), which is False for all four networks. Now I remember this has bothered me before: the function is only checking for $S_{11} == S_{22}$.

I tried to start a discussion about the definition used in Network.is_symmetric() some time ago. It would be interesting to hear your opinion about it. See this old post: #467 (comment)

In addition, I found this report; see section 2 about network symmetry.

Regarding the actual issue here, I'm also out of ideas for now. I tried different weighting methods for the residue fitting. Currently, uniform weighting is used, i.e., frequency-independent weights. Other weights do change things, such as weight = np.sqrt(self.network.frequency.df / np.mean(self.network.f) * self.network.frequency.npoints), but I never got it to work for these example networks.

Sharing a common set of poles should not be an issue. As long as a sufficiently large number of poles is present, all features of all network responses should be represented by the model. After all, the residues are fitted individually for all responses.
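
Roughly speaking, the fitted model has the following form, where the poles $p_k$ are shared by all responses, while the residues $r_{ij,k}$ and the constant and proportional terms $d_{ij}$ and $e_{ij}$ are fitted individually per response:

$$S_{ij}(s) \approx \sum_k \frac{r_{ij,k}}{s - p_k} + d_{ij} + s \cdot e_{ij}$$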

@Vinc0110 (Collaborator, Author)

Reading your comment again, I realize you were writing about passivity, not symmetry. I'm sorry!

Still, Network.is_passive() also reports False for all four networks.

For the lossless case, passivity is tested with the unitary condition $S^T \cdot S^* = \mathbb{1}$:

  • $S_{11}^* \cdot S_{11} + S_{21}^* \cdot S_{21} = 1 \Longleftrightarrow |S_{11}|^2 + |S_{21}|^2 = 1$
  • $S_{11}^* \cdot S_{12} + S_{21}^* \cdot S_{22} = 0$
  • $S_{12}^* \cdot S_{11} + S_{22}^* \cdot S_{21} = 0$
  • $S_{12}^* \cdot S_{12} + S_{22}^* \cdot S_{22} = 1 \Longleftrightarrow |S_{12}|^2 + |S_{22}|^2 = 1$

Now, I'm honestly not sure about the lossy case. Here is what we have for the left-hand sides for one of the Coilcraft examples:
[plot: passivity — left-hand sides of the unitary conditions over frequency]
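
For reference, the left-hand sides above can be evaluated directly with numpy (a sketch, with nw loaded as in the first post):

# Sketch: evaluate the left-hand sides of the four unitary conditions.
import numpy as np

S = nw.s                                                  # shape (nfreqs, 2, 2)
lhs_1 = np.abs(S[:, 0, 0])**2 + np.abs(S[:, 1, 0])**2    # |S11|^2 + |S21|^2
lhs_2 = np.conj(S[:, 0, 0]) * S[:, 0, 1] + np.conj(S[:, 1, 0]) * S[:, 1, 1]
lhs_3 = np.conj(S[:, 0, 1]) * S[:, 0, 0] + np.conj(S[:, 1, 1]) * S[:, 1, 0]
lhs_4 = np.abs(S[:, 0, 1])**2 + np.abs(S[:, 1, 1])**2    # |S12|^2 + |S22|^2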

I was wondering if that "wild" behavior at higher frequencies has something to do with the error of the vector fit. Not sure if it does...
[plot: passivity_error — deviation of the fit from the original data over frequency]

@andree-sc

No problem, nothing to apologize for *shrug*
Well, I have to admit, my footing regarding passivity conditions for S-parameters is not completely sound... I just tested for S[i,j] + np.conj(S[i,j]) < 1, and I'm not sure if that holds... anyway, I have some catching up to do here.
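
For what it's worth, a common check for the general (lossy) case is that no singular value of S exceeds 1 at any frequency; a minimal numpy sketch (with nw loaded as in the first post):

# Sketch: general passivity check via the singular values of S per frequency.
import numpy as np

sigma_max = np.linalg.svd(nw.s, compute_uv=False).max(axis=1)   # shape (nfreqs,)
print('passive everywhere:', bool(np.all(sigma_max <= 1.0)))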

I'm not entirely sure what you mean by error. Is the error fed back to the algorithm in some way? From my understanding, if the model is that far off, the error should be large as well.

@ryan-workFromHome

> Other weights do change things, such as weight = np.sqrt(self.network.frequency.df / np.mean(self.network.f) * self.network.frequency.npoints), but I never got it to work for these example networks.

I'm curious about how to apply a customized weight in vector_fit.
Could you please let me know where I should put that equation to make it work?
Thank you!

@Vinc0110 (Collaborator, Author)

> I'm not entirely sure what you mean by error. Is the error fed back to the algorithm in some way? From my understanding, if the model is that far off, the error should be large as well.

What I called error is just the deviation of the fit from the original data, evaluated and plotted at each sample frequency:
error_s11 = np.abs(vf.get_model_response(0, 0, freqs=nw.f) - nw.s[:, 0, 0])
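
Expanded into a small snippet for all four responses (a sketch, with vf and nw from the first post):

# Sketch: plot the fitting deviation for each response of the 2-port.
import numpy as np
import matplotlib.pyplot as mplt

for i in range(2):
    for j in range(2):
        err = np.abs(vf.get_model_response(i, j, freqs=nw.f) - nw.s[:, i, j])
        mplt.plot(nw.f, err, label=f'error S{i+1}{j+1}')
mplt.xscale('log')
mplt.legend()
mplt.show()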

@Vinc0110 (Collaborator, Author)

> I'm curious about how to apply a customized weight in vector_fit.

This is not currently possible via the regular API, but you can obviously modify your local copy of the code. The weights should be applied to the coefficient matrix A and the network responses freq_responses before both get passed to the least-squares solver.

# part 2: constant (variable d) and proportional term (variable e)
A[:, idx_constant] = 1
A[:, idx_proportional] = s[:, None]
logging.info(f'Condition number of coefficient matrix = {int(np.linalg.cond(A))}')
# solve least squares and obtain results as stack of real part vector and imaginary part vector
x, residuals, rank, singular_vals = np.linalg.lstsq(np.vstack((A.real, A.imag)),
                                                    np.hstack((freq_responses.real, freq_responses.imag)).transpose(),
                                                    rcond=None)

For my tests I placed the following code after line 548, at which point the matrix A is fully assembled:

# weighting
#weight = np.ones_like(self.network.f)
weight = np.sqrt(self.network.frequency.df / np.mean(self.network.f) * self.network.frequency.npoints)
print(np.shape(weight))
print(np.shape(A))

A = weight[:, None] * A
freq_responses = weight * freq_responses

As you can see from the printed shape of A, axis 0 has the length of the frequency vector in self.network.f, so broadcasting is required for the multiplication.
With the way A is reused for the least-squares fitting of each frequency response, it is not possible to set individual weights for each response. That would require a bit more work.
