Implement low-variance sampling algorithm #48
Comments
I'm almost sure it doesn't require that; you only need the previous number of particles. Another advantage of the method is that it requires generating only one random number, which may improve performance.
@ivanpauno When doing the following computation on each iteration: `u = r + (m - 1) / M`, it seems to me that `M` has to be fixed before sampling starts. Taking less than `M` samples would then leave part of the weighted sequence uncovered.
Indeed, LVS requires M to be known in advance for its qualities to hold in the form shown in Chapter 4, and M is the number of particles you want after resampling. As it is, it's not fit for drop-in replacement in line 8 of the implementation of KLD proposed in Table 8.4. At its heart, LVS makes sure the first random number it generates offsets a "sampler" that moves from start to end of the weighted sequence of particles, so that each particle with normalized weight `w` gets selected either `floor(M * w)` or `ceil(M * w)` times, with an expected count of exactly `M * w`.
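For reference, here is a minimal C++ sketch of that sampler, following Table 4.4 of the book; `Particle`, the function name, and the signature are illustrative, not part of the beluga API:

```cpp
#include <cstddef>
#include <random>
#include <vector>

struct Particle {
  double state;   // stand-in for the real state type
  double weight;  // importance weight, assumed normalized so weights sum to 1
};

// Low-variance sampler as in Table 4.4 of Probabilistic Robotics: draw a
// single offset r in [0, 1/M), then sweep the cumulative weights with a
// fixed stride of 1/M, copying the particle under each sweep position.
std::vector<Particle> low_variance_sample(const std::vector<Particle>& particles,
                                          std::size_t num_samples,
                                          std::mt19937& rng) {
  const auto M = static_cast<double>(num_samples);
  std::uniform_real_distribution<double> uniform{0.0, 1.0 / M};
  const double r = uniform(rng);  // the only random number generated
  std::vector<Particle> resampled;
  resampled.reserve(num_samples);
  double c = particles.front().weight;  // running cumulative weight
  std::size_t i = 0;
  for (std::size_t m = 0; m < num_samples; ++m) {
    const double u = r + static_cast<double>(m) / M;  // current sweep position
    while (u > c && i + 1 < particles.size()) {  // guard against FP round-off
      ++i;
      c += particles[i].weight;
    }
    resampled.push_back(particles[i]);
  }
  return resampled;
}
```

Note how a single draw from the RNG drives the whole sweep, which is where both the variance reduction and the performance benefit mentioned above come from.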
Now, say we replace M by the upper bound on the number of particles KLD may require, and stop the sweep as soon as the KLD condition is satisfied. The main drawback I can think of is that if the condition triggers early, only particles near the beginning of the weighted sequence ever get sampled. Randomizing the order in which the sweep positions are visited would probably keep the good qualities of LVS while making it fit for use with KLD at the expense of a bit more complexity in the sample selection algorithm, but a more detailed analysis should be done to ensure that's the case.
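To make the trade-off concrete, here is a hedged sketch of what that variant could look like; `kld_condition_met` is a hypothetical stand-in for the check performed by KLD resampling (Table 8.4), and none of these names come from beluga:

```cpp
#include <cstddef>
#include <functional>
#include <random>
#include <vector>

// Same illustrative particle type as in the sketch above.
struct Particle {
  double state;
  double weight;  // normalized importance weight
};

// Hypothetical LVS-for-KLD variant: M is replaced by an upper bound and the
// sweep stops as soon as the KLD predicate is satisfied. Since positions are
// visited front to back, an early stop only ever draws from the beginning of
// the weighted sequence, which is the bias discussed above.
std::vector<Particle> low_variance_sample_kld(
    const std::vector<Particle>& particles, std::size_t max_particles,
    const std::function<bool(const std::vector<Particle>&)>& kld_condition_met,
    std::mt19937& rng) {
  const auto M = static_cast<double>(max_particles);
  std::uniform_real_distribution<double> uniform{0.0, 1.0 / M};
  const double r = uniform(rng);
  std::vector<Particle> resampled;
  double c = particles.front().weight;  // running cumulative weight
  std::size_t i = 0;
  for (std::size_t m = 0; m < max_particles; ++m) {
    const double u = r + static_cast<double>(m) / M;
    while (u > c && i + 1 < particles.size()) {
      ++i;
      c += particles[i].weight;
    }
    resampled.push_back(particles[i]);
    if (kld_condition_met(resampled)) {
      break;  // early exit truncates the sweep, favoring the first particles
    }
  }
  return resampled;
}
```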
As a follow-up and in order for me not to forget it: a more elegant solution to the bias-to-sample-the-first-ones problem of the LVS-for-KLD proposal above is to randomize the sampling order with a full-cycle pseudorandom number generator. A FC PRNG is useful for many things, but one way to think about it is as a counter that is guaranteed to go through all the numbers in its range (for an LFSR, all but 0), but doing so in a pseudorandom order. LFSR PRNGs are trivial to implement (literally an xor operation and a shift), and full-cycle tap configurations are tabulated for different cycle lengths. The main changes would be: the LVS value for M should be the largest cycle C that's still smaller than the upper bound on the number of particles, and the sweep positions should be visited in the order produced by the generator instead of sequentially.
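A minimal sketch of such a generator, assuming a 16-bit Galois LFSR with the well-known maximal-length taps `0xB400`; the class name is illustrative:

```cpp
#include <cstdint>

// Minimal 16-bit Galois LFSR. With taps 0xB400 (polynomial
// x^16 + x^14 + x^13 + x^11 + 1) it is maximal-length: starting from any
// nonzero seed it visits every value in [1, 2^16 - 1] exactly once, in a
// pseudorandom order, before repeating.
class FullCycleLfsr16 {
 public:
  explicit FullCycleLfsr16(std::uint16_t seed) : state_(seed != 0 ? seed : 1) {}

  std::uint16_t next() {
    const bool lsb = (state_ & 1u) != 0;
    state_ >>= 1;         // the shift...
    if (lsb) {
      state_ ^= 0xB400u;  // ...and the xor with the feedback taps
    }
    return state_;
  }

 private:
  std::uint16_t state_;
};
```

The sampler would then visit position `lfsr.next() - 1` of the `C` stride positions at each step instead of sweeping them front to back, so an early stop from the KLD criterion no longer systematically favors the beginning of the sequence.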
It's worth noting that we don't need to solve the LVS-KLD compatibility problem: from an implementation standpoint, we can simply use low-variance sampling with the existing fixed resampling policy. Regarding the proposals, I think we could devise an experiment to test how population diversity evolves over time with different algorithms and distributions. There are probably specific techniques and metrics that statisticians use to verify that their algorithms behave as expected, which we could delve into if necessary. Not saying that we should, though.
Isn't it the same 🤔? Order is arbitrary. Since KLD resampling works with a lower bound on particle count, we could also sample in batches, i.e. apply low-variance sampling N times to the same M-particle approximation of the proposal distribution until the KLD condition is satisfied.
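A sketch of that batched variant, reusing the illustrative `Particle` type and `low_variance_sample` routine from the first sketch above, along with the same hypothetical `kld_condition_met` predicate:

```cpp
#include <cstddef>
#include <functional>
#include <random>
#include <vector>

// Same illustrative definitions as in the first sketch above.
struct Particle { double state; double weight; };
std::vector<Particle> low_variance_sample(const std::vector<Particle>&,
                                          std::size_t, std::mt19937&);

// Batched resampling: run low-variance sampling repeatedly over the same
// M-particle approximation, appending one batch at a time, until the KLD
// lower-bound condition is satisfied.
std::vector<Particle> batched_low_variance_sample(
    const std::vector<Particle>& particles, std::size_t batch_size,
    const std::function<bool(const std::vector<Particle>&)>& kld_condition_met,
    std::mt19937& rng) {
  std::vector<Particle> resampled;
  do {
    const auto batch = low_variance_sample(particles, batch_size, rng);
    resampled.insert(resampled.end(), batch.begin(), batch.end());
  } while (!kld_condition_met(resampled));
  return resampled;
}
```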
As of today, this is low-hanging fruit for anyone who wants to contribute by implementing a new range adaptor called `beluga::views::low_variance_sample`.
Description
From Probabilistic Robotics, Chapter 4.2.4, two strategies are proposed to reduce the variance of the particle filter.
The first proposed strategy for variance reduction is already partially implemented in `beluga_amcl` by not updating the filter when the robot is not moving (note: there is still work to do if we want to reduce the resampling frequency by measuring the variance of the importance weights, but that's for another ticket). The second strategy consists of using a low-variance sampling algorithm for the resampling step. We should probably investigate whether it's worth implementing in our first iteration of the library.
We also need to consider how this algorithm interacts with KLD resampling, since it seems to require the number of particles to be determined from the start.
Definition of Done
A new range adaptor called `beluga::views::low_variance_sample` is implemented.