Support V2 primitives #136
Is it possible to submit some small PRs to help support V2 primitives? For instance, would a PR allowing VQE to support both V1 and V2 be useful? Or is it better to wait for qiskit-community/qiskit-machine-learning#817 to be merged? Or, another possibility, is it something you're already working on, in which case a PR won't help much?
@tnemoz There is no work being done by the present maintainers (IBM) on V2 (or ISA, #164) changes for algorithms. See the disclaimer in the readme.
In the PR you linked, the Qiskit Machine Learning maintainers have copied over just the parts of algorithms that are needed/used in ML, so they can be maintained/supported there going forward independently. Qiskit Optimization has a PR that does likewise: qiskit-community/qiskit-optimization#623. I do not think either will change VQE though, as neither uses it. There is a fair amount to be changed here overall, if algorithms were to be kept as they are and maintained going forward.
Would it be possible to be assigned to this issue, along with @tnemoz? We'd like to give it a try if possible.
The new V2 primitives handle options differently across implementations. Is it worth it to add

```python
if options is not None:
    if not isinstance(estimator, qiskit_ibm_runtime.BaseEstimatorV2):
        raise ValueError("options is only supported for a runtime Estimator")
```

This problem happens, for instance, when migrating […].
@LenaPer @tnemoz I assigned the issue to you as requested. Now, clearly, to support runtime it needs the ISA change as well. Very shortly V1 primitives will be removed from runtime (they have been deprecated there for a while) and those remaining in Qiskit (the reference ones that run classically) will be deprecated. As was noted above, Qiskit Machine Learning has copied over what it used from here, and Qiskit Optimization likewise. ML has yet to do anything about ISA and V2, while Opt does have a draft PR addressing things. If you go ahead and look at things, perhaps in more depth than you might have done so far, and decide you still want to move forward, perhaps outline what/how you want to change things, so we can have a discussion up front before you really do too much. Regarding the runtime comment above: in places we do have optional install checks, so the check for runtime could be done conditionally, based on it being installed or not, rather than having it as a hard dependency.
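A minimal sketch of what such a conditional check could look like (the helper name and structure here are assumptions for illustration, not existing qiskit-algorithms code):

```python
# Hypothetical sketch: only reference qiskit-ibm-runtime if it is installed,
# so it never becomes a hard dependency of the algorithms package.
try:
    from qiskit_ibm_runtime import EstimatorV2 as RuntimeEstimatorV2

    HAS_RUNTIME = True
except ImportError:
    HAS_RUNTIME = False


def check_options_supported(estimator, options):
    """Raise if options are passed with an estimator that cannot accept them."""
    if options is not None:
        if not (HAS_RUNTIME and isinstance(estimator, RuntimeEstimatorV2)):
            raise ValueError("options is only supported for a runtime Estimator")
```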
The goal would be to remove the support for V1 primitives, since those are going to be deprecated, and to add ISA support, in a similar fashion to what's being done in the qiskit-optimization PR. Eventually, I think the goal would be to be able to use every algorithm in here with the Runtime and Aer primitives (as noted above, the Qiskit ones have a somewhat different interface than these two) with the least friction possible when moving from one to another. If that's OK with you, we'll issue a draft PR shortly that addresses only a small part of the issue (probably the minimal eigensolvers or phase estimation), just so that we can discuss with actual examples whether things are moving in the right direction according to you. That would avoid a painful review where the same mistake is repeated throughout all the files.
Sounds good. Just supporting V2 will be a breaking change with regard to code using the Qiskit or Aer primitives that still works, but since it's already broken with Runtime, where V1 has been removed, and V1 is about to be deprecated in Qiskit, just focusing on V2 seems fine. I am not sure what interface differences exist, having not looked at them in detail in a while. The way algorithms were done with V1, even if there were construction differences, since the primitives are built and passed to the algorithms, as long as the run was the same it did not matter. I think it's the case that the options on the run could vary with V1, but since options are implementation dependent (though shots was often common), algorithms would not, in general, alter them, leaving them as they were set during construction. I had put some thoughts into #164 (the ISA one). Doing it similarly to how Opt did things is ok - there the return of the new Sampler was converted back to probabilities to make it compatible with how the algos processed V1. I had thought that for V2 Sampler usage maybe things should go over to counts - especially so if this is reflected in any result, as it would be more compatible with what is expected now. Sure, doing a draft PR and starting with some chosen subset seems great. You might want to try and find both an Estimator and a Sampler example to see how it looks across both when used here.
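For reference, a hedged sketch of the probability-compatibility approach mentioned above, using the V2 reference sampler from qiskit.primitives (the conversion itself is an assumption about how an algorithm might keep processing V1-style quasi-distributions):

```python
from qiskit import QuantumCircuit
from qiskit.primitives import StatevectorSampler

circuit = QuantumCircuit(2)
circuit.h(0)
circuit.cx(0, 1)
circuit.measure_all()

sampler = StatevectorSampler(default_shots=1024, seed=42)
result = sampler.run([circuit]).result()

# V2 returns counts rather than the V1 quasi-probability distribution...
counts = result[0].data.meas.get_counts()
# ...so divide by the shot total to recover probabilities for V1-era code paths.
shots = sum(counts.values())
probabilities = {bitstring: count / shots for bitstring, count in counts.items()}
```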
@woodsp-ibm The Draft PR is #197. Let us know what you think!
I might have overlooked this 😅
While adapting the code for Grover, I noticed that some tests would randomly fail, even though I set a seed for the random generator:

```python
@idata(itertools.product(["shots"], [[1, 2, 3], None, 2]))
@unpack
def test_iterations_with_good_state_sample_from_iterations(self, use_sampler, iterations):
    """Test the algorithm with different iteration types and with good state"""
    grover = self._prepare_grover(use_sampler, iterations, sample_from_iterations=True)
    problem = AmplificationProblem(Statevector.from_label("111"), is_good_state=["111"])
    result = grover.amplify(problem)
    self.assertEqual(result.top_measurement, "111")
```

The problem comes from these lines in Grover:

```python
if self._sample_from_iterations:
    power = algorithm_globals.random.integers(power)
```

During the test, the sampled `power` can be 0, in which case no Grover iteration is applied at all and the top measurement need not be `111`. What would be the best way to deal with this? I must admit that I don't really understand the point of this option, so I'm unsure what's the best course of action here. Does it make sense to use this option when performing a single iteration with a given power?

Note that the test would (probably) pass if we sampled a power in (0, power) instead of [0, power), since that would ensure that we at least progressed a bit in the right direction. But once again, since I don't really understand this option in the first place, I don't know whether it would make sense to do such a thing.
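A minimal runnable sketch of the alternative sampling range suggested above (using a plain NumPy generator as a stand-in for `algorithm_globals.random`; this is an illustration, not the actual fix):

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # stand-in for algorithm_globals.random
power = 3

# [0, power): can yield 0, i.e. no Grover iteration at all
print(rng.integers(power))

# [1, power): guarantees at least one iteration is applied
print(rng.integers(1, power))
```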
@Cryoris would you have any response to the comment above? I think you did some refactoring in Grover at some point and may have more insight.
I will just add this here, as it relates to further changes around the primitives: it seems a utility method […]
@woodsp-ibm A question concerning the use of options. I'll take the example of `ComputeUncompute`. There are a lot of lines that are used to update the local options and to decide which options to use, but as far as I can tell, these options are only ever used in the Sampler's `run` method. We could leave it like this in case additional options are used, but the problem is that some lines now break with the V2 primitives. For instance, this line:

```python
opts = copy(self._sampler.options)
```

This would work for a V1 sampler, but not for the V2 ones. I initially wanted to simply replace every mention of options by the number of shots, but this line passes the options along to the job:

```python
return AlgorithmJob(ComputeUncompute._call, sampler_job, circuits, self._local, local_opts)
```

And I'm unsure why, which means I'm also unsure about how to replace it, especially since it seems to be tested:

```python
self.assertEqual(result.options.__dict__, {"shots": 1024})
```

What would be the best course of action here? Supporting only the number of shots for the options would also considerably simplify the code. If it can help in discussing it, I can update the draft PR with these changes to better show what I'm talking about.
Options was part of the primitive base class before, for the V1 versions. We passed it so you could run things without creating a new instance/primitive with new default options, but could re-use it and change options, like with the other primitives of the time. If shots is all that is supported generically now, your suggestion to limit things to that makes sense. (I think it's precision for the Estimator, right?) Why was options passed to the AlgorithmJob? I imagine we would have done that for compatibility with the other primitives and what they did, e.g. here in the original Qiskit Sampler: https://github.com/Qiskit/qiskit/blob/8e5fab6c7176edcb134e9b00d141efa32f2fe3db/qiskit/primitives/sampler.py#L145. The creation of the job in the newer statevector_sampler looks different, and maybe that should now be more the model/reference, so as to have things done compatibly. Does that help at all?
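A hedged sketch of what shots-only handling against the V2 reference sampler could look like (the circuit and values are made up for illustration):

```python
from qiskit.circuit import Parameter, QuantumCircuit
from qiskit.primitives import StatevectorSampler

theta = Parameter("theta")
circuit = QuantumCircuit(1)
circuit.rx(theta, 0)
circuit.measure_all()

# In V2, shots is a run argument (with a default on the sampler itself),
# rather than a generic options object copied from the primitive.
sampler = StatevectorSampler(default_shots=1024, seed=42)
job = sampler.run([(circuit, [0.5])], shots=2048)
counts = job.result()[0].data.meas.get_counts()
```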
It definitely does, thanks!
@woodsp-ibm I have two questions:

[…]

```python
with self.subTest("check final accuracy"):
    self.assertAlmostEqual(result.fun[0], 0.473, places=3)
```

When I changed […]. Also, when I increase the maxiter to higher values like […].
The goal of unit testing is coverage, to make sure code works as expected - all paths are tested so it covers all the code. In my mind you could limit the transpilation to just certain test(s), to ensure things work as expected when that is specified, and otherwise just use the reference primitives, where transpilation is not needed.
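For example, a small sketch of what a transpilation-specific test setup could look like (GenericBackendV2 is the fake backend in Qiskit 1.x; its use here is an assumption for illustration, not existing test code):

```python
from qiskit import QuantumCircuit
from qiskit.providers.fake_provider import GenericBackendV2
from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager

backend = GenericBackendV2(num_qubits=3)
pass_manager = generate_preset_pass_manager(optimization_level=1, backend=backend)

circuit = QuantumCircuit(3)
circuit.h(0)
circuit.cx(0, 1)
circuit.cx(1, 2)

# Produce an ISA circuit matching the backend's target, as the V2
# runtime primitives require.
isa_circuit = pass_manager.run(circuit)
```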
What is unclear - the subTest bit? That unit test has a couple of asserts in it, right? Should the first assert fail, the test, without subTest, would fail and the next assert would never be run. The asserts may be checking result aspects that are independent, where if the first fails the latter may still pass; without the subTest you would never know, as it would not reach that assert until the first was fixed and passed. Having the result of both may give you insight into what might be failing. It of course depends on the code it's testing. Or were you just asking why it checks that value specifically? For that I do not know; maybe it was just the outcome with things working as expected. Do note that the V1 reference primitives produced an ideal result by default, though I believe the new StatevectorEstimator does too. The V2 StatevectorSampler, though, now does 1024 shots by default, I believe - did you try increasing that value (default_shots) when you create the new V2 Sampler used for the fidelity computation? I do not believe there is a way to get the ideal distribution back, only to approach it with larger shot counts. The StatevectorSampler also has a seed for the sampling that you might like to set, so the samples are predictable too.
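A minimal sketch of that seeding suggestion (parameter names as in qiskit.primitives.StatevectorSampler; the values are illustrative):

```python
from qiskit.primitives import StatevectorSampler

# More shots to approach the ideal distribution, plus a fixed seed so the
# sampled outcomes are reproducible across test runs.
sampler = StatevectorSampler(default_shots=100_000, seed=42)
```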
That's on me; carelessly modifying seeds isn't a super strategy, it seems, especially when the seed is used to build a circuit 🙃
Ok, I'll modify accordingly. Thanks!
@woodsp-ibm There seems to be a problem with the tests for the VQD algorithm. Though they run fine with a perfect sampler, they fail when replacing:

```python
self.fidelity = ComputeUncompute(Sampler())
```

by:

```python
self.fidelity = ComputeUncompute(Sampler(options={"shots": 100_000}))
```

This poses two problems: […]

In the context of this PR, what would be the best course of action? As a side note, one thing I don't understand is that this remains true even for a high number of shots (like 1_000_000). That should give results close enough to the actual probability distribution for the optimizer to work properly, so the fact that it doesn't would suggest a problem in the code for VQD? Though I'm not sure at all about that.
What might be worth trying is VQD with the V1 Sampler set to use shots, so it does not give an exact outcome, and seeing whether that "matches" what you are seeing with the V2 one. I say "match" because, since it samples, I am not sure the sampler outcomes would be identical even if things are seeded, so "match" should mean more that it behaves similarly.
@woodsp-ibm It does! In fact, to be sure, I cloned the main version of this repo and changed only the line I mentioned. Upon running the VQD tests again, they failed in the same way. I haven't performed an extensive diagnosis yet, but it seems that the first eigenvalue is usually found without issue, while the second is absurdly high (like, for getting the eigenvalues of a Z gate, it returns something like […]). It might also have to do with the values for `betas`.
The issue seems to come from SLSQP:

```python
from qiskit.quantum_info import SparsePauliOp
from qiskit.circuit import QuantumCircuit, Parameter
from qiskit.primitives import Sampler, Estimator
from qiskit_algorithms.optimizers import COBYLA as Optimizer
from qiskit_algorithms.state_fidelities import ComputeUncompute
from qiskit_algorithms import VQD

H2_op = SparsePauliOp("Z")

ansatz = QuantumCircuit(1)
ansatz.rx(Parameter(r"$\theta$"), 0)

optimizer = Optimizer()
estimator = Estimator()
sampler = Sampler(options={"shots": 100_000})
fidelity = ComputeUncompute(sampler)

k = 2
betas = [33]

vqd = VQD(estimator, fidelity, ansatz, optimizer, k=k, betas=betas)
result = vqd.compute_eigenvalues(operator=H2_op)
vqd_values = result.eigenvalues

print(vqd_values.real)
```

prints […]. I've performed the tests with a fresh version of […]. Is it OK if I replace SLSQP with COBYLA in these tests?
Pre-existing classical optimizers often will not work well in the presence of noise, such as shot noise (variability in the result). SLSQP computes a local gradient by finite difference, using a small eps with points around the current one; that will be badly affected by any noise, so in going from an ideal statevector outcome to a sampled, noisy one, one needs to be careful - sorry I did not notice. COBYLA is gradient-free and often works ok. Noise is why we created specific optimizers like SPSA to work in such conditions, whether it be sampling/shot noise or other device noise. You may well hit that issue in other cases too, if things can no longer run with an ideal outcome. So it seems fine to switch the test around, seed the Sampler etc. for reproducibility of the samples.
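As an illustration of the noise-tolerant alternatives mentioned (SPSA and COBYLA are both in qiskit_algorithms.optimizers; the shot and seed values are illustrative):

```python
from qiskit.primitives import Sampler
from qiskit_algorithms.optimizers import COBYLA, SPSA

# Gradient-free optimizers that tolerate shot noise better than SLSQP.
optimizer = COBYLA(maxiter=200)  # or: SPSA(maxiter=200)

# Seeded, shot-based V1 reference Sampler for reproducible sampled fidelities.
sampler = Sampler(options={"shots": 100_000, "seed": 42})
```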
What should we add?
The Sampler and Estimator are being updated with new V2 versions that change the interface and will be available from Qiskit 1.0.0. V1 versions will at some point be deprecated and removed. Algorithms should add support for V2 versions and ultimately drop support for V1 once they are removed.
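For concreteness, a hedged sketch of the interface difference using the reference primitives (the circuit, observable, and values are illustrative):

```python
from qiskit.circuit import Parameter, QuantumCircuit
from qiskit.primitives import Estimator, StatevectorEstimator
from qiskit.quantum_info import SparsePauliOp

theta = Parameter("theta")
circuit = QuantumCircuit(1)
circuit.rx(theta, 0)
observable = SparsePauliOp("Z")

# V1: parallel lists of circuits, observables and parameter values.
v1_result = Estimator().run([circuit], [observable], [[0.5]]).result()
print(v1_result.values[0])

# V2: a list of pubs, each (circuit, observables, parameter_values).
v2_result = StatevectorEstimator().run([(circuit, observable, [0.5])]).result()
print(v2_result[0].data.evs)
```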