Support V2 primitives #136

Open · woodsp-ibm opened this issue Jan 19, 2024 · 22 comments · May be fixed by #197

woodsp-ibm (Member) commented Jan 19, 2024

What should we add?

The Sampler and Estimator are being updated with new V2 versions that change the interface and will be available from Qiskit 1.0.0. V1 versions will at some point be deprecated and removed. Algorithms should add support for V2 versions and ultimately drop support for V1 once they are removed.
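For context, a rough sketch of the interface difference between the V1 and V2 reference samplers (the circuit here is only an illustration, not code from this repo):

from qiskit import QuantumCircuit
from qiskit.primitives import Sampler, StatevectorSampler  # V1 reference, V2 reference

qc = QuantumCircuit(1)
qc.h(0)
qc.measure_all()

# V1: run(circuits); the result carries quasi-probability distributions
quasi_dist = Sampler().run([qc]).result().quasi_dists[0]

# V2: run(pubs); the result carries per-pub data bins with counts
counts = StatevectorSampler(seed=0).run([qc], shots=1024).result()[0].data.meas.get_counts()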

tnemoz (Contributor) commented Jul 29, 2024

Is it possible to submit some small PRs to help support the V2 primitives? For instance, would a PR allowing VQE to support both V1 and V2 be useful? Or is it better to wait for qiskit-community/qiskit-machine-learning#817 to be merged? Or, another possibility, is this something you're already working on, so such a PR would not help much?

woodsp-ibm (Member, Author) commented:

@tnemoz There is no work being done by the present maintainers (IBM) on V2 (or ISA, #164) changes for the algorithms. See the disclaimer in the readme:

Qiskit Algorithms is no longer officially supported by IBM.

Like any other Apache 2 licensed code, you are free to use it or/and extend it, but please be aware that it is under your own risk.

In the PR you linked, the Qiskit Machine Learning maintainers have copied over just the parts of the algorithms that are needed/used in ML, so that they can maintain and support them there independently going forward. Qiskit Optimization has a PR that does likewise: qiskit-community/qiskit-optimization#623

I do not think either will change VQE though, as neither uses it. There is a fair amount to be changed here overall, if algorithms were to be kept as it is and maintained going forward.

LenaPer commented Aug 5, 2024

Would it be possible to be assigned to this issue along with @tnemoz? We'd like to give it a try.

tnemoz (Contributor) commented Aug 5, 2024

The new BaseSamplerV2 and BaseEstimatorV2 from Qiskit don't have an options attribute like the V1 versions used to, while the BasePrimitiveV2 from qiskit-ibm-runtime does.

Is it worth adding qiskit-ibm-runtime as a dependency and checking with isinstance whether setting these options is actually possible? Something along the lines of:

import qiskit_ibm_runtime

if options is not None:
    # Only the runtime V2 primitives expose options; the reference ones do not
    if not isinstance(estimator, qiskit_ibm_runtime.EstimatorV2):
        raise ValueError("options is only supported for a runtime Estimator")

This problem arises, for instance, when migrating ComputeUncompute to a V2 Sampler.

woodsp-ibm (Member, Author) commented Aug 6, 2024

@LenaPer @tnemoz I assigned the issue to you as requested. Now, clearly, to support runtime this needs the ISA change as well. Very shortly the V1 primitives will be removed from runtime (they have been deprecated there for a while), and those remaining in Qiskit (the reference ones that run classically) will be deprecated. As noted above, Qiskit Machine Learning has copied over what it used from here, and Qiskit Optimization likewise; ML has yet to do anything about ISA and V2, while Opt does have a draft PR addressing things.

If you go ahead and look at things, perhaps in more depth than you have so far, and decide you still want to move forward, please outline what and how you want to change so we can have a discussion up front before really doing too much.

Regarding the runtime comment above: in places we do have optional-install checks, so the check for runtime could be done conditionally, based on whether it is installed, rather than making it a hard dependency.
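For illustration, a minimal sketch of such an optional-install check (the flag and helper names here are just examples, not code from this repo):

try:
    from qiskit_ibm_runtime import EstimatorV2 as RuntimeEstimatorV2

    _HAS_RUNTIME = True
except ImportError:
    _HAS_RUNTIME = False


def supports_runtime_options(estimator) -> bool:
    # True only when qiskit-ibm-runtime is installed and the estimator is its V2 class
    return _HAS_RUNTIME and isinstance(estimator, RuntimeEstimatorV2)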

tnemoz (Contributor) commented Aug 8, 2024

If you go ahead and look at things, perhaps in more depth than you have so far, and decide you still want to move forward, please outline what and how you want to change so we can have a discussion up front before really doing too much.

The goal would be to remove support for the V1 primitives, since those are going to be deprecated, and to add ISA support, in a similar fashion to what's being done in the qiskit-optimization PR. Eventually, I think the goal would be to be able to use every algorithm here with the Runtime and Aer primitives (as noted above, the Qiskit reference ones have a somewhat different interface than these two) with the least possible friction when moving from one to the other.

If that's OK with you, we'll open a draft PR shortly that addresses only a small part of the issue (probably the minimum eigensolvers or phase estimation), just so that we can discuss, with actual examples, whether things are moving in the right direction in your view. That would avoid a painful review where the same mistake is repeated throughout all the files.

woodsp-ibm (Member, Author) commented:

Sounds good. Supporting only V2 will be a breaking change with respect to code using the Qiskit or Aer V1 primitives that still works, but since things are already broken with Runtime, where V1 has been removed, and V1 is about to be deprecated in Qiskit, focusing just on V2 seems fine.

I am not sure what interface differences exist, not having looked at them in detail for a while. The way the algorithms were done with V1, construction differences did not matter: the primitives are built by the user and passed in, so as long as run behaved the same, things worked. I think the run options could vary with V1, but since options are implementation dependent (though shots was commonly available), the algorithms would not, in general, alter them, leaving them as set during construction.

I had put some thoughts into #164 (the ISA one). Doing it similarly to how Opt did things is OK - there the output of the new Sampler was converted back to probabilities to stay compatible with how the algorithms processed V1 results. I had thought that for V2 Sampler usage maybe things should go over to counts - especially if that is reflected in any result, as it would be more in line with what is expected now.
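As a rough illustration of the two options with the reference V2 sampler (example circuit only, not code from the repo):

from qiskit import QuantumCircuit
from qiskit.primitives import StatevectorSampler

qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.measure_all()

result = StatevectorSampler(seed=42).run([qc], shots=1024).result()
counts = result[0].data.meas.get_counts()                   # V2-native counts
shots = sum(counts.values())
probabilities = {b: c / shots for b, c in counts.items()}   # V1-style probabilities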

Sure, doing a draft PR and starting with a chosen subset seems great. You might want to find both an Estimator and a Sampler example, to see how it looks across both when used here.

tnemoz (Contributor) commented Aug 9, 2024

@woodsp-ibm The Draft PR is #197. Let us know what you think!

You might want to find both an Estimator and a Sampler example, to see how it looks across both when used here.

I might have overlooked this 😅

tnemoz (Contributor) commented Aug 15, 2024

While adapting the code for Grover, I noticed that some tests would randomly fail, even though I set a seed for the StatevectorSampler. In particular, the test that fails is this one:

    @idata(itertools.product(["shots"], [[1, 2, 3], None, 2]))
    @unpack
    def test_iterations_with_good_state_sample_from_iterations(self, use_sampler, iterations):
        """Test the algorithm with different iteration types and with good state"""
        grover = self._prepare_grover(use_sampler, iterations, sample_from_iterations=True)
        problem = AmplificationProblem(Statevector.from_label("111"), is_good_state=["111"])
        result = grover.amplify(problem)
        self.assertEqual(result.top_measurement, "111")

The problem comes from these lines in grover.py:

            if self._sample_from_iterations:
                power = algorithm_globals.random.integers(power)

During the test, power is set to 2, which means that this line samples a new power that is either 0 or 1. If it samples 1, everything works fine; but if it samples 0, no oracle is added to the circuit during the single iteration of the algorithm, which results in a uniform distribution over the possible bitstrings.

With the V1 Sampler, it turns out that 111 is still the most sampled bitstring, but unless I'm missing something, that is not something relevant to test: we're supposed to get the uniform distribution if we don't add an oracle. It just so happens that with the seed 123, 111 is the most sampled outcome, while with the seed 42, for instance, the test fails.
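To illustrate the flakiness, a toy check with numpy (not from the test suite):

import numpy as np

# With power == 0 no Grover operator is applied, so the shots are uniform over
# the 2**3 bitstrings and "111" comes out on top only by chance.
rng = np.random.default_rng(42)
samples = rng.integers(0, 8, size=1024)
counts = np.bincount(samples, minlength=8)
top_measurement = format(int(np.argmax(counts)), "03b")
print(top_measurement)  # often not "111", so the assertion is seed-dependent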

What would be the best way to deal with this? I must admit that I don't really understand the point of this option, so I'm unsure what the best course of action is here. Does it make sense to use this option when performing a single iteration with a given power?

Note that the test would (probably) pass if we sampled a power in (0, power) instead of [0, power), since that would ensure that we make at least some progress in the right direction. But again, since I don't really understand this option in the first place, I don't know whether that would make sense.

woodsp-ibm (Member, Author) commented Aug 15, 2024

@Cryoris would you have any response to the prior comment above? I think you had done refactoring at some point in Grover and may have more insight.

woodsp-ibm (Member, Author) commented:

I will just add this here as a further change around the primitives: the utility method init_observable is now deprecated in the newly released Qiskit 1.2.0, so this ideally needs to be addressed too. Searching this repo, it is used in a few places; this message comes from one of them:

/home/runner/work/qiskit-algorithms/qiskit-algorithms/qiskit_algorithms/gradients/lin_comb/lin_comb_estimator_gradient.py:144: 
DeprecationWarning: The function ``qiskit.primitives.utils.init_observable()`` is deprecated as of qiskit 1.2. It will be removed no 
earlier than 3 months after the release date. Use the constructor of ``SparsePauliOp`` instead.
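As the warning suggests, the replacement is to construct the SparsePauliOp directly; a minimal sketch:

from qiskit.quantum_info import SparsePauliOp

# previously: from qiskit.primitives.utils import init_observable; init_observable("ZZ")
observable = SparsePauliOp("ZZ")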

tnemoz (Contributor) commented Aug 30, 2024

@woodsp-ibm A question concerning the use of options. I'll take the example of ComputeUncompute, but this can also be seen in other files for the Estimator, like the gradients one.

There are a lot of lines that are used to update the local options and to decide which options to use. But as far as I can tell, these options are only ever used in the Sampler's run method. The thing is, the V2 Sampler now only supports the number of shots in the run method, so it seems a bit overkill to support arbitrary dictionaries.

We could leave it like this in case additional options are used, but the problem is that some lines now break with the V2 primitives. For instance, this line:

        opts = copy(self._sampler.options)

This would work for a SamplerV2 from qiskit-ibm-runtime, but not for a StatevectorSampler from qiskit. The fact that the APIs of these two objects differ makes it quite unclear how to deal with this.
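A quick check of that difference (illustrative only):

from qiskit.primitives import StatevectorSampler

print(hasattr(StatevectorSampler(), "options"))  # False: no V1-style options attribute
# qiskit_ibm_runtime.SamplerV2, by contrast, does expose .options (e.g. default_shots),
# so code that unconditionally copies sampler.options only works for the runtime primitive.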

I initially wanted to simply replace every mention of options by a single int representing the number of shots, but there's one thing I'm unsure about. The local options are passed to the AlgorithmJob constructor here:

        return AlgorithmJob(ComputeUncompute._call, sampler_job, circuits, self._local, local_opts)

And I'm unsure why, which means I'm also unsure how to replace it, especially since it seems to be tested:

            self.assertEqual(result.options.__dict__, {"shots": 1024})

What would be the best course of action here?

Supporting only the number of shots for the options also considerably simplifies the code.

If it helps the discussion, I can update the draft PR with these changes to better show what I'm talking about.

woodsp-ibm (Member, Author) commented Aug 30, 2024

Options was part of the primitive base class for the V1 versions. We passed it through so you could run things without creating a new primitive instance with new default options, but could instead re-use it and change options, like the other primitives of the time did. If shots is all that is supported generically now, your suggestion to limit things to that makes sense. (I think it's precision for the Estimator, right?)
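A small sketch of those two generic knobs with the reference V2 primitives (illustrative only):

from qiskit.circuit import QuantumCircuit
from qiskit.primitives import StatevectorEstimator, StatevectorSampler
from qiskit.quantum_info import SparsePauliOp

qc = QuantumCircuit(1)
qc.h(0)

# Estimator: precision is the generic per-run option
est_result = StatevectorEstimator().run([(qc, SparsePauliOp("Z"))], precision=0.01).result()
print(est_result[0].data.evs)

# Sampler: shots is the generic per-run option (the circuit needs measurements)
qc_meas = qc.copy()
qc_meas.measure_all()
samp_result = StatevectorSampler(seed=7).run([qc_meas], shots=4096).result()
print(samp_result[0].data.meas.get_counts())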

Why was options passed to the AlgorithmJob... I imagine we did that for compatibility with the other primitives and what they did, e.g. here in the original Qiskit Sampler: https://github.com/Qiskit/qiskit/blob/8e5fab6c7176edcb134e9b00d141efa32f2fe3db/qiskit/primitives/sampler.py#L145. The creation of the job in the newer statevector_sampler looks different, and maybe that should now be the model/reference, so that things are done compatibly.

Does that help at all?

tnemoz (Contributor) commented Sep 2, 2024

Does that help at all?

It definitely does, thanks!

tnemoz (Contributor) commented Sep 3, 2024

@woodsp-ibm I have two questions:

  • What would be the best way to test the insertion of the transpiler in the algorithms? On the one hand, I want to test it in every existing test, but on the other hand, this increases the time taken to run the tests (and makes the whole thing look ugly).
  • I'm not sure I understand the purpose of these lines of test:
        with self.subTest("check final accuracy"):
            self.assertAlmostEqual(result.fun[0], 0.473, places=3)

When I changed ComputeUncompute to support BaseSamplerV2, this test broke, even though all the tests for ComputeUncompute pass (though I must say that I had to reduce the expected precision to 1e-2 in those tests). Furthermore, I'm consistently hitting 0.298... instead of the expected 0.473... What bothers me is that I don't see where the discrepancy could come from. Note that I changed the initial seed that was set, to be sure that it's not just random luck that makes the test pass.

Also, when I increase maxiter to higher values like 10 or 100, I seem to get closer to a negative value, which would indicate that the value in the test is nothing special and is clearly seed-dependent. But even when I change the seed, I end up around 0.3, far from the expected value. Do you have any idea of what could be going on? Or would you have a simple case with which I could test whether I'm getting the right results?

woodsp-ibm (Member, Author) commented:

What would be the best way to test the insertion of the transpiler in the algorithm

The goal of unit testing is coverage, to make sure the code works as expected - all paths are tested so it covers all the code. In my mind you could limit the transpilation to just certain test(s), to ensure things work as expected when it is specified, and otherwise just use the reference primitives, where transpilation is not needed.

I'm not sure I understand the purpose of these lines of test

What is unclear - the subTest bit? That unit test has a couple of asserts in it, right? Without subTest, should the first assert fail it would fail the whole test and the next assert would never run. The asserts may be checking independent aspects of the result, where if the first fails the latter may still pass; without the subTest you would never know, as that assert would not run until the first was fixed. Having the result of both may give you insight into what might be failing. It of course depends on the code it's testing, etc.

Or were you just asking why it checks that value specifically? That I do not know; maybe it was just the outcome with things working as expected. Do note that the V1 reference primitives produced an ideal result by default, and I believe the new StatevectorEstimator does too. The V2 StatevectorSampler, though, now does 1024 shots by default, I believe - did you try increasing that value (default_shots) when you create the new V2 Sampler used for the fidelity computation? I do not believe there is a way to get the ideal distribution back, only to approach it with larger shot counts. The StatevectorSampler also has a seed for the sampling that you might like to set, so the samples are predictable too.
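For example, a minimal sketch of that configuration:

from qiskit.primitives import StatevectorSampler

# More default shots to reduce sampling noise, and a fixed seed so the samples are reproducible
sampler = StatevectorSampler(default_shots=100_000, seed=42)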

tnemoz (Contributor) commented Sep 10, 2024

Or were you just asking why it checks that value specifically? That I do not know; maybe it was just the outcome with things working as expected. Do note that the V1 reference primitives produced an ideal result by default, and I believe the new StatevectorEstimator does too. The V2 StatevectorSampler, though, now does 1024 shots by default, I believe - did you try increasing that value (default_shots) when you create the new V2 Sampler used for the fidelity computation? I do not believe there is a way to get the ideal distribution back, only to approach it with larger shot counts. The StatevectorSampler also has a seed for the sampling that you might like to set, so the samples are predictable too.

That's on me; carelessly modifying seeds isn't a great strategy, it seems, especially when the seed is used to build a circuit 🙃

The goal of unit testing is coverage, to make sure the code works as expected - all paths are tested so it covers all the code. In my mind you could limit the transpilation to just certain test(s), to ensure things work as expected when it is specified, and otherwise just use the reference primitives, where transpilation is not needed.

Ok, I'll modify accordingly. Thanks!

tnemoz (Contributor) commented Sep 11, 2024

@woodsp-ibm There seems to be a problem with the tests for the VQD algorithm. Though they pass with an exact Sampler that returns the ideal probability distributions, almost all of them fail when replacing this line:

        self.fidelity = ComputeUncompute(Sampler())

by:

        self.fidelity = ComputeUncompute(Sampler(options={"shots": 100_000}))

This poses two problems:

  1. Since the BaseSamplerV2 implementations don't offer the possibility of exact computation, it's not possible to make the tests pass with them in the code's current state.
  2. It means that either the tests are wrong, the code for VQD is wrong, or VQD as a whole is basically unusable in any real-world use case.

In the context of this PR, what would be the best course of action?

As a side note, one thing I don't understand is that this remains true even for a high number of shots (like 1_000_000). That should give results close enough to the actual probability distribution for the optimizer to work properly, so the fact that it doesn't would seem to indicate a problem in the VQD code? Though I'm not at all sure about that.

woodsp-ibm (Member, Author) commented:

What might be worth trying is VQD with the V1 Sampler set to use shots, so it does not give an exact outcome, and seeing whether that "matches" what you are seeing with the V2 one. I say "match" because, since it samples, I am not sure the outcome would be identical even if things are seeded, so "match" should perhaps rather mean "behaves similarly".

tnemoz (Contributor) commented Sep 17, 2024

@woodsp-ibm It does! In fact, to be sure, I cloned the main version of this repo and changed only the line I mentioned. Upon running the VQD tests again, they failed in the same way.

I haven't performed an extensive diagnosis yet, but it seems that the first eigenvalue is usually found without issue, while the second is absurdly high (for instance, for the eigenvalues of a Z operator it returns something like [-0.999723, 32.5689]).

It might also have to do with the values of betas, which seem quite high, but since I haven't read the VQD paper yet, I don't know their purpose.

tnemoz (Contributor) commented Sep 17, 2024

The issue seems to come from SLSQP. The following code:

from qiskit.quantum_info import SparsePauliOp
from qiskit.circuit import QuantumCircuit, Parameter
from qiskit.primitives import Sampler, Estimator

from qiskit_algorithms.optimizers import COBYLA as Optimizer
from qiskit_algorithms.state_fidelities import ComputeUncompute
from qiskit_algorithms import VQD

H2_op = SparsePauliOp("Z")

ansatz = QuantumCircuit(1)
ansatz.rx(Parameter(r"$\theta$"), 0)
optimizer = Optimizer()

estimator = Estimator()
sampler = Sampler(options={"shots": 100_000})
fidelity = ComputeUncompute(sampler)

k = 2
betas = [33]

vqd = VQD(estimator, fidelity, ansatz, optimizer, k=k, betas=betas)
result = vqd.compute_eigenvalues(operator=H2_op)
vqd_values = result.eigenvalues
print(vqd_values.real)

prints [-1. 1.00027877] (up to small variations, which a seed would fix), while replacing COBYLA with SLSQP prints [-0.99999985 28.06863812].

I've performed the tests with a fresh version of qiskit-algorithms. Using SLSQP but removing the shots option in the Sampler yields the correct value.

Is it OK if I replace SLSQP with COBYLA in the tests for now and open an issue about this?

woodsp-ibm (Member, Author) commented Sep 17, 2024

Pre-existing classical optimizers often will not work well in the presence of noise, such as shot noise (variability in the result). SLSQP computes gradients by finite differences, evaluating points a small eps around the current one; that is badly affected by any noise, so in going from an ideal statevector outcome to a sampled, noisy one you need to be careful - sorry I did not notice. COBYLA is gradient-free and often works OK. Noise is why we created specific optimizers, like SPSA, to work in such conditions, whether it be sampling/shot noise or other device noise. You may well hit this issue in other cases too, if things can no longer run with an ideal outcome. So it seems fine to switch the test around, and to seed the Sampler etc. for reproducibility of the samples.
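A minimal sketch of that kind of adjustment (assuming the V2-enabled ComputeUncompute from the draft PR; the shot count and seed are just illustrative values):

from qiskit.primitives import StatevectorSampler
from qiskit_algorithms.optimizers import COBYLA  # gradient-free; SPSA is another noise-tolerant choice
from qiskit_algorithms.state_fidelities import ComputeUncompute

# Seeded V2 sampler so the sampled fidelity values are reproducible in tests
sampler = StatevectorSampler(default_shots=100_000, seed=42)
fidelity = ComputeUncompute(sampler)  # assumes the V2 support being added in #197
optimizer = COBYLA(maxiter=100)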
