
MPS (Matrix Product State) giving wrong result for QAOA #1433

Open
amitracal opened this issue Nov 16, 2020 · 23 comments

Comments

@amitracal

Information

  • Qiskit Aqua version:
    'qiskit-terra': '0.16.0',
    'qiskit-aer': '0.7.0',
    'qiskit-ignis': '0.5.0',
    'qiskit-ibmq-provider': '0.11.0',
    'qiskit-aqua': '0.8.0',
    'qiskit': '0.23.0'

  • Python version:
    3.7.6

  • Operating system:
    Windows 10

What is the current behavior?

CPLEX gives an optimized value of -23.5, while MPS produces values above +50 (see the attached notebook).

Steps to reproduce the problem

Run the attached notebook

What is the expected behavior?

The optimized value should be close to -23.5.
RQAOA_MPS.zip

Suggested solutions

@woodsp-ibm
Member

Is this really just #1434 taken further, because with MPS you can get to even more qubits?

At smaller qubit sizes, does the MPS simulator produce a correct result that is in line with the other simulation modes? I would hope it does.

@amitracal
Author

I did some more experiments and wanted to list the results here. Head to head, MPS seems to have more fluctuations and error around 9-10 qubits than the default QASM simulator. Please see the "Final" tab.
Cplex vs QAOA on simulators.xlsx

@amitracal
Author

Please let me know if anything else, such as the code I am using, would be helpful for looking at the problem in either #1433 or #1434.

@yaelbh
Contributor

yaelbh commented Nov 22, 2020

What happens when a simulator reaches timeout? What will a QAOA user see?

@woodsp-ibm
Member

woodsp-ibm commented Nov 22, 2020

@yaelbh If running the circuits on the backend returns a failure, e.g. if it times out, then the user will see a circuit execution error raised from Aqua: https://github.com/Qiskit/qiskit-aqua/blob/858305641429197560da2e31eb89bf362c8e6210/qiskit/aqua/utils/run_circuits.py#L338 (If the backend raises an error internally, that should be surfaced as well.)

@yaelbh
Contributor

yaelbh commented Nov 22, 2020

Thanks @woodsp-ibm. @amitracal, can you please share the code?

@woodsp-ibm
Member

@yaelbh There is a zip with a Jupyter notebook attached at the end of the issue description above, which I believe demonstrates the problem.

@yaelbh
Contributor

yaelbh commented Nov 22, 2020

Yes I know, but I think there is another piece of code for the last spreadsheet.

@amitracal
Author

@yaelbh @woodsp-ibm I am attaching another zip with the code and the latest Excel file, which I have also attached in the newly raised RQAOA issue https://github.com/Qiskit/qiskit-aqua/issues/1453.
Github Issues batch Nov 23 2020.zip

@yaelbh
Contributor

yaelbh commented Nov 24, 2020

Based on the notebooks, I created the following code:

from qiskit.providers.aer import QasmSimulator
from qiskit.optimization import QuadraticProgram
from qiskit.optimization.algorithms import MinimumEigenOptimizer
from qiskit.aqua.algorithms import QAOA
from qiskit.aqua.components.optimizers import SLSQP
from qiskit.aqua import QuantumInstance, aqua_globals

sim = QasmSimulator()
seed = 10598

qubo = QuadraticProgram()
qubo.binary_var('x1')
qubo.binary_var('x2')
qubo.binary_var('x3')
qubo.binary_var('x4')
qubo.binary_var('x5')
qubo.binary_var('x6')
qubo.binary_var('x7')
qubo.binary_var('x8')
qubo.binary_var('x9')

qubo.minimize(linear=[1, -2, 3, -6, 5, 4, 4, 5, 5],
              quadratic={('x1', 'x2'): 1, ('x1', 'x3'): -1, ('x1', 'x4'): 2,
                         ('x2', 'x3'): 1, ('x2', 'x4'): 6, ('x3', 'x5'): 3,
                         ('x4', 'x6'): 3, ('x7', 'x8'): -1, ('x7', 'x9'): -1
                         })

lengths = [1, 2, 5, 6, 7, 8, 9, 10, 20, 30]
for p in lengths:
    print('p =', p)
    seed = seed + 1000
    
    for method in ['statevector', 'matrix_product_state']:
        print('method:', method)
        sim.set_options(method=method)
        aqua_globals.random_seed = seed
        quantum_instance = QuantumInstance(sim, seed_simulator=seed, seed_transpiler=seed)

        slsqp = SLSQP()
        slsqp.set_options(maxiter=500)

        qaoa_mes = QAOA(quantum_instance=quantum_instance, include_custom=True, optimizer=slsqp, p=p)
        qaoa = MinimumEigenOptimizer(qaoa_mes)
        qaoa_result = qaoa.solve(qubo)
        print(qaoa_result)

For this code, the printed results are identical between the simulators, in all the runs.

Questions and comments that will help me proceed with investigation:

  1. I'm using the Aer simulator; not sure if it's exactly the same as the cloud one.
  2. In the Excel file, what is the meaning of multiple rows for the same number of qubits?
  3. In the Excel file, which length is used?
  4. What is the QUBO for 20 qubits? (In the Excel file I see a difference between the simulators; I would like to reproduce it at my end.)
  5. I see that the options for SLSQP are different in the different notebooks.

@amitracal
Author

  1. "I'm using the Aer simulator; not sure if it's exactly the same as the cloud one."
     I do not know. Maybe @woodsp-ibm can help?

  2. "In the Excel file, what is the meaning of multiple rows for the same number of qubits?"
     For some options I ran the QUBO in a loop (especially for QASM and MPS) just to get a better sample, and the runs came up with different values; because these are all for the same number of qubits, they are noted in the Excel file in multiple rows. For the other options the value remains constant across these rows.

  3. "In the Excel file, which length is used?"
     I do not understand the question; length of what?

  4. "What is the QUBO for 20 qubits? (In the Excel file I see a difference between the simulators; I would like to reproduce it at my end.)"
     This is the QUBO below. I have separate notebooks for QASM, MPS and hardware for each qubit count. Also attaching the notebook for 20 qubits.

# create the QUBO (the 20 binary variable declarations were not shown in the original comment)
qubo = QuadraticProgram()
for i in range(1, 21):
    qubo.binary_var(f'x{i}')

qubo.minimize(linear=[1, -2, 3, -6, 5, 4, 4, 5, 5, 5, 6, 6, 0.3, 6, 6, -2, -2, -2, -2, -2],
              quadratic={('x1', 'x2'): 1, ('x1', 'x3'): -1, ('x1', 'x4'): 2,
                         ('x2', 'x3'): 1, ('x2', 'x4'): 6, ('x3', 'x5'): 3,
                         ('x4', 'x6'): 3, ('x7', 'x8'): -1, ('x7', 'x9'): -1, ('x8', 'x10'): -1,
                         ('x11', 'x12'): 3, ('x13', 'x14'): -1, ('x10', 'x13'): -1, ('x12', 'x15'): -1,
                         ('x11', 'x16'): 3, ('x13', 'x17'): -1, ('x18', 'x13'): -1, ('x19', 'x20'): -1
                         })

  5. "I see that the options for SLSQP are different in the different notebooks."
     That must be a mistake; can you please point out where you saw it and I will correct it.

@amitracal
Author

RQAOA20_MPS.zip

@amitracal
Author

@yaelbh @woodsp-ibm I also had an observation today that I think I should mention. I ran a different QUBO, from an actual business problem, through real hardware, MPS, QASM and RQAOA. MPS actually did very similarly to QASM, although both were wrong with respect to CPLEX, the classical solver, which I consider my north star. Let me attach those results and notebooks as well. I only did this for 15 qubits.
Actual Data.zip

# create the QUBO (the QuadraticProgram setup and the 15 binary variable declarations
# were not shown in the original comment)
qubo = QuadraticProgram()
for i in range(1, 16):
    qubo.binary_var(f'x{i}')

qubo.minimize(linear=[137.02211292926253, 92.010710781040601, 21.047319760897697, 105.14998995419403, 60.25426907360287,
                      120.50853714720574, 97.822816757230228, 37.737122848842169, -95.551830754716107, 7.5, -65.810905914017752,
                      84.512119515723512, 30, 30, -2.4489795918367347],
              quadratic={('x1', 'x2'): 60, ('x1', 'x3'): -3.749986726428272, ('x1', 'x11'): -14.999973155646421,
                         ('x1', 'x12'): 3.749986726428272, ('x2', 'x7'): 7.499973750042658,
                         ('x2', 'x8'): 7.499973750042658,
                         ('x3', 'x11'): 59.999931340984269, ('x3', 'x12'): -3.749986726428272,
                         ('x3', 'x14'): -15, ('x4', 'x5'): 15,
                         ('x4', 'x6'): 30, ('x4', 'x7'): 30,
                         ('x4', 'x8'): -30, ('x4', 'x9'): -30, ('x5', 'x6'): 120, ('x5', 'x7'): 30,
                         ('x5', 'x10'): 15, ('x5', 'x12'): -15, ('x5', 'x13'): -30, ('x6', 'x7'): 60,
                         ('x6', 'x10'): 30, ('x6', 'x12'): -30, ('x6', 'x13'): -60,
                         ('x7', 'x8'): 14.999947500085316, ('x8', 'x9'): 60,
                         ('x9', 'x15'): 4.8979591836734695, ('x11', 'x12'): 11 - 14.999973155646421, ('x11', 'x14'): -60})

Here are the results for this QUBO with 15 qubits:
CPLEX: -191.36
QASM with different p values: -96.9, -100.85, -98.65, -110.85, -151.41, -117.80, -91.85, -158.91, -122.87, -110.85
Hardware (paris): -131
MPS (interestingly, MPS did very similarly to QASM): -96.9, -100.85, -98.65, -110.85, -151.41, -117.80
RQAOA: 137.02

@yaelbh
Contributor

yaelbh commented Nov 25, 2020

I'm able to reproduce the results for 20 qubits and see the difference between the simulators. I'm checking it now.

@yaelbh
Contributor

yaelbh commented Nov 29, 2020

Here's an update.

  • Some of the differences arise because, in the QASM notebook, the options in the optimizer's set_options line differ from those in the MPS notebook.
  • The circuits are not transpiled the same way for the two simulators in the stable branch, but I don't see how that could cause differences. In the master branch the transpilation is identical.
  • I have a case (20 qubits, p=20) where the simulators yield different results with the master branch and the same optimizer options. Debugging it, I can see a circuit for which they calculate different expectation values when binding the same set of parameters (a minimal sketch of this kind of side-by-side check is shown after this list). This is where I'm investigating right now. It could be something numerical, but I still need to check.
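
For reference, a minimal sketch of this kind of side-by-side check outside of QAOA: bind one set of parameter values to a circuit and run it under both simulation methods with the same seed. The circuit and the parameter value below are placeholders, not the actual QAOA ansatz from the debug session.

from qiskit import QuantumCircuit, execute
from qiskit.circuit import Parameter
from qiskit.providers.aer import QasmSimulator

# Placeholder parametrized circuit; the real check uses the bound QAOA ansatz.
theta = Parameter('theta')
circ = QuantumCircuit(2)
circ.h([0, 1])
circ.cx(0, 1)
circ.rz(theta, 1)
circ.h([0, 1])
circ.measure_all()

bound = circ.bind_parameters({theta: 0.123})

# Same circuit, same shots, same simulator seed; only the simulation method changes.
for method in ['statevector', 'matrix_product_state']:
    backend = QasmSimulator(method=method)
    counts = execute(bound, backend, shots=1024, seed_simulator=15598).result().get_counts()
    print(method, counts)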

@amitracal
Author

Thank you @yaelbh, please let me know if you need any help

@yaelbh
Contributor

yaelbh commented Dec 1, 2020

I understand now what's going on. The bottom line is a numerical difference at the 10th decimal place, which propagates and completely changes the flow of QAOA. Note that this implies that QAOA's sensitivity is something to be improved.

This is the instance where we see differences:

from time import time
from qiskit import transpile
from qiskit.providers.aer import QasmSimulator
from qiskit.optimization import QuadraticProgram
from qiskit.optimization.algorithms import MinimumEigenOptimizer
from qiskit.aqua.algorithms import QAOA
from qiskit.aqua.components.optimizers import SLSQP
from qiskit.aqua import QuantumInstance, aqua_globals
seed = 15598

# Minimal no-op callback; it is referenced in the QAOA constructor below but
# its definition was not included in the original comment.
def store_intermediate_result(eval_count, parameters, mean, std):
    pass

qubo = QuadraticProgram()
qubo.binary_var('x1')
qubo.binary_var('x2')
qubo.binary_var('x3')
qubo.binary_var('x4')
qubo.binary_var('x5')
qubo.binary_var('x6')
qubo.binary_var('x7')
qubo.binary_var('x8')
qubo.binary_var('x9')
qubo.binary_var('x10')
qubo.binary_var('x11')
qubo.binary_var('x12')
qubo.binary_var('x13')
qubo.binary_var('x14')
qubo.binary_var('x15')
qubo.binary_var('x16')
qubo.binary_var('x17')
qubo.binary_var('x18')
qubo.binary_var('x19')
qubo.binary_var('x20')
qubo.minimize(linear=[1, -2, 3, -6, 5, 4, 4, 5, 5, 5, 6, 6, 0.3, 6, 6, -2, -2, -2, -2, -2],
              quadratic={('x1', 'x2'): 1, ('x1', 'x3'): -1, ('x1', 'x4'): 2,
                         ('x2', 'x3'): 1, ('x2', 'x4'): 6, ('x3', 'x5'): 3,
                         ('x4', 'x6'): 3, ('x7', 'x8'): -1, ('x7', 'x9'): -1, ('x8', 'x10'): -1,
                         ('x11', 'x12'): 3, ('x13', 'x14'): -1, ('x10', 'x13'): -1, ('x12', 'x15'): -1,
                         ('x11', 'x16'): 3, ('x13', 'x17'): -1, ('x18', 'x13'): -1, ('x19', 'x20'): -1
                         })

for method in ['statevector', 'matrix_product_state']:
    print('method:', method)
    sim = QasmSimulator(method=method)
    slsqp = SLSQP(maxiter=1)
    aqua_globals.random_seed = seed
    quantum_instance = QuantumInstance(sim, seed_simulator=seed, seed_transpiler=seed)
    qaoa_instance = QAOA(quantum_instance=quantum_instance, include_custom=True, optimizer=slsqp, p=20,
                         callback=store_intermediate_result)
    qaoa = MinimumEigenOptimizer(qaoa_instance)
    qaoa_result = qaoa.solve(qubo)
    print(qaoa_result)

Note that I work with the master branches of all the repositories.
What is happening:

  • I'm not familiar with the SLSQP optimizer, but I can see in my debug prints that it starts by running the parametrized circuit 42 times, each time binding a different set of parameter values. Each time consists of 1024 shots, and an average is performed over them. In the end, we obtain 42 averages.
  • Out of the 42 averages, the simulators agree on 41 averages, but disagree on one.

Now I need to explain why this disagreement occurs and what its consequences are.

Why it occurs:

  • The simulators disagree on the average when binding the 13th set of parameter values. In fact, out of 1024 shots, they agree on the measurement result of 1023 shots, but there is a single shot - shot no. 1015 - where the measurement differs: the statevector simulator measures 00101000010011101111, while the MPS simulator measures 00101000010011101110.
  • Simulation of measurement is done by splitting the segment [0, 1] (real numbers from 0 to 1) into 2^20 bins, one bin for each possible measurement result. A bin's size is equal to the probability of the corresponding measurement result. A number is then drawn uniformly at random in [0, 1], and the measurement result is the bin in which the number falls.
  • The statevector simulator calculates that the boundary between bins 165102 and 165103 is at 0.207915411592. The MPS simulator calculates the boundary to be at 0.207915411612. The random number is 0.207915411597. So we end up in different bins, which explains the different measurement results (a minimal sketch of this binning mechanism is shown below the list).
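
A minimal sketch of this binning mechanism, using only the two boundary values and the random number quoted above (the surrounding bin layout is filler for illustration):

import numpy as np

def sampled_bin(boundaries, r):
    # The measurement outcome is the bin whose upper boundary is the first to exceed r.
    return int(np.searchsorted(boundaries, r, side='right'))

r = 0.207915411597  # the uniformly drawn number for shot no. 1015

# Only the boundary between bins 165102 and 165103 differs between the simulators;
# the trailing 1.0 simply closes the last bin.
for name, boundary in [('statevector', 0.207915411592),
                       ('matrix_product_state', 0.207915411612)]:
    print(name, '-> bin offset', sampled_bin([boundary, 1.0], r))
# The random number lies just above the statevector boundary and just below the MPS one,
# so the two simulators place this shot in adjacent bins, i.e. different outcomes.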

Consequences: for the statevector simulator, average no. 13 is the maximum over the 42 averages. For the MPS simulator, the maximum lies elsewhere. This seems to drastically affect the optimizer's subsequent choices.

I wonder what can be done to make QAOA less sensitive to 1 shot out of 42*1024 shots. Maybe increase the number of shots? The story here is not really about the type of simulator; we learn that a different randomization with the same simulator, or with a real device, can yield a very different QAOA result. I guess this could be seen by running only the statevector simulator several times, each time with a different random seed (a sketch of such an experiment is below).
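
For example, a sketch of that seed experiment (it assumes the 20-variable qubo and the setup from the code above; the list of seeds is arbitrary):

from qiskit.providers.aer import QasmSimulator
from qiskit.optimization.algorithms import MinimumEigenOptimizer
from qiskit.aqua.algorithms import QAOA
from qiskit.aqua.components.optimizers import SLSQP
from qiskit.aqua import QuantumInstance, aqua_globals

sim = QasmSimulator(method='statevector')

# Same QUBO, same simulation method; only the random seeds change between runs.
for seed in [15598, 25598, 35598, 45598]:
    aqua_globals.random_seed = seed
    quantum_instance = QuantumInstance(sim, seed_simulator=seed, seed_transpiler=seed)
    qaoa_mes = QAOA(quantum_instance=quantum_instance, include_custom=True,
                    optimizer=SLSQP(maxiter=1), p=20)
    result = MinimumEigenOptimizer(qaoa_mes).solve(qubo)
    print('seed', seed, '->', result.fval)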

@amitracal
Author

@yaelbh Because of other priorities, I can start working on this on Thursday. It's great that you found the root cause; please let me know what changes you want me to make.

@yaelbh
Contributor

yaelbh commented Dec 2, 2020

I think it's best to consult with someone from Aqua about the best way to use QAOA (for example, what is the recommended number of shots?). Also, following the discussion in #1463, it may help to stop fixing the simulator seed (i.e., remove the seed_simulator parameter from QuantumInstance), and maybe also the transpiler seed; see the sketch below.
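
A minimal sketch of that change, assuming the sim backend from the snippets above; without fixed seeds, repeated runs draw fresh shots, so a single unlucky shot is less likely to dominate:

from qiskit.aqua import QuantumInstance

# No seed_simulator (and no seed_transpiler): each run uses fresh randomization.
quantum_instance = QuantumInstance(sim)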

@woodsp-ibm
Copy link
Member

SLSQP is a gradient-based optimizer and by default it uses a finite difference where eps (the epsilon distance from the current point to surrounding points) is very small. I can imagine that small perturbations here (i.e. sampling noise) can have quite an impact. Normally, in such 'noisy' environments we suggest using an optimizer that is designed to work in the presence of noise, such as SPSA (a sketch of that swap is below). In this case though, since include_custom is true, is the outcome not supposed to be ideally the same as using statevector?
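
For reference, a sketch of swapping SPSA in for SLSQP under the same setup (default SPSA settings, not a tuned recommendation; quantum_instance is assumed from the earlier snippets):

from qiskit.aqua.components.optimizers import SPSA
from qiskit.aqua.algorithms import QAOA

# SPSA estimates the gradient stochastically, so it tolerates sampling noise better
# than SLSQP's very small finite-difference steps.
spsa = SPSA()
qaoa_mes = QAOA(quantum_instance=quantum_instance, include_custom=True,
                optimizer=spsa, p=20)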

@yaelbh
Copy link
Contributor

yaelbh commented Dec 2, 2020

Although include_custom is True, when I debug I see a circuit without snapshots, executed 1024 times.

@woodsp-ibm
Copy link
Member

woodsp-ibm commented Dec 2, 2020

Hmmm, I wonder if the check for the Aer qasm simulator is not returning correctly when the QasmSimulator is given directly like that with the MPS method. To include snapshots, the ExpectationFactory would need to select the AerPauliExpectation. Perhaps you would like to set the expectation manually and see if that works as expected: on the QAOA constructor add expectation=AerPauliExpectation(), the latter being imported from qiskit.aqua.operators (a sketch is below). The include_custom flag will then be ignored, since it only controls the ExpectationFactory outcome when QAOA needs to auto-select an expectation object.
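
A sketch of that suggestion, assuming the quantum_instance and slsqp objects from the earlier snippets in this thread:

from qiskit.aqua.operators import AerPauliExpectation
from qiskit.aqua.algorithms import QAOA

# Pass the expectation explicitly so the snapshot-based AerPauliExpectation is used,
# rather than relying on the ExpectationFactory to auto-select it.
qaoa_mes = QAOA(quantum_instance=quantum_instance,
                expectation=AerPauliExpectation(),
                optimizer=slsqp, p=20)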

@amitracal
Copy link
Author

I ran one QUBO with 100 variables from a real example through QAOA with MPS. It ran for 4 days without errors but produced wrong results. Attaching it together with the results from CPLEX and MPS QAOA.
RQAOA100_MPS - Actual data.zip
