Observable Estimation

There are many cases in Forest Benchmarking, and QCVV broadly, where we are interested in estimating the expectations of some set of Pauli observables at the end of some circuit (aka pyquil.Program). Additionally, we may be interested in running the same circuit after preparation of different starting states. The observable_estimation.py module is designed to facilitate such experiments. In it you will find

  • the ObservablesExperiment class, the high-level structure which houses the several ExperimentSettings you wish to run around its core (unchanging) program
  • the ExperimentSetting dataclass, which specifies the in_state to prepare and the observable to measure for a particular experimental run. The in_state is represented by a TensorProductState while the observable is a pyquil.PauliTerm
  • the function estimate_observables() which packages the steps to compose a list of pyquil.Programs comprising an ObservablesExperiment, run each program on the provided quantum computer, and return the ExperimentResults.
  • the ExperimentResult dataclass that records the mean and associated std_err that was estimated for an ExperimentSetting, along with some other metadata.

In many cases this core functionality allows you to easily specify and run experiments to get the observable expectations of interest without explicitly dealing with shot data, state and measurement preparation programs, qc memory addressing, etc.

Later in this notebook we will discuss additional functionality that enables grouping compatible settings to run in parallel and ‘calibrating’ observable estimates, but for now let’s dive into an example to build familiarity.

The procedure which likely leverages this abstraction most fruitfully is quantum process tomography. If you aren’t familiar with tomography at all, check out this notebook for some background. The goal is to experimentally determine/characterize some unknown process running on our quantum computer (QC). Typically, we have some ideal operation in mind specified by a pyquil.Program that we want to run on the QC. Running process tomography will help us understand what the actual experimental implementation of this process is on our noisy QC. Since process tomography involves running our process on many different start states and measuring several different observables, an ObservablesExperiment should considerably facilitate implementation of the procedure.

Create the experiment

First, we need to specify our process. For simplicity say we are interested in the QC’s implementation of a bit flip on qubit 0.

from pyquil import Program
from pyquil.gates import X
process = Program(X(0)) # bit flip on qubit 0

Now we need to specify our various ExperimentSettings. For now I’ll make use of a helper in forest.benchmarking.tomography.py that does this for us.

from forest.benchmarking.tomography import _pauli_process_tomo_settings
tomo_expt_settings = list(_pauli_process_tomo_settings(qubits = [0]))
[ExperimentSetting[X+_0→(1+0j)*X0], ExperimentSetting[X+_0→(1+0j)*Y0], ExperimentSetting[X+_0→(1+0j)*Z0], ExperimentSetting[X-_0→(1+0j)*X0], ExperimentSetting[X-_0→(1+0j)*Y0], ExperimentSetting[X-_0→(1+0j)*Z0], ExperimentSetting[Y+_0→(1+0j)*X0], ExperimentSetting[Y+_0→(1+0j)*Y0], ExperimentSetting[Y+_0→(1+0j)*Z0], ExperimentSetting[Y-_0→(1+0j)*X0], ExperimentSetting[Y-_0→(1+0j)*Y0], ExperimentSetting[Y-_0→(1+0j)*Z0], ExperimentSetting[Z+_0→(1+0j)*X0], ExperimentSetting[Z+_0→(1+0j)*Y0], ExperimentSetting[Z+_0→(1+0j)*Z0], ExperimentSetting[Z-_0→(1+0j)*X0], ExperimentSetting[Z-_0→(1+0j)*Y0], ExperimentSetting[Z-_0→(1+0j)*Z0]]

The first element of this list is displayed as ExperimentSetting[X+_0→(1+0j)*X0]. The symbols inside ExperimentSetting encode information about the preparation and measurement. They should be interpreted as "prepare"→ "measure".

So the encoding of this particular setting preparation, X+_0, tells us that this setting will prepare the + eigenstate of X, i.e. the state \(|+\rangle = \frac{1}{\sqrt 2}\left( |0\rangle + |1 \rangle\right)\), on qubit 0. Furthermore, after the program is run on this state, we will measure the observable \(X\) on qubit 0. (1+0j) is just a multiplicative coefficient, in this case \(1\). This coefficient will automatically be factored into the estimates of the expectation and std_err for the corresponding observable output by estimate_observables.

Now we need to pair up our settings with our process program for the complete experiment description.

from forest.benchmarking.observable_estimation import ObservablesExperiment
tomography_experiment = ObservablesExperiment(settings=tomo_expt_settings, program=process)
X 0
0: X+_0→(1+0j)*X0
1: X+_0→(1+0j)*Y0
2: X+_0→(1+0j)*Z0
3: X-_0→(1+0j)*X0
4: X-_0→(1+0j)*Y0
5: X-_0→(1+0j)*Z0
6: Y+_0→(1+0j)*X0
7: Y+_0→(1+0j)*Y0
8: Y+_0→(1+0j)*Z0
9: Y-_0→(1+0j)*X0
10: Y-_0→(1+0j)*Y0
11: Y-_0→(1+0j)*Z0
12: Z+_0→(1+0j)*X0
13: Z+_0→(1+0j)*Y0
14: Z+_0→(1+0j)*Z0
15: Z-_0→(1+0j)*X0
16: Z-_0→(1+0j)*Y0
17: Z-_0→(1+0j)*Z0

The first line is our process program, i.e. tomography_experiment.program, and each subsequent indexed line is one of our settings. Now we can proceed to get estimates of expectations for the observable in each setting (given the preparation, of course).

Run the experiment

We’ll need to get a quantum computer object to run our experiment on. We’ll get both an ideal Quantum Virtual Machine (QVM) which simulates our program/circuit without any noise, as well as a noisy QVM with a built-in default noise model.

from pyquil import get_qc
ideal_qc = get_qc('2q-qvm', noisy=False)
noisy_qc = get_qc('2q-qvm', noisy=True)

Let’s predict some of the outcomes when we simulate our program on ideal_qc so that our process X(0) is implemented perfectly without noise.

First take setting 0: X+_0→(1+0j)*X0. We prepare the plus eigenstate of \(X\) after which we run our program X(0) which does nothing to this particular state. When we measure the \(X\) observable on the plus eigenstate then we should get an expectation of exactly 1.

Now for setting 12: Z+_0→(1+0j)*X0 we prepare the plus eigenstate of \(Z\), a.k.a. the state \(|0\rangle\) after which we perform our bit flip X(0) which sends the state to \(|1\rangle\). When we measure \(X\) on this state we expect our results to be mixed 50/50 between plus and minus outcomes, so our expectation should converge to 0 for a large number of shots.

from forest.benchmarking.observable_estimation import estimate_observables
results = list(estimate_observables(ideal_qc, tomography_experiment, num_shots=500))
for idx, result in enumerate(results):
    if idx == 0:
        print('\nWe expect the result to be 1.0 +- 0.0')
        print(result, '\n')
    elif idx == 12:
        print('\nWe expect the result to be around 0.0. Try increasing the num_shots to see if it converges.')
        print(result, '\n')

We expect the result to be 1.0 +- 0.0
X+_0→(1+0j)*X0: 1.0 +- 0.0

X+_0→(1+0j)*Y0: 0.036 +- 0.04469237071357929
X+_0→(1+0j)*Z0: 0.064 +- 0.044629676225578875
X-_0→(1+0j)*X0: -1.0 +- 0.0
X-_0→(1+0j)*Y0: 0.104 +- 0.04447884890596878
X-_0→(1+0j)*Z0: 0.024 +- 0.04470847794322683
Y+_0→(1+0j)*X0: -0.072 +- 0.044605291165959224
Y+_0→(1+0j)*Y0: -1.0 +- 0.0
Y+_0→(1+0j)*Z0: -0.028 +- 0.04470382533967311
Y-_0→(1+0j)*X0: 0.012 +- 0.04471813949618208
Y-_0→(1+0j)*Y0: 1.0 +- 0.0
Y-_0→(1+0j)*Z0: 0.028 +- 0.04470382533967311

We expect the result to be around 0.0. Try increasing the num_shots to see if it converges.
Z+_0→(1+0j)*X0: 0.056 +- 0.044651181395344956

Z+_0→(1+0j)*Y0: -0.068 +- 0.04461784396404649
Z+_0→(1+0j)*Z0: -1.0 +- 0.0
Z-_0→(1+0j)*X0: 0.024 +- 0.04470847794322683
Z-_0→(1+0j)*Y0: -0.132 +- 0.04433003496502118
Z-_0→(1+0j)*Z0: 1.0 +- 0.0

For each setting, the number before +- is the estimated expectation of that observable and the number after it is the associated standard error.
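As a sanity check, the standard errors above are consistent with treating each shot as a ±1-valued outcome, whose sample variance is \(1 - \rm{mean}^2\). A minimal sketch using the X+_0→Y0 row above:

```python
import math

# For a +/-1-valued Pauli outcome the sample variance is (1 - mean**2), so the
# standard error of the mean over num_shots shots is sqrt((1 - mean**2) / num_shots).
mean, num_shots = 0.036, 500  # the X+_0 -> Y0 row above
std_err = math.sqrt((1 - mean**2) / num_shots)
print(round(std_err, 6))  # 0.044692
```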

If we perform the same experiment but simulate with the somewhat noisy QVM, then we no longer expect the run of setting 0: X+_0→(1+0j)*X0 to yield an estimate of 1.0 with certainty.

# this time use the noisy_qc
noisy_results = list(estimate_observables(noisy_qc, tomography_experiment, num_shots=500))
for idx, result in enumerate(noisy_results):
    if idx == 0:
        print('\nWe expect the result to be less than 1.0 due to noise.')
        print(result, '\n')
    elif idx == 12:
        print('\nWe expect the result to be around 0.0, but it may be biased due to readout error.')
        print(result, '\n')

We expect the result to be less than 1.0 due to noise.
X+_0→(1+0j)*X0: 0.952 +- 0.01368911976717276

X+_0→(1+0j)*Y0: 0.16 +- 0.044145214916228456
X+_0→(1+0j)*Z0: 0.04 +- 0.044685568140060604
X-_0→(1+0j)*X0: -0.812 +- 0.026101953949848274
X-_0→(1+0j)*Y0: 0.032 +- 0.04469845634918503
X-_0→(1+0j)*Z0: 0.02 +- 0.04471241438347967
Y+_0→(1+0j)*X0: 0.12 +- 0.04439819816163715
Y+_0→(1+0j)*Y0: -0.808 +- 0.02634904172830579
Y+_0→(1+0j)*Z0: 0.04 +- 0.04468556814006061
Y-_0→(1+0j)*X0: 0.016 +- 0.04471563484956912
Y-_0→(1+0j)*Y0: 0.924 +- 0.01710111107501498
Y-_0→(1+0j)*Z0: 0.12 +- 0.044398198161637155

We expect the result to be around 0.0, but it may be biased due to readout error.
Z+_0→(1+0j)*X0: 0.1 +- 0.04449719092257398

Z+_0→(1+0j)*Y0: 0.08 +- 0.0445780214904161
Z+_0→(1+0j)*Z0: -0.768 +- 0.028641787653706254
Z-_0→(1+0j)*X0: 0.092 +- 0.044531696576708156
Z-_0→(1+0j)*Y0: 0.02 +- 0.04471241438347967
Z-_0→(1+0j)*Z0: 0.94 +- 0.015257784898208521

Error mitigation

In the last example we saw the estimated expectations deviate from the ideal. In fact, if you set num_shots to some large number, say 500,000, then you’ll notice that the results have expectations grouped around three distinct values, roughly .95, -.82, and .064. These values are what one would predict given that the states measured are \(|+\rangle\), \(|-\rangle\), and some equal superposition, respectively, and given that the readout noise model has the default parameters \(\Pr(\rm{measure } \ 0 \ | \ \rm{actually }\ 0) = .975\) and \(\Pr(\rm{measure } \ 1 \ | \ \rm{actually }\ 1) = .911\).

\[\langle + \left| X \right| + \rangle = .975 - (1 - .975) = .95\]
\[\langle - \left| X \right| - \rangle = -.911 + (1 - .911) \approx -.82\]
\[\frac{1}{2}\left(\langle + | + \langle - |\right) X \left(| + \rangle + | - \rangle\right) = \frac{1}{2}(.975 + (1 - .911) - .911 - (1 - .975)) = .064\]
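These three values can be reproduced directly from the default readout parameters; a minimal sketch of the arithmetic above:

```python
# Default QVM readout parameters assumed in the text:
# p(measure 0 | actually 0) = 0.975, p(measure 1 | actually 1) = 0.911
p00, p11 = 0.975, 0.911

exp_plus = p00 - (1 - p00)                # noisy <+|X|+>,  ~ 0.95
exp_minus = -p11 + (1 - p11)              # noisy <-|X|->,  ~ -0.822
exp_mixed = 0.5 * (exp_plus + exp_minus)  # 50/50 mix of outcomes, ~ 0.064
print(round(exp_plus, 3), round(exp_minus, 3), round(exp_mixed, 3))  # 0.95 -0.822 0.064
```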

Note that the parameterization of readout noise assumes that we can only measure in the computational (Z) basis. Indeed, the way we ‘measure X’ or any other non-Z observable is to first perform a pre-measurement rotation and then measure in the computational basis (the rotation is chosen so that the plus eigenstate of the observable is mapped to \(|0\rangle\)). Thus, all of our observables are impacted similarly by readout error; some measurements may incur additional observable-dependent error due to the noisy pre-measurement rotations.
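For example, for X the pre-measurement rotation is (up to phase) a Hadamard, which sends \(|+\rangle\) to \(|0\rangle\). A small sketch with explicit real matrices, purely for illustration:

```python
import math

# The Hadamard sends |+> = (|0> + |1>)/sqrt(2) to |0>, so a Z-basis readout of
# the rotated state reports the X-basis outcome of the original state.
s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]
plus = [s, s]
rotated = [sum(H[i][j] * plus[j] for j in range(2)) for i in range(2)]
print([round(a, 10) for a in rotated])  # [1.0, 0.0], i.e. |0>
```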

Symmetrizing readout

The fact that the noise is not symmetric, that is \(\Pr(\rm{measure } \ 0 \ | \ \rm{actually }\ 0) \neq \Pr(\rm{measure } \ 1 \ | \ \rm{actually }\ 1)\), complicates the impact of the noise. For a general state both parameters impact the resulting expectation. Instead of explicitly measuring each such parameter we instead opt to ‘symmetrize’ readout, which has the effect of averaging these parameters into a single ‘symmetrized readout error’. For a single qubit this is achieved by measuring two ‘symmetrized’ programs for each original measurement. The first program is exactly equivalent to the original. The second program is nearly identical, except that we perform a quantum bit flip before measuring the qubit, and post measurement we classically flip the result. We then aggregate these results into one collection of ‘symmetrized readout results’. We can directly calculate how this procedure affects the above calculations:

\[\langle + \left| X \right| + \rangle_{\rm{symm}} = \left(\langle + \left| X \right| + \rangle - \langle - \left| X \right| - \rangle \right)/2 = \left([.975 - (1 - .975)] - [-.911 + (1 - .911)]\right)/2 = (.95 + .82) / 2 = .885\]
\[\langle - \left| X \right| - \rangle_{\rm{symm}} = \left(\langle - \left| X \right| - \rangle - \langle + \left| X \right| + \rangle \right)/2 = \left([-.911 + (1 - .911)] - [.975 - (1 - .975)]\right)/2 = (-.82 - .95) / 2 = -.885\]
\[\left(\frac{1}{2}\left(\langle + | + \langle - |\right) X \left(| + \rangle + | - \rangle\right)\right)_{\rm{symm}} = 0\]

The last example is perhaps a bit more subtle: the physical state is not changed by the symmetrizing quantum bit flip, but half of the aggregated results are classically flipped, so the readout bias is nonetheless eliminated.

Exhaustively symmetrizing over \(n\) qubits is done similarly by running \(2^n\) different ‘symmetrized’ programs, one for each bitstring, where each combination of qubits (and corresponding results) is flipped. Again, this has the effect of averaging over asymmetric readout error so that the aggregated results are characterized by a single ‘symmetrized readout error’ parameter. It is this single parameter that we correct for, or ‘calibrate’, below.
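The \(2^n\) flip patterns can be enumerated directly. An illustrative sketch only (the actual implementation is exhaustive_symmetrization, used below):

```python
from itertools import product

def symmetrization_flip_patterns(n):
    """All 2**n bit patterns; a 1 in position i means qubit i is flipped
    before measurement (and its result classically flipped afterwards)."""
    return list(product((0, 1), repeat=n))

patterns = symmetrization_flip_patterns(2)
print(len(patterns), patterns)  # 4 patterns for 2 qubits
```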

Calibrating estimates of symmetrized observable expectations

We know that when running on a QPU we will likely have substantial readout error (which you can estimate here). If we symmetrize the results then the symmetrized readout error will scale all of our expectations in a predictable way. We have included a calibration procedure that can help mitigate the impact of symmetrized readout error by estimating and undoing this scaling.

Specifically, after obtaining symmetrized estimates for expectations of some set of observables during an experiment you can pass these results to calibrate_observable_estimates. By default, for each observable in the set this will prepare all eigenstates of the observable and estimate the expectations. The calibration_expectation recorded is the average magnitude of all of these expectations. Each new ExperimentResult stores the original input expectation under raw_expectation and populates expectation with the original expectation scaled by the inverse of the calibration_expectation.
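In other words, the calibrated expectation is just the raw symmetrized estimate divided by the calibration scale. A minimal sketch with illustrative numbers (not real results):

```python
# Illustrative numbers: a symmetrized raw estimate and its calibration scale.
raw_expectation = 0.88
calibration_expectation = 0.885

# This ratio is what populates the calibrated 'expectation' field.
expectation = raw_expectation / calibration_expectation
print(round(expectation, 3))  # 0.994
```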

The calibrate_observable_estimates method itself leverages readout symmetrization. We first create an experiment that prepares one particular positive eigenstate for the observable and measures its expectation. We then symmetrize this program (under default exhaustive symmetrization this essentially prepares all eigenstates) to get back a single estimate of the calibration_expectation; this is precisely an estimate of the ‘symmetrized readout error’ parameter which is what we need to rescale our results to +/- 1.

Let’s get symmetrized results for our experiment above and then calibrate these.

from forest.benchmarking.observable_estimation import calibrate_observable_estimates, exhaustive_symmetrization
pre_cal_symm_results = list(estimate_observables(noisy_qc, tomography_experiment, num_shots=500,
                                                 symmetrization_method=exhaustive_symmetrization))
post_cal_results = list(calibrate_observable_estimates(noisy_qc, pre_cal_symm_results, num_shots=1000))
for post_cal_res in post_cal_results:
    print(post_cal_res, ' with pre-calibrated expectation ', round(post_cal_res.raw_expectation,2))
    print('The calibration estimate is ', round(post_cal_res.calibration_expectation, 2))
X+_0→(1+0j)*X0: 0.9932279909706546 +- 0.020554498352280456  with pre-calibrated expectation  0.88
The calibration estimate is  0.89

X+_0→(1+0j)*Y0: 0.08463251670378619 +- 0.035125067014716  with pre-calibrated expectation  0.08
The calibration estimate is  0.9

X+_0→(1+0j)*Z0: -0.0502283105022831 +- 0.03606940045580178  with pre-calibrated expectation  -0.04
The calibration estimate is  0.88

X-_0→(1+0j)*X0: -1.0135440180586908 +- 0.019679964510975017  with pre-calibrated expectation  -0.9
The calibration estimate is  0.89

X-_0→(1+0j)*Y0: 0.0 +- 0.03521467327581714  with pre-calibrated expectation  0.0
The calibration estimate is  0.9

X-_0→(1+0j)*Z0: 0.0091324200913242 +- 0.03609807995469683  with pre-calibrated expectation  0.01
The calibration estimate is  0.88

Y+_0→(1+0j)*X0: -0.0654627539503386 +- 0.03563977179455264  with pre-calibrated expectation  -0.06
The calibration estimate is  0.89

Y+_0→(1+0j)*Y0: -0.9710467706013363 +- 0.02025654760268173  with pre-calibrated expectation  -0.87
The calibration estimate is  0.9

Y+_0→(1+0j)*Z0: 0.08447488584474885 +- 0.036015104360305916  with pre-calibrated expectation  0.07
The calibration estimate is  0.88

Y-_0→(1+0j)*X0: 0.05191873589164785 +- 0.03565901613379986  with pre-calibrated expectation  0.05
The calibration estimate is  0.89

Y-_0→(1+0j)*Y0: 0.9910913140311804 +- 0.01938346248560857  with pre-calibrated expectation  0.89
The calibration estimate is  0.9

Y-_0→(1+0j)*Z0: -0.0045662100456621 +- 0.036098815026857044  with pre-calibrated expectation  -0.0
The calibration estimate is  0.88

Z+_0→(1+0j)*X0: 0.020316027088036117 +- 0.035686630882177495  with pre-calibrated expectation  0.02
The calibration estimate is  0.89

Z+_0→(1+0j)*Y0: -0.017817371937639197 +- 0.03521070663662003  with pre-calibrated expectation  -0.02
The calibration estimate is  0.9

Z+_0→(1+0j)*Z0: -0.9817351598173516 +- 0.022032317230781875  with pre-calibrated expectation  -0.86
The calibration estimate is  0.88

Z-_0→(1+0j)*X0: -0.024830699774266364 +- 0.03568416614848569  with pre-calibrated expectation  -0.02
The calibration estimate is  0.89

Z-_0→(1+0j)*Y0: 0.035634743875278395 +- 0.03519880403696721  with pre-calibrated expectation  0.03
The calibration estimate is  0.9

Z-_0→(1+0j)*Z0: 1.0 +- 0.021324005357311247  with pre-calibrated expectation  0.88
The calibration estimate is  0.88

Note that statistical fluctuations may drive the magnitude of calibrated expectations above 1.

Estimate compatible observables from the same data

We can use fewer runs on a quantum computer to speed up estimation of some sets of observables. Consider two different settings on two qubits: Z+_0 * Z+_1→(1+0j)*Z0Z1 and Z+_0 * Z+_1→(1+0j)*Z0. If we think about translating these settings into circuits (assuming they are part of the same ObservablesExperiment) then we note that the circuits will be identical even though our observables are different. Indeed, ZZ and ZI commute, so we expect to be able to measure both observables simultaneously. In particular, each of these terms is diagonal in a ‘Tensor Product Basis’, i.e. a basis which is the tensor product of elements of single-qubit bases (note we could throw in the term IZ too). In this case the basis is the computational basis, i.e. each vector in the sum

\[(|0\rangle + |1 \rangle)_0 \otimes (|0\rangle + |1 \rangle)_1 = |00\rangle + |01\rangle + |10\rangle + |11\rangle\]

Another example of observables which are compatible in this way would be \(X0I1\), \(I0Z1\), and \(X0Z1\). Meanwhile, although \(X0Z1\) and \(Z0X1\) commute, they are not simultaneously diagonalized in a Tensor Product Basis, so we don’t consider grouping these terms (this might change in the future; feel encouraged to contribute this! :).
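The qubit-wise rule is simple: two Pauli strings share a Tensor Product Basis iff on every qubit their letters agree or one of them is the identity. A sketch (tpb_compatible is a hypothetical helper for illustration, not part of the module):

```python
def tpb_compatible(p1: str, p2: str) -> bool:
    """True iff, qubit by qubit, the Pauli letters agree or one is 'I'."""
    return all(a == b or 'I' in (a, b) for a, b in zip(p1, p2))

print(tpb_compatible('ZZ', 'ZI'))  # True: measurable from the same shots
print(tpb_compatible('XI', 'XZ'))  # True
print(tpb_compatible('XZ', 'ZX'))  # False: they commute, but share no TPB
```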

We provide a method group_settings which will automatically form groups of settings whose observables share a Tensor Product Basis and can be estimated from the same set of shot data. When you pass the resultant experiment with grouped settings into estimate_observables then each group of observables is estimated using num_shots many results.

We’ll group a 2 qubit tomography experiment to see how many experiment runs we save.

from forest.benchmarking.observable_estimation import group_settings
from pyquil.gates import CNOT

two_q_tomo_expt = ObservablesExperiment(list(_pauli_process_tomo_settings(qubits=[0,1])), Program(CNOT(0,1)))
grouped_tomo_expt = group_settings(two_q_tomo_expt)

# print the pre-grouped experiment
print(two_q_tomo_expt)

# print the grouped experiment. There are multiple settings per line
print(grouped_tomo_expt)
CNOT 0 1
0: X+_0 * X+_1→(1+0j)*X1
1: X+_0 * X+_1→(1+0j)*Y1
2: X+_0 * X+_1→(1+0j)*Z1
3: X+_0 * X+_1→(1+0j)*X0
4: X+_0 * X+_1→(1+0j)*X0X1
5: X+_0 * X+_1→(1+0j)*X0Y1
6: X+_0 * X+_1→(1+0j)*X0Z1
7: X+_0 * X+_1→(1+0j)*Y0
8: X+_0 * X+_1→(1+0j)*Y0X1
9: X+_0 * X+_1→(1+0j)*Y0Y1
... 520 not shown ...
... use e.settings_string() for all ...
530: Z-_0 * Z-_1→(1+0j)*X0Y1
531: Z-_0 * Z-_1→(1+0j)*X0Z1
532: Z-_0 * Z-_1→(1+0j)*Y0
533: Z-_0 * Z-_1→(1+0j)*Y0X1
534: Z-_0 * Z-_1→(1+0j)*Y0Y1
535: Z-_0 * Z-_1→(1+0j)*Y0Z1
536: Z-_0 * Z-_1→(1+0j)*Z0
537: Z-_0 * Z-_1→(1+0j)*Z0X1
538: Z-_0 * Z-_1→(1+0j)*Z0Y1
539: Z-_0 * Z-_1→(1+0j)*Z0Z1

CNOT 0 1
0: X+_0 * X+_1→(1+0j)*X1, X+_0 * X+_1→(1+0j)*X0, X+_0 * X+_1→(1+0j)*X0X1
1: X+_0 * X+_1→(1+0j)*Y1, X+_0 * X+_1→(1+0j)*X0Y1
2: X+_0 * X+_1→(1+0j)*Z1, X+_0 * X+_1→(1+0j)*X0Z1
3: X+_0 * X+_1→(1+0j)*Y0, X+_0 * X+_1→(1+0j)*Y0X1
4: X+_0 * X+_1→(1+0j)*Y0Y1
5: X+_0 * X+_1→(1+0j)*Y0Z1
6: X+_0 * X+_1→(1+0j)*Z0, X+_0 * X+_1→(1+0j)*Z0X1
7: X+_0 * X+_1→(1+0j)*Z0Y1
8: X+_0 * X+_1→(1+0j)*Z0Z1
9: X+_0 * X-_1→(1+0j)*X1, X+_0 * X-_1→(1+0j)*X0, X+_0 * X-_1→(1+0j)*X0X1
... 304 not shown ...
... use e.settings_string() for all ...
314: Z-_0 * Z+_1→(1+0j)*Z0Z1
315: Z-_0 * Z-_1→(1+0j)*X1, Z-_0 * Z-_1→(1+0j)*X0, Z-_0 * Z-_1→(1+0j)*X0X1
316: Z-_0 * Z-_1→(1+0j)*Y1, Z-_0 * Z-_1→(1+0j)*X0Y1
317: Z-_0 * Z-_1→(1+0j)*Z1, Z-_0 * Z-_1→(1+0j)*X0Z1
318: Z-_0 * Z-_1→(1+0j)*Y0, Z-_0 * Z-_1→(1+0j)*Y0X1
319: Z-_0 * Z-_1→(1+0j)*Y0Y1
320: Z-_0 * Z-_1→(1+0j)*Y0Z1
321: Z-_0 * Z-_1→(1+0j)*Z0, Z-_0 * Z-_1→(1+0j)*Z0X1
322: Z-_0 * Z-_1→(1+0j)*Z0Y1
323: Z-_0 * Z-_1→(1+0j)*Z0Z1

The number of experimental runs has been reduced from 540 to 324! (num_shots measurements are collected for each run.) This grouped experiment can be passed to estimate_observables as usual.

This potential for grouping motivates the design choice to represent the _settings within an ObservablesExperiment as a nested List[List[ExperimentSetting]], where the inner lists organize the groups of settings that will be estimated from a shared set of data. When creating an ObservablesExperiment we’ve provided the option of passing in a flat List[ExperimentSetting] which will be expanded internally into a List of length-1 List[ExperimentSetting]s.
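The expansion is nothing more than wrapping each setting in its own group; a sketch using stand-in strings in place of ExperimentSetting objects:

```python
# Stand-ins for ExperimentSetting objects.
flat_settings = ['s0', 's1', 's2']

# A flat list is expanded into length-1 groups; already-grouped input is kept as-is.
nested_settings = [[s] for s in flat_settings]
print(nested_settings)  # [['s0'], ['s1'], ['s2']]
```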

Keep in mind that currently (3 July ‘19) the covariance between simultaneously estimated expectations is not recorded, so you may run into trouble if you want the variance of some function of these expectations.
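For example, the variance of even a simple linear combination of two simultaneously estimated expectations contains a covariance term that is not currently available:

\[\rm{Var}(a\hat{E}_1 + b\hat{E}_2) = a^2\,\rm{Var}(\hat{E}_1) + b^2\,\rm{Var}(\hat{E}_2) + 2ab\,\rm{Cov}(\hat{E}_1, \hat{E}_2)\]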

Easy Parallelization

The ability to group compatible settings for parallel data acquisition on the same run of the QC makes it quite simple to generate a single parallel experiment from a group of ObservablesExperiments which operate on disjoint sets of qubits. The settings across these experiments will automatically be compatible because they operate on different qubits, and the programs can be composed without ambiguity. Even when two settings share a qubit this parallelization can be done, although in that case it may be less effective: when two programs act on a shared qubit they can no longer be composed so freely, since the order of operations on the shared qubit may have a significant impact on the program’s behaviour.

However, even when the individual experiments act on disjoint sets of qubits you must be careful not to associate ‘parallel’ with ‘simultaneous’ execution. Physically, the gates specified in a pyquil Program occur as soon as resources are available; meanwhile, measurement happens only after all gates. There is no specification of the exact timing of gates beyond their causal relationships. Therefore, while grouping experiments into parallel operation can be quite beneficial for time savings, do not depend on any simultaneous execution of gates on different qubits, and be wary of the fact that measurement happens only after all gates have finished. The latter fact may lead to significant time delays for the measurement of a qubit that would be measured promptly in the original program; remember that during this ‘idle time’ the qubit is likely experiencing decoherence, which will likely impact the results.

from pyquil.gates import X, CNOT, CZ, Z
# make a couple of different tomography experiments on different qubits
from forest.benchmarking.tomography import generate_state_tomography_experiment
from forest.benchmarking.observable_estimation import merge_disjoint_experiments

disjoint_sets_of_qubits = [(0,4),(1,2),(3,)]
programs = [Program(X(0), CNOT(0,4)), Program(CZ(1,2)), Program(Z(3))]

expts_to_parallelize = []
for qubits, program in zip(disjoint_sets_of_qubits, programs):
    expt = generate_state_tomography_experiment(program, qubits)
    # group the settings only for fair comparison later (and for generality)
    expts_to_parallelize.append(group_settings(expt))

# get a merged experiment with grouped settings for parallel data acquisition
parallel_expt = merge_disjoint_experiments(expts_to_parallelize)

print(f'Original number of runs: {sum(len(expt) for expt in expts_to_parallelize)}')
print(f'Parallelized number of runs: {len(parallel_expt)}')
Original number of runs: 21
Parallelized number of runs: 11

We also provide a convenient means of re-separating the results via get_results_by_qubit_groups. This is demonstrated in the tomography notebooks and used in randomized_benchmarking.py.

More on ExperimentSetting’s in_state and observable

ExperimentSetting is the dataclass that specifies a preparation and measurement to be paired with the program in an ObservablesExperiment.

in_state is a TensorProductState, a dataclass which also resides in observable_estimation.py. You can instantiate a single qubit Pauli eigenstate using methods such as plusX(qubit) or minusY(qubit). Tensor products of such states can be built up by multiplying TensorProductStates. We also include SIC states, which help reduce the number of runs for process tomography.

from forest.benchmarking.observable_estimation import zeros_state, plusX, minusY, TensorProductState
# creating settings

# all zeros, i.e. all plusZ
typical_start_state = zeros_state(qubits = range(6))
print(typical_start_state, '\n')

# X+_2 * Y-_5
xy = plusX(2) * minusY(5)
print(xy, '\n')

# Z-_0 * Y+_3
zy = TensorProductState.from_str('Z-_0 * Y+_3')
print(zy, '\n')

# tensor states together
zyxy = zy * xy
print(zyxy, '\n')

# SIC states. You only need the 4 SIC states on a qubit for process tomography, instead of the 6 Pauli eigenstates.
from forest.benchmarking.observable_estimation import SIC0, SIC3
# SIC0_0 * SIC3_1
sic0_0 = SIC0(0)
sic3_1 = SIC3(1)
sic0sic3 = sic0_0 * sic3_1
print(sic0sic3)
Z+_0 * Z+_1 * Z+_2 * Z+_3 * Z+_4 * Z+_5

X+_2 * Y-_5

Z-_0 * Y+_3

Z-_0 * Y+_3 * X+_2 * Y-_5

SIC0_0 * SIC3_1
from pyquil.paulis import PauliTerm, sX, sZ, sY
# creating observables
zy_pauli = sZ(0) * sY(1)
xzzy = .5j * sX(1) * sZ(2) * sZ(4) * sY(5)

from forest.benchmarking.observable_estimation import ExperimentSetting
setting = ExperimentSetting(typical_start_state, xzzy)
# the -> separates the start state from the observable
print(setting)

obs_expt = ObservablesExperiment([setting], Program(X(0)))
print('\nIn an experiment with program X(0):\n', obs_expt)
Z+_0 * Z+_1 * Z+_2 * Z+_3 * Z+_4 * Z+_5→0.5j*X1Z2Z4Y5

In an experiment with program X(0):
 X 0
0: Z+_0 * Z+_1 * Z+_2 * Z+_3 * Z+_4 * Z+_5→0.5j*X1Z2Z4Y5

Breaking estimate_observables into parts

You may be interested in working directly with Programs or shot data instead of an ObservablesExperiment. We will step through the decomposition of estimate_observables to illustrate the discrete steps which would allow you to do this.

Construct an ObservablesExperiment and turn it into a list of Programs

from forest.benchmarking.observable_estimation import generate_experiment_programs
q = 0
# make ExperimentSettings for tomographing the plus state
expt_settings = [ExperimentSetting(plusX(q), pt) for pt in [sX(q), sY(q), sZ(q)]]
print(expt_settings)

# make an ObservablesExperiment.
# We already specified the plus state as initial state, so do an empty (identity) program
obs_expt = ObservablesExperiment(expt_settings, program=Program())
print(obs_expt)

# convert it to a list of programs and a list of qubits for each program
expt_progs, qubits = generate_experiment_programs(obs_expt)
print('Programs and Qubits:\n====================\n')
for prog, qs in zip(expt_progs, qubits):
    print(prog, qs, '\n')

 [ExperimentSetting[X+_0→(1+0j)*X0], ExperimentSetting[X+_0→(1+0j)*Y0], ExperimentSetting[X+_0→(1+0j)*Z0]]


0: X+_0→(1+0j)*X0
1: X+_0→(1+0j)*Y0
2: X+_0→(1+0j)*Z0

Programs and Qubits:

RX(pi/2) 0
RZ(pi/2) 0
RX(-pi/2) 0
RX(pi/2) 0
RZ(-pi/2) 0
RX(-pi/2) 0

RX(pi/2) 0
RZ(pi/2) 0
RX(-pi/2) 0
RX(pi/2) 0

RX(pi/2) 0
RZ(pi/2) 0
RX(-pi/2) 0

At this point we can run directly or first symmetrize

1) Run programs directly

from forest.benchmarking.observable_estimation import _measure_bitstrings
# this is a simple wrapper that adds measure instructions for each qubit in qubits and runs each program.
results = _measure_bitstrings(noisy_qc, expt_progs, qubits, num_shots=5)
for bits in results:
    print(bits, '\n')

2) First symmetrize, then run

A symmetrization method should return

1. a list of programs
2. a list of measurement qubits associated with each program
3. a list of bits/bools indicating which qubits were flipped before measurement for each program
4. a list of ints indicating the group of results to which each program’s outputs should be re-assigned after correction

from forest.benchmarking.observable_estimation import exhaustive_symmetrization, consolidate_symmetrization_outputs
symm_progs, symm_qs, flip_arrays, groups = exhaustive_symmetrization(expt_progs, qubits)
print('2 symmetrized programs for each original:\n======================\n')
for prog in symm_progs:
    print(prog)

# now these programs can be run as above
results = _measure_bitstrings(noisy_qc, symm_progs, symm_qs, num_shots=5)

# we now need to consolidate these results using the information in flip_arrays and groups
results = consolidate_symmetrization_outputs(results, flip_arrays, groups)

print('After consolidating we have twice the number of (symmetrized) shots as above:\n======================\n')
# we now have twice the number of symmetrized shots
for bits in results:
    print(bits, '\n')
2 symmetrized programs for each original:

RX(pi/2) 0
RZ(pi/2) 0
RX(-pi/2) 0
RX(pi/2) 0
RZ(-pi/2) 0
RX(-pi/2) 0

RX(pi/2) 0
RZ(pi/2) 0
RX(-pi/2) 0
RX(pi/2) 0
RZ(-pi/2) 0
RX(-pi/2) 0
RX(pi) 0

RX(pi/2) 0
RZ(pi/2) 0
RX(-pi/2) 0
RX(pi/2) 0

RX(pi/2) 0
RZ(pi/2) 0
RX(-pi/2) 0
RX(pi/2) 0
RX(pi) 0

RX(pi/2) 0
RZ(pi/2) 0
RX(-pi/2) 0

RX(pi/2) 0
RZ(pi/2) 0
RX(-pi/2) 0
RX(pi) 0

After consolidating we have twice the number of (symmetrized) shots as above:


Now we can translate our bitarray shots into ExperimentResults with expectation values

from forest.benchmarking.observable_estimation import shots_to_obs_moments, ExperimentResult
import numpy as np
expt_results_as_obs = []
# we use the settings from the ObservablesExperiment to label our results
for bitarray, meas_qs, settings in zip(results, qubits, obs_expt):
    # settings is a list of settings run simultaneously for a given program;
    # in our case, we just had one setting per program, so this isn't strictly necessary
    for setting in settings:
        observable = setting.observable # get the PauliTerm

        # Obtain statistics from result of experiment
        obs_mean, obs_var = shots_to_obs_moments(bitarray, meas_qs, observable)

        expt_results_as_obs.append(ExperimentResult(setting, obs_mean, std_err = np.sqrt(obs_var),
                                                    total_counts = len(bitarray)))

for res in expt_results_as_obs:
    print(res)
X+_0→(1+0j)*X0: 1.0 +- 0.0
X+_0→(1+0j)*Y0: 0.2 +- 0.30983866769659335
X+_0→(1+0j)*Z0: 0.2 +- 0.30983866769659335

Finally, we note that calibrate_observable_estimates can be decomposed in a similar way using calls to get_calibration_program. The workflow is essentially the same as above.
