Distance measures¶
It is often the case that we wish to measure how close an experimentally prepared quantum state is to the ideal, or how close an ideal quantum gate is to its experimental implementation. In this notebook we explore some quantitative measures for comparing quantum states and processes using the forest.benchmarking module distance_measures.py.
Distance measures between states or processes can be subtle. We recommend thinking about the operational interpretation of each measure before using the measure.
More information
The following references are good starting points for further reading.
[1]:
import numpy as np
import forest.benchmarking.operator_tools.random_operators as rand_ops
from forest.benchmarking.operator_tools.calculational import outer_product
import forest.benchmarking.distance_measures as dm
Distance measures between quantum states¶
When comparing quantum states there are a variety of different measures of (in-)distinguishability, with each usually being the answer to a particular question, such as “With what probability can I distinguish two states in a single experiment?”, or “How indistinguishable are measurement samples of two states going to be?”.
[2]:
# some pure states
psi1 = rand_ops.haar_rand_state(2)
rho_haar1 = outer_product(psi1,psi1)
psi2 = rand_ops.haar_rand_state(2)
rho_haar2 = outer_product(psi2,psi2)
# some mixed states
rho = rand_ops.bures_measure_state_matrix(2)
sigma = rand_ops.bures_measure_state_matrix(2)
The fidelity between \(\rho\) and \(\sigma\) is

\[F(\rho, \sigma) = \left({\rm Tr}\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}\right)^2.\]
When \(\rho = |\psi\rangle \langle \psi|\) and \(\sigma= |\phi\rangle \langle \phi|\) are pure states, the definition reduces to the squared overlap between the states: \(F(\rho, \sigma)=|\langle\psi|\phi\rangle|^2\).
In this case it is easy to see that the fidelity is a probability. Suppose you are trying to prepare the state \(|\psi\rangle\) but end up preparing \(|\phi\rangle\). If you then perform the two-outcome measurement \(\Pi_\psi = |\psi \rangle \langle \psi|\) vs \(\Pi_{\neg \psi} = I - \Pi_\psi\), the fidelity is equal to the probability that you observe the outcome \(\Pi_\psi\), i.e. \(\Pr(\Pi_\psi) = \langle \phi | \Pi_\psi | \phi \rangle = |\langle\psi|\phi\rangle|^2 = F(\rho, \sigma)\).
Be careful not to confuse this definition with the square root fidelity \(\sqrt{F}\), which has a subtle operational interpretation.
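A minimal sketch of this reduction in plain NumPy (independent of forest.benchmarking), checking that the fidelity of two pure states equals their squared overlap:

```python
import numpy as np

def psd_sqrt(m):
    """Matrix square root of a positive semidefinite matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 0, None)  # guard against tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.conj().T

def fidelity(rho, sigma):
    """F(rho, sigma) = (Tr sqrt( sqrt(rho) sigma sqrt(rho) ))^2."""
    sr = psd_sqrt(rho)
    return float(np.real(np.trace(psd_sqrt(sr @ sigma @ sr))) ** 2)

# Two pure states with overlap cos(0.3)
psi = np.array([1.0, 0.0], dtype=complex)
phi = np.array([np.cos(0.3), np.sin(0.3)], dtype=complex)
rho_psi = np.outer(psi, psi.conj())
rho_phi = np.outer(phi, phi.conj())

# For pure states the fidelity reduces to the squared overlap
print(np.isclose(fidelity(rho_psi, rho_phi), abs(np.vdot(psi, phi)) ** 2))  # True
```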
[3]:
dm.fidelity(rho, sigma)
[3]:
0.44391492439312213
[4]:
print('Infidelity is 1 - fidelity:', dm.infidelity(rho, sigma), '\n')
Infidelity is 1 - fidelity: 0.5560850756068778
Another important measure is the trace distance between \(\rho\) and \(\sigma\), which we denote by

\[T(\rho, \sigma) = \frac{1}{2}\|\rho - \sigma\|_1 = \frac{1}{2}{\rm Tr}\,|\rho - \sigma|.\]
The trace distance has the physical / operational interpretation of quantifying the measurement that achieves the maximum probability of distinguishing between \(\rho\) and \(\sigma\) in a single shot: the optimal success probability is \(\frac{1}{2}\left[1 + T(\rho, \sigma)\right]\). See the Wikipedia entry and Fuchs' PhD thesis for more information.
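A minimal NumPy sketch, independent of forest.benchmarking: the trace distance via the eigenvalues of \(\rho - \sigma\), together with the single-shot (Helstrom) success probability \(\frac{1}{2}[1 + T(\rho, \sigma)]\):

```python
import numpy as np

def trace_distance(rho, sigma):
    """T(rho, sigma) = (1/2) * sum of |eigenvalues| of (rho - sigma)."""
    evals = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(evals))

# Maximally mixed state vs. a pure state on one qubit
rho = np.eye(2) / 2
sigma = np.array([[1, 0], [0, 0]], dtype=complex)

T = trace_distance(rho, sigma)
# Helstrom bound: best single-shot success probability is (1 + T)/2
p_success = 0.5 * (1 + T)
print(T, p_success)  # 0.5 0.75
```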
[5]:
dm.trace_distance(rho, sigma)
[5]:
0.5148017705415839
More information about the bures_distance and bures_angle can be found in the Bures metric Wikipedia article.
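Both quantities are simple functions of the fidelity: \(D_B = \sqrt{2(1 - \sqrt{F})}\) and \(D_A = \arccos(\sqrt{F})\). A quick sketch using the fidelity value printed in cell [3] above:

```python
import numpy as np

def bures_distance_from_fidelity(F):
    # D_B(rho, sigma) = sqrt(2 * (1 - sqrt(F)))
    return np.sqrt(2 * (1 - np.sqrt(F)))

def bures_angle_from_fidelity(F):
    # D_A(rho, sigma) = arccos(sqrt(F))
    return np.arccos(np.sqrt(F))

# The fidelity computed in cell [3] above
F = 0.44391492439312213
print(bures_distance_from_fidelity(F))  # ~0.81698, matching cell [6]
print(bures_angle_from_fidelity(F))     # ~0.84160, matching cell [7]
```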
[6]:
dm.bures_distance(rho, sigma)
[6]:
0.8169829762394096
[7]:
dm.bures_angle(rho, sigma)
[7]:
0.8416015216867558
The Hilbert-Schmidt inner product \(\langle A, B \rangle = {\rm Tr}[A^\dagger B]\) is a useful concept in quantum information.
[8]:
dm.hilbert_schmidt_ip(rho, sigma)
[8]:
0.3510710632692492
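As a quick illustration, distinct Pauli operators are orthogonal under this inner product, while each has squared norm equal to the dimension. A sketch in plain NumPy:

```python
import numpy as np

def hilbert_schmidt_ip(A, B):
    """Hilbert-Schmidt inner product <A, B> = Tr[A^dagger B]."""
    return np.trace(A.conj().T @ B)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Distinct Paulis are orthogonal; each has <P, P> = d = 2
print(hilbert_schmidt_ip(X, Z).real)  # 0.0
print(hilbert_schmidt_ip(X, X).real)  # 2.0
```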
Above we mentioned in passing how the trace distance is related to the optimal probability for distinguishing two states in a single measurement.
A basic question in statistics and information theory is "What is the optimal probability if you are given \(n\) measurements?". Herman Chernoff solved this problem in 1952 (see the open access paper here); the problem is still of interest today.
He showed that, in the limit of large \(n\), the probability of error \(P_{\rm err}\) in discriminating two probability distributions decreases exponentially in \(n\):

\[P_{\rm err} \sim \exp(-n\, \xi_{CB}),\]

where the exponent \(\xi_{CB}\) is called the Chernoff bound. Here \(P_{\rm err}\) is one minus the optimal probability of distinguishing the two states.
In distance_measures we provide a utility to calculate the quantum Chernoff bound, whose non-logarithmic form is \(\min_{0\le s\le 1} {\rm Tr}[\rho^s \sigma^{1-s}]\).
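A minimal sketch of the quantum Chernoff bound computation in plain NumPy, using a grid search over \(s\) (the example states below are arbitrary choices, not the random states above):

```python
import numpy as np

def matrix_power_psd(m, s):
    """m**s for a positive semidefinite matrix, via eigendecomposition."""
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 0, None)
    return (vecs * vals**s) @ vecs.conj().T

def quantum_chernoff_bound(rho, sigma, num=1000):
    """Minimize Tr[rho^s sigma^(1-s)] over s in [0, 1] by grid search."""
    ss = np.linspace(0, 1, num)
    vals = [np.real(np.trace(matrix_power_psd(rho, s) @ matrix_power_psd(sigma, 1 - s)))
            for s in ss]
    idx = int(np.argmin(vals))
    return vals[idx], ss[idx]

# Example: a slightly coherent state vs. the maximally mixed state
rho = np.array([[0.75, 0.1], [0.1, 0.25]])
sigma = np.array([[0.5, 0.0], [0.0, 0.5]])
qcb, s_opt = quantum_chernoff_bound(rho, sigma)
# The exponent in the error-decay rate is xi_QCB = -log(qcb)
print(qcb, s_opt, -np.log(qcb))
```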
[9]:
qcb_exp, s_opt = dm.quantum_chernoff_bound(rho,sigma)
print('The non-logarithmic quantum Chernoff bound is:', qcb_exp)
print('The s achieving the minimum qcb_exp is:', s_opt, '\n')
The non-logarithmic quantum Chernoff bound is: 0.6157194691457855
The s achieving the minimum qcb_exp is: 0.4601758017841054
Next we calculate the total variation distance (TVD) between the classical outcome distributions obtained by measuring two random states in the Z basis.
[10]:
Proj_zero = np.array([[1, 0], [0, 0]])
# Pr(0|rho) = Tr[ rho * Proj_zero ]
p = np.trace(rho_haar1 @ Proj_zero)
q = np.trace(rho_haar2 @ Proj_zero)
# Pr(1|rho) = 1 - p and Pr(1|sigma) = 1 - q
P = np.array([[p], [1-p]])
Q = np.array([[q], [1-q]])
dm.total_variation_distance(P,Q)
[10]:
0.02833199827251809
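The TVD of the outcome distributions of any fixed measurement is upper-bounded by the trace distance between the states. A self-contained sketch with fixed (arbitrarily chosen) states:

```python
import numpy as np

def trace_distance(rho, sigma):
    """T(rho, sigma) = (1/2) * sum of |eigenvalues| of (rho - sigma)."""
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

def total_variation_distance(P, Q):
    """TVD(P, Q) = (1/2) * sum_i |P_i - Q_i|."""
    return 0.5 * np.sum(np.abs(P - Q))

# Two fixed single-qubit states
rho = np.array([[0.7, 0.2], [0.2, 0.3]])
sigma = np.array([[0.5, -0.1], [-0.1, 0.5]])

# Z-basis outcome distributions are just the diagonals
P = np.array([rho[0, 0], rho[1, 1]])
Q = np.array([sigma[0, 0], sigma[1, 1]])

# TVD of the Z-basis outcomes never exceeds the trace distance
print(total_variation_distance(P, Q), trace_distance(rho, sigma))  # 0.2 <= ~0.361
```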
The next two measures are not really distance measures between two states; rather, you can think of them as measures of how close (or how far, respectively) a given state is to a pure state.
The purity is defined as

\[P(\rho) = {\rm Tr}[\rho^2],\]

while the impurity is defined as

\[L(\rho) = 1 - {\rm Tr}[\rho^2],\]

and is sometimes referred to as the linear entropy.
[11]:
print('Pure states have purity P = ', np.round(dm.purity(rho_haar1), 4))
print('Mixed states have purity P < 1. In this case P = ', np.round(dm.purity(rho), 4), '\n')
print('Pure states have impurity L = 1 - Purity = ', np.round(dm.impurity(rho_haar1), 4))
print('Mixed states have impurity L > 0. In this case L = ', np.round(dm.impurity(rho), 4))
Pure states have purity P = 1.0
Mixed states have purity P < 1. In this case P = 0.9461
Pure states have impurity L = 1 - Purity = 0.0
Mixed states have impurity L > 0. In this case L = 0.0539
Some researchers use a dimensional renormalization that makes the purity lie in \([0, 1]\), so that the maximally mixed state has purity 0 independent of the dimension \(D\). The dimensionally renormalized purity is

\[P_r(\rho) = \frac{D}{D-1}\left({\rm Tr}[\rho^2] - \frac{1}{D}\right),\]

and the corresponding dimensionally renormalized impurity is

\[L_r(\rho) = \frac{D}{D-1}\left(1 - {\rm Tr}[\rho^2]\right).\]
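A quick sanity check of the renormalized purity, assuming the expression above (plain NumPy, independent of forest.benchmarking): the maximally mixed state gets purity 0 in every dimension, while pure states keep purity 1.

```python
import numpy as np

def purity(rho, dim_renorm=False):
    """Tr[rho^2], optionally rescaled so the maximally mixed state maps to 0."""
    p = np.real(np.trace(rho @ rho))
    if dim_renorm:
        D = rho.shape[0]
        p = (D / (D - 1)) * (p - 1 / D)
    return p

for D in (2, 3, 4):
    max_mixed = np.eye(D) / D
    # ~0.0 (up to floating point) for every D
    print(D, purity(max_mixed, dim_renorm=True))

# A pure state keeps renormalized purity 1
pure = np.diag([1.0, 0.0])
print(purity(pure, dim_renorm=True))  # 1.0
```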
[12]:
# calculate purity WITH and WITHOUT dimensional renormalization
print(dm.purity(rho, dim_renorm=True))
print(dm.purity(rho, dim_renorm=False))
# calculate impurity WITH and WITHOUT dimensional renormalization
print(dm.impurity(rho, dim_renorm=True))
print(dm.impurity(rho, dim_renorm=False))
0.892259133986272
0.946129566993136
0.10774086601372801
0.053870433006864005
[13]:
dm.purity(rho, dim_renorm=True)+dm.impurity(rho, dim_renorm=True)
[13]:
1.0
Distance measures between quantum processes¶
For processes, the two most popular measures are the average gate fidelity \(F_{\rm avg}(P, U)\) of an actual process \(P\) relative to an ideal unitary gate \(U\), and the diamond norm distance.
This example uses test cases borrowed from QuTiP, which were in turn generated using QuantumUtils for MATLAB by C. Granade.
[14]:
Id = np.asarray([[1, 0], [0, 1]])
Xd = np.asarray([[0, 1], [1, 0]])
from scipy.linalg import expm
# Define unitary
theta = 0.4
Ud = expm(-theta*1j*Xd/2)
# This unitary is:
#  - close to Id for theta near zero
#  - close to X for theta near np.pi (up to a global phase of -1j)
print(Ud)
[[0.98006658+0.j 0. -0.19866933j]
[0. -0.19866933j 0.98006658+0.j ]]
Process Fidelity between Pauli-Liouville matrices
In some sense the process fidelity measures the average fidelity (averaged over all input states) with which a physical channel implements the ideal operation. Given the Pauli transfer matrices \(\mathcal{R}_P\) and \(\mathcal{R}_U\) for the actual and ideal processes, respectively, the average gate fidelity is

\[F_{\rm avg}(P, U) = \frac{{\rm Tr}[\mathcal{R}_P^\dagger \mathcal{R}_U]/d + 1}{d + 1},\]

where \(d = 2^n\) is the Hilbert space dimension for \(n\) qubits.

The corresponding infidelity \(1 - F_{\rm avg}(P, U)\) can be seen as a measure of the average gate error, but it is not a proper metric.
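For two unitary channels the average gate fidelity also has a well-known closed form, \(F_{\rm avg} = \left(|{\rm Tr}[U^\dagger V]|^2 + d\right)/(d^2 + d)\) (Nielsen's formula). A quick plain NumPy/SciPy sketch reproducing the value printed in cell [17] below:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Id = np.eye(2, dtype=complex)
U = expm(-0.4j * X / 2)  # the rotation from cell [14]

d = 2
# Nielsen's closed form for the average gate fidelity of two unitary channels
F_avg = (abs(np.trace(Id.conj().T @ U)) ** 2 + d) / (d ** 2 + d)
print(F_avg)  # ~0.97369, matching cell [17]
```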
[15]:
from forest.benchmarking.operator_tools import kraus2pauli_liouville
[16]:
plio0 = kraus2pauli_liouville(Id)
plio1 = kraus2pauli_liouville(Ud)
plio2 = kraus2pauli_liouville(Xd)
[17]:
dm.process_fidelity(plio0, plio1)
[17]:
0.9736869980009618
[18]:
dm.process_infidelity(plio0, plio1)
[18]:
0.026313001999038188
Diamond norm distance between Choi matrices
The diamond norm distance has an operational interpretation analogous to that of the trace distance: it governs the optimal probability of discriminating two channels in a single measurement, where the channel may act on part of an entangled input state.
Readers interested in the subtle issues here are referred to
- John Watrous’s Lecture Notes Lecture 20: Channel distinguishability and the completely bounded trace norm
- Fundamental limits to quantum channel discrimination, by Pirandola et al.
- slides from an overview talk by Blume-Kohout
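For unitary channels the diamond distance has a simple closed form: it is the diameter of the set of eigenvalues of \(U^\dagger V\) on the unit circle, i.e. \(2\sin(\Delta/2)\) when the eigenphase spread \(\Delta\) is at most \(\pi\) (which always holds for a single qubit). A minimal sketch in plain NumPy/SciPy, independent of forest.benchmarking and of the cvxpy-based solver, that reproduces the values in cells [21] and [22]:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
U = expm(-0.4j * X / 2)  # the rotation from cell [14]

def diamond_distance_unitaries(U, V):
    """Diamond distance between two single-qubit unitary channels.

    Equals the diameter of the eigenvalues of U^dagger V on the unit
    circle: 2*sin(Delta/2) for eigenphase spread Delta <= pi.
    """
    phases = np.angle(np.linalg.eigvals(U.conj().T @ V))
    spread = phases.max() - phases.min()
    spread = min(spread, 2 * np.pi - spread)  # wrap around the circle
    return 2 * np.sin(spread / 2)

print(diamond_distance_unitaries(np.eye(2), U))  # ~0.3973, matching cell [21]
print(diamond_distance_unitaries(np.eye(2), X))  # 2.0, matching cell [22]
```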
[19]:
from forest.benchmarking.operator_tools import kraus2choi
[20]:
choi0 = kraus2choi(Id)
choi1 = kraus2choi(Ud)
choi2 = kraus2choi(Xd)
[21]:
# NBVAL_SKIP
# our build environment has problems with cvxpy so we skip this cell
dnorm = dm.diamond_norm_distance(choi0, choi1)
print("This gate is close to the identity as the diamond norm is close to zero. Dnorm= ",dnorm)
This gate is close to the identity as the diamond norm is close to zero. Dnorm= 0.3973386615692544
[22]:
# NBVAL_SKIP
dnorm = dm.diamond_norm_distance(choi0, choi2)
print("This gate is far from identity as diamond norm = ",dnorm)
This gate is far from identity as diamond norm = 2.0000000004366494
[23]:
dm.watrous_bounds((choi0 - choi1)/2)
[23]:
(0.3973386615901225, 1.58935464636049)