Figure 9.2: Many open-loop moment trajectories [36] of the SSE, equation (9.5). The trajectory of figure 9.1 is darkened. (A) At short times, the evolution of the variance (shown on a log scale) is deterministic and given by $\langle \Delta \hat{F}_z^2 \rangle_s(t)$. At long times, the variances become stochastic but bounded (above by $1/4$ and below by $\exp[-2(Mt-1)]/4$). The average of all 10,000 trajectories (only 10 are shown) gives $\mathrm{E}[\langle \Delta \hat{F}_z^2 \rangle(t)]$. (B) The projective nature of the measurement is made clear by the evolution of 100 trajectories of $\langle \hat{F}_z \rangle(t)$. The distribution of the final results is given by the first histogram of figure 9.1A. (C) The evolution of 100 trajectories all starting in an x-polarized CSS. When $\eta = 1$, certain regions of Hilbert space are forbidden by the evolution.
equate at long times, and deriving simple reduced models with a field present is difficult, thus forcing us to use the full SME in our state-based controller. Despite this difficulty, during our subsequent feedback analysis we assume sufficient control bandwidth that the SME can be evolved by the observer in real time.
Inserting the pure Dicke state, $\hat{\rho} = |m\rangle\langle m|$, into the SME with no Hamiltonian (or only a field along z), we find that it is a steady state, $d\hat{\rho} = 0$, regardless of the subsequent measurement record. Of course, this does not yet prove that the state will eventually be obtained, as we have not discussed the stability of attractors in stochastic systems.
Without a field present, the SME has several convenient properties. First, the evolution equation for the variance shows that the variance is a stochastic process that decreases on average. In fact it is a supermartingale: for times $s \le t$ we have
$$\mathrm{E}_s[\langle \Delta \hat{F}_z^2 \rangle(t)] \le \langle \Delta \hat{F}_z^2 \rangle(s) \quad (9.29)$$
where the notation $\mathrm{E}[x(t)]$ denotes the average of the stochastic variable $x(t)$ at time $t$, and the $s$ subscript represents the conditional expectation given a particular stochastic trajectory up to the time $s$. Additionally, it can be shown [71] that the average variance obeys the equation
$$\mathrm{E}[\langle \Delta \hat{F}_z^2 \rangle(t)] = \frac{\langle \Delta \hat{F}_z^2 \rangle(0)}{1 + 4M\eta\,\langle \Delta \hat{F}_z^2 \rangle(0)\,(t + \xi(t))} \quad (9.30)$$
where
$$\xi(t) = \int_0^t \frac{\mathrm{E}\!\left[\left(\langle \Delta \hat{F}_z^2 \rangle(s) - \mathrm{E}[\langle \Delta \hat{F}_z^2 \rangle(s)]\right)^2\right]}{\mathrm{E}[\langle \Delta \hat{F}_z^2 \rangle(s)]^2}\, ds \;\ge\; 0. \quad (9.31)$$
A more explicit solution for $\xi(t)$ is not needed: its positivity ensures that $\langle \Delta \hat{F}_z^2 \rangle(t)$ stochastically approaches zero, which implies that a Dicke state is eventually prepared. The numerical simulations of figures 9.1 and 9.2 demonstrate this behavior for an initially x-polarized state. As expected, $\mathrm{E}[\langle \Delta \hat{F}_z^2 \rangle(t)]$ in figure 9.2A appears to fall below the short-time solution $\langle \Delta \hat{F}_z^2 \rangle_s(t)$, equation (9.18), at long times.
Other useful properties of the stochastic evolution are evident from the moment equations. For example, we can show that
$$d\langle \hat{F}_z^n \rangle = 2\sqrt{M\eta}\,\left(\langle \hat{F}_z^{n+1} \rangle - \langle \hat{F}_z^n \rangle\langle \hat{F}_z \rangle\right) dW(t) \quad (9.32)$$
for integer $n$, hence
$$d\mathrm{E}[\langle \hat{F}_z^n \rangle] = 0 \quad (9.33)$$
and for times $s \le t$ we have the martingale condition
$$\mathrm{E}_s[\langle \hat{F}_z^n \rangle(t)] = \langle \hat{F}_z^n \rangle(s). \quad (9.34)$$
This equation for $n = 1$ gives us the useful identity
$$\mathrm{E}[\langle \hat{F}_z \rangle(t)\langle \hat{F}_z \rangle(s)] = \mathrm{E}[\langle \hat{F}_z \rangle(s)^2] \quad (9.35)$$
for $s \le t$. Also, we can rewrite the expression for $n = 2$ as
$$\mathrm{E}_s[\langle \hat{F}_z \rangle(t)^2 + \langle \Delta \hat{F}_z^2 \rangle(t)] = \langle \hat{F}_z \rangle(s)^2 + \langle \Delta \hat{F}_z^2 \rangle(s). \quad (9.36)$$
This implies a sort of conservation of uncertainty: the diffusion of the mean, shown in figure 9.1B, makes up for the decreasing value of the variance.
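The conservation of uncertainty in equation (9.36) can be checked numerically. The sketch below is an illustration only, not part of the text: it uses the Gaussian (short-time) approximation to the conditional moment equations, $d\langle \hat{F}_z \rangle = 2\sqrt{M\eta}\,V\,dW$ and $dV = -4M\eta V^2\,dt$ with $V = \langle \Delta \hat{F}_z^2 \rangle$, consistent with equations (9.32) and (9.18); all parameter values are arbitrary.

```python
import numpy as np

# Illustration (assumed Gaussian-approximation moment equations, arbitrary
# parameters): verify that E[<Fz>^2 + <dFz^2>] is conserved, eq. (9.36).
rng = np.random.default_rng(0)
M, eta, N = 1.0, 1.0, 100
dt, steps, ntraj = 1e-4, 2000, 5000

mean = np.zeros(ntraj)            # <Fz>(0) = 0 for an x-polarized CSS
var = np.full(ntraj, N / 4.0)     # <dFz^2>(0) = N/4
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), ntraj)
    mean += 2.0 * np.sqrt(M * eta) * var * dW   # stochastic mean, eq. (9.32)
    var += -4.0 * M * eta * var**2 * dt         # deterministic variance decay

# The growth of E[<Fz>^2] exactly offsets the decay of the variance,
# so the ensemble average stays at its initial value N/4.
conserved = np.mean(mean**2 + var)
print(conserved, N / 4.0)
```

By the Ito isometry, the variance gained by the mean, $\int_0^t 4M\eta V(s)^2\,ds$, is exactly the variance lost by $V(t)$, which is the content of equation (9.36) in ensemble average.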
9.4.2 Zero Measurement Efficiency
It is insightful to examine the behavior of the master equation with $\eta = 0$, which corresponds to ignoring the measurement results and turns the SME, equation (B.43), into a deterministic unconditional master equation. We continue to consider only initial states that are polarized, because these states are experimentally accessible (via optical pumping) and provide some degree of selectivity for the final prepared state. To see this, consider a spin-1/2 ensemble polarized in the xz-plane, making angle $\theta$ with the positive z-axis, such that
$$\begin{aligned}
\langle \hat{F}_x \rangle(0) &= \sin(\theta)\,N/2 \\
\langle \hat{F}_y \rangle(0) &= 0 \\
\langle \hat{F}_z \rangle(0) &= \cos(\theta)\,N/2 \\
\langle \Delta \hat{F}_x^2 \rangle(0) &= \cos^2(\theta)\,N/4 \\
\langle \Delta \hat{F}_y^2 \rangle(0) &= N/4 \\
\langle \Delta \hat{F}_z^2 \rangle(0) &= \sin^2(\theta)\,N/4. \quad (9.37)
\end{aligned}$$
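These initial moments follow from the product structure of the CSS, so they can be checked by computing single-spin expectation values and scaling by $N$. The sketch below is an illustration with arbitrary values of $\theta$ and $N$, not taken from the text.

```python
import numpy as np

# Illustration (arbitrary theta, N): check the CSS moments of eq. (9.37)
# for N spin-1/2 atoms, each rotated by theta from +z in the xz-plane.
theta, N = 0.7, 50
# Single qubit rotated by theta about y: |psi> = cos(t/2)|up> + sin(t/2)|dn>
psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def ev(op):  # single-spin expectation value
    return np.real(psi.conj() @ op @ psi)

# Collective means scale linearly; for a product state the collective
# variance is N times the single-spin variance.
Fx, Fy, Fz = N * ev(sx), N * ev(sy), N * ev(sz)
dFz2 = N * (ev(sz @ sz) - ev(sz) ** 2)
dFx2 = N * (ev(sx @ sx) - ev(sx) ** 2)
print(Fx, np.sin(theta) * N / 2)            # matches sin(theta) N/2
print(Fz, np.cos(theta) * N / 2)            # matches cos(theta) N/2
print(dFz2, np.sin(theta) ** 2 * N / 4)     # matches sin^2(theta) N/4
print(dFx2, np.cos(theta) ** 2 * N / 4)     # matches cos^2(theta) N/4
```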
Solving the unconditional moment equations, and labeling the solutions with u subscripts, we get
$$\begin{aligned}
\langle \hat{F}_x \rangle_u(t) &= \sin(\theta)\exp(-Mt/2)\,N/2 \\
\langle \hat{F}_y \rangle_u(t) &= 0 \\
\langle \hat{F}_z \rangle_u(t) &= \cos(\theta)\,N/2 \\
\langle \Delta \hat{F}_x^2 \rangle_u(t) &= \sin^2(\theta)\left[N^2 - N - 2N^2\exp(-Mt) + (N^2 - N)\exp(-2Mt)\right]/8 + N/4 \\
&\to \sin^2(\theta)(N^2 - N)/8 + N/4 \\
\langle \Delta \hat{F}_y^2 \rangle_u(t) &= \sin^2(\theta)\left[N^2 - N + (N - N^2)\exp(-2Mt)\right]/8 + N/4 \\
&\to \sin^2(\theta)(N^2 - N)/8 + N/4 \\
\langle \Delta \hat{F}_z^2 \rangle_u(t) &= \sin^2(\theta)\,N/4. \quad (9.38)
\end{aligned}$$
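As a sanity check, the unconditional variances above should reduce to the initial moments (9.37) at $t = 0$ and approach the stated long-time limits. A quick numerical sketch (arbitrary illustration values of $\theta$, $N$, $M$):

```python
import numpy as np

# Illustration: check the limits of the unconditional solutions (9.38).
theta, N, M = 0.7, 50, 1.0
s2 = np.sin(theta) ** 2

def dFx2_u(t):
    e1, e2 = np.exp(-M * t), np.exp(-2 * M * t)
    return s2 * (N**2 - N - 2 * N**2 * e1 + (N**2 - N) * e2) / 8 + N / 4

def dFy2_u(t):
    e2 = np.exp(-2 * M * t)
    return s2 * (N**2 - N + (N - N**2) * e2) / 8 + N / 4

limit = s2 * (N**2 - N) / 8 + N / 4
print(dFx2_u(0.0), np.cos(theta) ** 2 * N / 4)  # t = 0 recovers eq. (9.37)
print(dFy2_u(0.0), N / 4)                       # t = 0 recovers eq. (9.37)
print(dFx2_u(50.0), limit)                      # long-time limit
print(dFy2_u(50.0), limit)                      # long-time limit
```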
Note that, because the unconditional solutions represent the average of the conditional solutions, i.e., $\hat{\rho}_u(t) = \mathrm{E}[\hat{\rho}(t)]$, we have
$$\mathrm{E}[\langle \hat{F}_z \rangle(t)] = \langle \hat{F}_z \rangle_u(t) = \langle \hat{F}_z \rangle(0) = \cos(\theta)\,N/2. \quad (9.39)$$
This also follows from the martingale condition for $\langle \hat{F}_z \rangle(t)$. From the martingale condition for $\langle \hat{F}_z^2 \rangle(t)$ we get
$$\begin{aligned}
\mathrm{E}\!\left[(\langle \hat{F}_z \rangle(t) - \mathrm{E}[\langle \hat{F}_z \rangle(t)])^2\right] &= \langle \Delta \hat{F}_z^2 \rangle(0) - \mathrm{E}[\langle \Delta \hat{F}_z^2 \rangle(t)] \quad (9.40) \\
&\to \langle \Delta \hat{F}_z^2 \rangle(0) = \sin^2(\theta)\,N/4.
\end{aligned}$$
Thus, when $0 < \eta \le 1$, we expect the final random conditional Dicke state on a given trial to fall within the initial z distribution. Given $\theta$, the distribution will have spread $|\sin(\theta)|\sqrt{N}/2$ about the value $\cos(\theta)\,N/2$. Although the final state is generally random, starting with a polarized state clearly gives us some degree of selectivity for the final Dicke state because $\sqrt{N} \ll N$.
9.4.3 Nonzero Measurement Efficiency
When $\eta \ne 0$, the measurement record is used to condition the state, and we can determine which Dicke state the system diffuses into. Given the task of preparing the state $|m_d\rangle$, the above analysis suggests the following experimental procedure. First, polarize the ensemble (via optical pumping) into an unentangled coherent state along any direction. Then rotate the spin vector (with a magnetic field) so that the z-component is approximately equal to $m_d$. Finally, continuously measure z until a time $t \gg 1/\eta M$. The final estimate will be a random Dicke state in the neighborhood of $m_d$. When the trial is repeated, the final states will make up a distribution described by the initial moments of $\hat{F}_z$ ($\langle \hat{F}_z \rangle(0)$, $\langle \Delta \hat{F}_z^2 \rangle(0)$, ...).
To reduce the effects of stray field fluctuations and gradients, a strong holding field could be applied along the z-axis. Because this Hamiltonian commutes with the observable ˆFz, the final open-loop measurement results would be unchanged.
This process (with zero field) is shown schematically in figure 9.1 for $m_d = 0$, where the initial state is polarized along x. Because $\langle \hat{F}_z \rangle(0) = 0$, the final state with the highest probability is the entangled Dicke state $m_d = 0$. In contrast, if $\langle \hat{F}_z \rangle(0) = F$ the state would start in an unentangled CSS polarized along z and would not subsequently evolve.
One way of characterizing how close the state is to a Dicke state is through the variance, $\langle \Delta \hat{F}_z^2 \rangle(t)$. Figure 9.2A displays many trajectories of the variance as a function of time.
For times $t \ll 1/\eta M$ the variance is approximately deterministic and obeys the short-time solution of equation (9.18). During this period, the mean $\langle \hat{F}_x \rangle(t)$ decreases at rate $M/2$. Before this mean has completely disappeared, a conditional spin-squeezed state is created. However, at larger times the mean and the variance stochastically approach zero, and the state, while still entangled, no longer satisfies the spin-squeezing criterion [17].
There are several features of the approach to a Dicke state that are evident in figures 9.1 and 9.2. The variance at time $t = 1/\eta M$ is already of order unity. Thus, at this point, only a few neighboring m levels contain any population, as can be seen in figure 9.1C. Also, it can be shown numerically that, for x-polarized initial states, the diffusion of the variance at long times $t \gg 1/\eta M$ is bounded above and below by
$$\exp[-2(\eta M t - 1)]/4 < \langle \Delta \hat{F}_z^2 \rangle(t) \le 1/4, \quad (9.41)$$
which is evident from figure 9.2A. These facts indicate that at long times the population is divided among at most two levels, which "compete" to be the final winner. If we assume that only two neighboring levels are occupied and apply the SSE (with $\eta = 1$), the probability $p$ of being in one level obeys the stochastic equation
$$dp = -2\sqrt{M}\,p(1-p)\,dW(t) \quad (9.42)$$
and the variance takes the form $\langle \Delta \hat{F}_z^2 \rangle(t) = p(1-p)$. As simple as it looks, this stochastic differential equation (SDE) is not analytically solvable [53, 52]. The maximum variance is $1/4$, and it can be shown that, for $p \equiv 1 - \epsilon$ with $\epsilon$ small, the lower bound is of the exponential form stated above, so the two-level assumption seems to be a good one. The fact that the occupied Hilbert space becomes small at long times is also evident in figure 9.2C, where the states are seen to be excluded from certain regions when $\eta = 1$. The arclike boundaries of the forbidden space are where the two-level competition occurs. This few-level SDE is discussed more fully in section 9.8.
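The two-level competition can be simulated directly. The sketch below is an illustration with arbitrary parameters; it writes the SDE (9.42) with an explicit $\sqrt{M}$ factor (dimensionally, $dp$ must scale as $\sqrt{M}\,dW$) and checks that $p$ is a martingale while the variance $p(1-p)$ stays bounded by $1/4$ and decays on average.

```python
import numpy as np

# Illustration (arbitrary parameters): Euler-Maruyama simulation of the
# two-level competition SDE, dp = -2 sqrt(M) p (1 - p) dW, with eta = 1.
rng = np.random.default_rng(1)
M = 1.0
dt, steps, ntraj = 1e-3, 5000, 4000

p = np.full(ntraj, 0.5)            # start with equal populations
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), ntraj)
    p += -2.0 * np.sqrt(M) * p * (1.0 - p) * dW
    p = np.clip(p, 0.0, 1.0)       # guard against discretization overshoot

var = p * (1.0 - p)                # <dFz^2> in the two-level approximation
print(np.mean(p))                  # martingale: E[p] stays near 0.5
print(var.max())                   # never exceeds the bound 1/4
print(np.mean(var))                # decays toward zero (state reduction)
```

Most trajectories end pinned near $p = 0$ or $p = 1$, i.e., one level "wins," which is the two-level picture of the projective measurement.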
In practice, an experimentalist does not always have an infinite amount of time to prepare a state: eventually spontaneous emission and other decoherence effects will destroy the dispersive QND approximation that the present analysis is based upon. Suppose our task were to prepare a Dicke state with, on average, a desired uncertainty $\langle \Delta \hat{F}_z^2 \rangle_d \ll 1$, such that one level is distinguishable from the next. From equation (9.30), we see that the time that it would take to do this on average is given by
$$t_d = \left[\frac{1}{\langle \Delta \hat{F}_z^2 \rangle_d} - \frac{1}{\langle \Delta \hat{F}_z^2 \rangle(0)}\right] \Big/ 4M\eta. \quad (9.43)$$
Thus if $\langle \Delta \hat{F}_z^2 \rangle_d \ll 1$ is our goal, then $t_d$ is how long the state must remain coherent. The larger $\langle \Delta \hat{F}_z^2 \rangle(0)$ is, the more entangled the final states are likely to be ($m \approx 0$) [17], hence, by equation (9.43), the longer it takes to prepare the state for a given $\langle \Delta \hat{F}_z^2 \rangle_d$. We thus arrive at the intuitively satisfying conclusion that conditional measurement produces entangled states more slowly than unentangled states. Of course, equation (9.43) is an average performance limit; in a best-case scenario, the variance would attain the lower bound of equation (9.41), where the state reduction happens exponentially fast.
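For concreteness, equation (9.43) is easy to evaluate. The numbers below are illustration values, not from the text:

```python
# Illustration values: evaluate the preparation time of eq. (9.43),
#   t_d = (1/<dFz^2>_d - 1/<dFz^2>(0)) / (4 M eta).
M, eta, N = 1.0, 1.0, 100
v0 = N / 4.0          # x-polarized initial state: <dFz^2>(0) = N/4
vd = 0.1              # desired resolution, well below one level
td = (1.0 / vd - 1.0 / v0) / (4.0 * M * eta)
print(td)             # dominated by 1/(4 M eta vd) when vd << v0
```

Because $v_d \ll \langle \Delta \hat{F}_z^2 \rangle(0)$, the result is essentially $1/(4M\eta v_d)$: the required coherence time is set by the target uncertainty, not the initial variance.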
9.4.4 Performance of Suboptimal Estimators
Now we consider the performance of the suboptimal estimators discussed previously, in particular the current average $\langle \hat{F}_z \rangle_a(t)$ of equation (9.28). It makes sense to define the overall "error" of this estimator, denoted $V_a$, as the average squared distance of the estimator from the optimal estimator plus the average uncertainty of the optimal estimator itself, $\mathrm{E}[\langle \Delta \hat{F}_z^2 \rangle(t)]$. Using the martingale properties of the optimal estimate and the definition of the photocurrent gives this quantity as
$$V_a \equiv \mathrm{E}\!\left[(\langle \hat{F}_z \rangle_a(t) - \langle \hat{F}_z \rangle(t))^2\right] + \mathrm{E}[\langle \Delta \hat{F}_z^2 \rangle(t)] = \frac{1}{4M\eta t}. \quad (9.44)$$
This is just the error in estimating a constant masked by additive white noise with the same signal-to-noise ratio [20]. The optimal estimator is better than this suboptimal estimator at long times only through the quantity $\xi(t)$, equation (9.31).
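The analogy with classical estimation can be made concrete. The sketch below is an illustration, not the text's derivation: it assumes a measurement record of the form $dy = 2\sqrt{M\eta}\,c\,dt + dW$ for an unknown constant $c$ (a standard model for this signal-to-noise ratio) and confirms by Monte Carlo that the averaging estimator has variance $1/(4M\eta t)$, matching equation (9.44).

```python
import numpy as np

# Illustration (assumed record dy = 2 sqrt(M*eta) c dt + dW, arbitrary
# parameters): the averaging estimator of a constant c has variance
# 1/(4 M eta t), the same scaling as eq. (9.44).
rng = np.random.default_rng(2)
M, eta, c, t = 1.0, 0.5, 3.0, 2.0
steps, ntraj = 2000, 20000
dt = t / steps

dW = rng.normal(0.0, np.sqrt(dt), (ntraj, steps))
y = (2.0 * np.sqrt(M * eta) * c * dt + dW).sum(axis=1)   # integrated record
c_hat = y / (2.0 * np.sqrt(M * eta) * t)                 # averaging estimator

print(np.var(c_hat))               # Monte Carlo estimate of the error
print(1.0 / (4.0 * M * eta * t))   # predicted 1/(4 M eta t)
```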
In the open-loop experimental procedure described at the beginning of the last section, this observation indicates that we can replace the optimal estimator with the photocurrent average and still resolve the projective behavior (given sufficient suppression of extraneous noise). The price paid for the simplicity of the averaging estimator is that it converges more slowly and only works when no field is present (hence without control).
In appendix E, we analyze the performance of the averaging estimator when incoherent decay is added to the model.