Therefore, for large n, there is no local correlation between the zeros of the considered random paraorthogonal polynomials. In the case of the unit circle, similar localization results were obtained by Teplyaev [Tep].
Outline of the Proof of the Main Theorem
In other words, the distribution of the eigenvalues of the matrix C(N) can be asymptotically approximated by the distribution of the eigenvalues of the direct sum of the matrices C̃_1(N) and C̃_2(N). We will also introduce some of the main tools used in the study of these mathematical objects.
Definition, Basic Properties, Examples
In this chapter we will present a few fundamental results in the theory of orthogonal polynomials on the unit circle. Several examples of orthogonal polynomials on the unit circle are presented in Section 1.6 of [Sim4].
Zeros of OPUC and POPUC
It turns out that the zeros of the orthogonal polynomials can only be located in the convex hull of supp(µ): all the zeros of Φ_n(z; µ) lie inside the convex hull of supp(µ).
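As an informal numerical illustration (a sketch of ours, not part of the thesis), the monic polynomials Φ_n can be generated from the Verblunsky coefficients by the Szegő recursion Φ_{k+1}(z) = z Φ_k(z) − ᾱ_k Φ_k^*(z), and one can check that all zeros fall inside the unit disk, which contains the convex hull of supp(µ) for any measure on ∂D. The function name szego_phi and all constants below are our own choices:

```python
import numpy as np

def szego_phi(alphas):
    """Coefficients (highest degree first) of the monic polynomial Phi_n
    obtained from the Szego recursion
        Phi_{k+1}(z) = z * Phi_k(z) - conj(alpha_k) * Phi_k^*(z),
    where Phi_k^*(z) = z^k * conj(Phi_k(1/conj(z))) is the reversed polynomial."""
    phi = np.array([1.0 + 0.0j])                 # Phi_0(z) = 1
    for a in alphas:
        phi_star = np.conj(phi[::-1])            # coefficients of Phi_k^*
        phi = np.append(phi, 0.0) - np.conj(a) * np.append([0.0 + 0.0j], phi_star)
    return phi

rng = np.random.default_rng(0)
# Verblunsky coefficients uniformly distributed in the disk D(0, 1/2)
alphas = 0.5 * np.sqrt(rng.uniform(size=30)) * np.exp(2j * np.pi * rng.uniform(size=30))
zeros = np.roots(szego_phi(alphas))
print(np.abs(zeros).max())     # < 1: all zeros lie inside the unit disk
```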
The CMV Matrix and Its Resolvent, Dynamical Properties of the CMV Matrices
The matrix representation of the operator T in the basis B̃ (called the alternate CMV basis) is the alternate CMV matrix ML, the transpose of C = LM. We conclude this section with two results describing the dependence of the Schur function on the Verblunsky coefficients.
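For concreteness, here is a minimal numerical sketch (ours, not from the thesis) of the standard C = LM factorization of the finite CMV matrix, where L = Θ(α_0) ⊕ Θ(α_2) ⊕ ⋯, M = 1 ⊕ Θ(α_1) ⊕ Θ(α_3) ⊕ ⋯, and Θ(α) is the 2×2 block built from α and ρ = (1 − |α|²)^{1/2}. Taking |α_{N−1}| = 1 produces the unitary matrix C(N) whose eigenvalues are the zeros of the paraorthogonal polynomial:

```python
import numpy as np

def theta_block(a):
    """2x2 block Theta(alpha) = [[conj(a), rho], [rho, -a]], rho = sqrt(1-|a|^2)."""
    rho = np.sqrt(1.0 - abs(a) ** 2)
    return np.array([[np.conj(a), rho], [rho, -a]], dtype=complex)

def cmv_matrix(alphas):
    """Finite N x N CMV matrix C = L*M from alpha_0, ..., alpha_{N-1}.
    If |alpha_{N-1}| = 1, the matrix is unitary (the C(N) of the thesis)."""
    N = len(alphas)
    L, M = np.eye(N, dtype=complex), np.eye(N, dtype=complex)
    for j, a in enumerate(alphas):
        F = L if j % 2 == 0 else M       # alpha_j enters L for even j, M for odd j
        if j + 1 < N:
            F[j:j + 2, j:j + 2] = theta_block(a)
        else:
            F[j, j] = np.conj(a)         # degenerate 1x1 block in the last row
    return L @ M

rng = np.random.default_rng(1)
a = 0.4 * np.sqrt(rng.uniform(size=7)) * np.exp(2j * np.pi * rng.uniform(size=7))
a = np.append(a, np.exp(2j * np.pi * rng.uniform()))   # |alpha_{N-1}| = 1
C = cmv_matrix(a)
print(np.allclose(C @ C.conj().T, np.eye(8)))          # True: C is unitary
```

Inspecting C also shows the five-diagonal shape of the CMV representation.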
Properties of Random CMV Matrices
A detailed analysis of the Lyapunov exponent, as well as necessary and sufficient conditions for its existence, can be found in Section 10.5 of [Sim5]. We denote by f(z; S) the Schur function associated with the family of Verblunsky coefficients S. In this thesis, we will consider only random Verblunsky coefficients for which the measure β_0 is rotationally invariant.
It is worth noting that most of the results presented in this section apply to more general probability distributions. If condition (2.4.5) holds and β_0 is rotationally invariant, then the spectrum of the unitary operator C_α is pure point for almost every α ∈ Ω. When analyzing the random matrix C_α, it is very important to understand the structure of the eigenfunctions.
The conclusion (2.4.13) will allow us to obtain information about the decay of the eigenfunctions of the matrix C_α. The development of the mathematical theory of random Schrödinger operators was largely motivated by the work of the physicist P. W. Anderson.
The Anderson Model
The point τ(ψ_E) (which may not be unique) is called the center of localization of the eigenfunction ψ_E. We can now state the localization result for the eigenfunctions of the operator H_V (Theorem 3.1.3). As we will see later, this result will be a key ingredient in the study of the local statistical distribution of the eigenvalues of H_V.
If the matrix elements of the Green function decay exponentially (along the rows and columns), one can show that the eigenfunctions of H(ω) decay exponentially. The exponential decay of the Green function (with very high probability) was obtained by Fröhlich and Spencer in [FS] for probability distributions µ absolutely continuous with respect to the Lebesgue measure. Under the hypotheses of Theorem 3.1.4, for almost every ω ∈ Ω, the random Hamiltonian H(ω) has only pure point spectrum (for all E when condition a) applies, and for |E| large when condition b) applies), with exponentially decaying eigenfunctions.
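A small simulation (our illustration; the coupling constant and box size are arbitrary choices) makes the phenomenon visible: diagonalizing one realization of the one-dimensional Anderson Hamiltonian and looking at log|ψ| as a function of the distance to the localization center shows the exponential decay:

```python
import numpy as np

rng = np.random.default_rng(2)
L = 400
# one realization of the 1D Anderson Hamiltonian on {0, ..., L-1}
V = rng.uniform(-2.0, 2.0, size=L)
H = np.diag(V) + np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)
w, U = np.linalg.eigh(H)

psi = np.abs(U[:, L // 2])            # an eigenfunction from mid-spectrum
center = int(np.argmax(psi))          # its center of localization
d = np.abs(np.arange(L) - center)
for dist in (0, 10, 20, 30, 40):
    print(dist, np.log(psi[d >= dist]).max())   # decreases roughly linearly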
From the exponential decay of the s-fractional moments of the matrix elements of the Green function, one can conclude, using the Simon-Wolff criterion (Theorem 3.1.5), that Anderson localization holds. A very concise and clear presentation of the Aizenman-Molchanov methods can be found in [Sim2].
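The fractional-moment bound itself can be probed numerically. The sketch below (ours; the value s = 1/2, the box size, and the disorder strength are arbitrary choices) estimates E(|G(j, j+k; E)|^s) by Monte Carlo and prints its logarithm, which decreases roughly linearly in k:

```python
import numpy as np

def avg_fractional_moment(L, s, E, samples, rng):
    """Monte Carlo estimate of E(|G(j, j+k; E)|^s) for the 1D Anderson
    model on a box of size L; all constants are arbitrary choices."""
    j, ks = L // 2, np.arange(0, 40, 8)
    acc = np.zeros(len(ks))
    for _ in range(samples):
        V = rng.uniform(-2.0, 2.0, size=L)
        H = np.diag(V) + np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)
        G = np.linalg.inv(H - E * np.eye(L))
        acc += np.abs(G[j, j + ks]) ** s
    return ks, acc / samples

rng = np.random.default_rng(3)
ks, m = avg_fractional_moment(L=150, s=0.5, E=0.3 + 1e-3j, samples=100, rng=rng)
print(np.round(np.log(m), 2))    # roughly linear decrease in k: exponential decay
```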
The Statistical Distribution of the Eigenvalues for the Anderson Model: The Results of Molchanov and Minami
A proof of the existence of the density of states for this model (and also for more general models) can be found in [Pas]. It is natural to ask whether the same statistical distribution holds for the eigenvalues of the multidimensional Schrödinger operator. As in the one-dimensional case, we consider the restriction H_Λ of the operator H to a finite box Λ ⊂ Z^ν.
We want to study the statistical distribution of the eigenvalues of the operator H_Λ located near E. In Minami's work, the exponential localization of the eigenfunctions is derived from the Aizenman-Molchanov bounds. The Aizenman-Molchanov bounds are again used to show that the asymptotic local statistical distribution of the eigenvalues is the same as in the one-dimensional case.
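As a quick sanity check of the Poisson picture (our illustration, not Minami's argument), one can count the eigenvalues of a finite-box one-dimensional Anderson Hamiltonian in an interval of length O(1/|Λ|) around a fixed energy over many disorder samples; for a Poisson limit the empirical mean and variance of the counts should be close:

```python
import numpy as np

rng = np.random.default_rng(4)
L, trials, E0 = 300, 200, 0.0
counts = []
for _ in range(trials):
    V = rng.uniform(-2.0, 2.0, size=L)
    H = np.diag(V) + np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)
    ev = np.linalg.eigvalsh(H)
    counts.append(int(np.sum(np.abs(ev - E0) < 2.0 / L)))   # interval of length 4/L
counts = np.array(counts)
print(counts.mean(), counts.var())    # mean ~ variance, as for a Poisson variable
```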
More precisely, we will be interested in the expectations of the fractional moments of the matrix elements of the resolvent. We will prove that the expected value of the fractional moments of the matrix elements of the resolvent decays exponentially (see (1.1.6)).
Uniform Bounds for the Fractional Moments
We will analyze the matrix elements of the resolvent (C(n) − z)^{-1} of the CMV matrix or, equivalently, the matrix elements of F_kl(z, C(n)) = ⟨δ_k, (C(n) + z)(C(n) − z)^{-1} δ_l⟩. Thus we get that, for any fixed α ∈ Ω and for Lebesgue-almost any z = e^{iθ} ∈ ∂D, the radial limit F_kl(e^{iθ}, C_α(n)) = lim_{r↑1} F_kl(re^{iθ}, C_α(n)) exists. Note that the argument of Lemma 4.1.1 works in the same way when we replace the unitary matrix C_α(n) by the unitary operator C_α (corresponding to random Verblunsky coefficients uniformly distributed in D(0, r)), so the analogous statement also holds for C_α.
The Uniform Decay of the Expectations of the Fractional Moments
For each z = e^{iθ} ∈ ∂D, the Lyapunov exponent γ(e^{iθ}) exists, and the Thouless formula for rotationally invariant distributions (see (2.4.8)) gives γ(e^{iθ}) = −(1/2) E(ln(1 − |α_0|²)) > 0. The positivity of the Lyapunov exponent γ(e^{iθ}) implies, using Theorem 2.4.3 (Ruelle-Oseledec), that there exists a constant λ ≠ 1 (which determines the boundary condition) for which the corresponding solution decays exponentially. The next step is to obtain the same result in the finite-volume case (i.e., when the matrix C = C_α is replaced by the matrix C_α(n)).
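The Thouless identity above can be checked numerically by multiplying the Szegő transfer matrices A(α, z) = ρ^{-1} (z, −ᾱ; −αz, 1). The sketch below (our illustration; the point z = e^{0.7i} and the radius r = 1/2 are arbitrary) compares the growth rate of the transfer-matrix products with the prediction −(1/2) E(ln(1 − |α_0|²)):

```python
import numpy as np

rng = np.random.default_rng(5)
n, r, z = 20000, 0.5, np.exp(0.7j)
# rotationally invariant Verblunsky coefficients, uniform in the disk D(0, r)
alphas = r * np.sqrt(rng.uniform(size=n)) * np.exp(2j * np.pi * rng.uniform(size=n))

v, log_norm = np.array([1.0 + 0.0j, 1.0 + 0.0j]), 0.0   # (phi_0, phi_0^*) = (1, 1)
for a in alphas:
    rho = np.sqrt(1.0 - abs(a) ** 2)
    A = np.array([[z, -np.conj(a)], [-a * z, 1.0]]) / rho   # Szego transfer matrix
    v = A @ v
    s = np.linalg.norm(v)
    log_norm += np.log(s)
    v = v / s                                    # renormalize to avoid overflow
print(log_norm / n)                              # numerical Lyapunov exponent
print(-0.5 * np.mean(np.log(1.0 - np.abs(alphas) ** 2)))    # Thouless prediction
```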
Let C be the CMV matrix corresponding to the family of Verblunsky coefficients {α_n}_{n≥0}, where |α_n| < r for every n. Note that the matrix (C − C(n)) has at most eight nonzero entries, each of absolute value at most 2. We can improve the previous lemma and obtain that the convergence to 0 of E(|[(C_α(n) − z)^{-1}]_{j,j+k}|^s) is uniform in the row j.
Let us consider the matrix C_dec(n) obtained from the same Verblunsky coefficients with the additional constraint α_m = e^{iθ}, where m is chosen larger than j but close to it (for example, m = j + 3). In other words, the decay is uniform on the five rows m−2, m−1, m, m+1, and m+2, located at distance at most 2 from the place where the matrix C_dec(n) decouples.
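The decoupling mechanism is easy to see numerically: setting |α_m| = 1 forces ρ_m = 0, and the CMV matrix splits into a direct sum of two blocks while only finitely many entries change. A self-contained sketch of ours (it repeats the cmv_matrix construction from the sketch in the CMV section; all sizes are arbitrary):

```python
import numpy as np

def theta_block(a):
    rho = np.sqrt(1.0 - abs(a) ** 2)
    return np.array([[np.conj(a), rho], [rho, -a]], dtype=complex)

def cmv_matrix(alphas):
    # the C = L*M construction from the sketch in the CMV section
    N = len(alphas)
    L, M = np.eye(N, dtype=complex), np.eye(N, dtype=complex)
    for j, a in enumerate(alphas):
        F = L if j % 2 == 0 else M
        if j + 1 < N:
            F[j:j + 2, j:j + 2] = theta_block(a)
        else:
            F[j, j] = np.conj(a)
    return L @ M

rng = np.random.default_rng(6)
N, m = 12, 5
a = 0.4 * np.sqrt(rng.uniform(size=N)) * np.exp(2j * np.pi * rng.uniform(size=N))
a[-1] = np.exp(2j * np.pi * rng.uniform())       # |alpha_{N-1}| = 1

a_dec = a.copy()
a_dec[m] = np.exp(1j * 0.3)                      # |alpha_m| = 1 forces rho_m = 0
C, C_dec = cmv_matrix(a), cmv_matrix(a_dec)
print(np.allclose(C_dec[:m + 1, m + 1:], 0.0),   # True: no coupling across the cut
      np.allclose(C_dec[m + 1:, :m + 1], 0.0))   # True
print(np.count_nonzero(np.abs(C - C_dec) > 1e-12))   # only a few entries differ
```

The last line is consistent with the earlier remark that (C − C(n)) has at most eight nonzero entries.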
The Exponential Decay of the Fractional Moments
The following lemma shows that we can control the conditional expectations of the diagonal matrix elements of the resolvent of C(n). For a fixed family of Verblunsky coefficients {α_n}_{n∈N}, the diagonal matrix elements of the resolvent of the CMV matrix can be obtained using an explicit formula. The basic idea is to use the uniform decay of the expectations of the fractional moments of the matrix elements of the resolvent of C(n) (Lemma 4.2.5) to derive the exponential decay.
We can now repeat the same procedure for each term E(|[(C_1(n) − z)^{-1}]_{s_4,l}|^s), gaining one more factor of β at each step.
The Localized Structure of the Eigenfunctions
The eigenfunction is concentrated (has its large values) near the point m(φ_α(n)) and is small at sites far from m(φ_α(n)). This structure of the eigenfunctions will allow us to prove a decoupling property for the CMV matrix.
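Numerically, the localization centers are easy to observe (our illustration, reusing the cmv_matrix function from the sketch in the CMV section; the size N = 300 and the radius r = 1/2 are arbitrary choices):

```python
import numpy as np
# assumes cmv_matrix() from the sketch in the CMV section is in scope

rng = np.random.default_rng(7)
N = 300
a = 0.5 * np.sqrt(rng.uniform(size=N)) * np.exp(2j * np.pi * rng.uniform(size=N))
a[-1] = np.exp(2j * np.pi * rng.uniform())     # |alpha_{N-1}| = 1 makes C(N) unitary
w, U = np.linalg.eig(cmv_matrix(a))

psi = np.abs(U[:, 0])                          # one eigenfunction of C(N)
center = int(np.argmax(psi))                   # its center of localization
d = np.abs(np.arange(N) - center)
for dist in (0, 20, 40, 60):
    print(dist, np.log(psi[d >= dist]).max())  # decreases roughly linearly
```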
Decoupling the CMV Matrices
Since we are interested in the asymptotic distribution of the eigenvalues, it will suffice to study the distribution (as n → ∞) of the eigenvalues of the matrices C(N) of size N = [ln n]·[n/ln n] (≈ n, a multiple of [ln n]). In our analysis, we will start counting the rows of the CMV matrix from row 0. A simple inspection of the CMV matrix shows that, for the even-indexed Verblunsky coefficients α_{2k}, only the rows 2k and 2k+1 depend on α_{2k}.
Since, by the results of Section 5.1, the eigenfunctions of the matrix C(N) are exponentially localized (supported, up to exponentially small errors, on a set of size 2T[ln(n+1)], where henceforth T = 140/D), some of them will have their localization center near S_N(1) (the set of points where the matrix C̃(N) decouples), and the others will have their localization centers far from this set. Due to the exponential localization, any eigenfunction of the second type will produce an "almost" eigenfunction for one of the blocks of the decoupled matrix C̃(N). Therefore, if we denote by card(A) the number of elements of a set A, we obtain the corresponding counting estimate.
Since the distributions of our Verblunsky coefficients are assumed to be rotationally invariant, the distribution of the eigenvalues is rotationally invariant. We then get that the probability of the event "there are bad eigenfunctions corresponding to eigenvalues in the interval I_N" converges to 0.
Estimating the Probability of Having Two or More Eigen- values in an Interval
Let η_N : [0, 2π) → R be a continuous function with B_N(e^{iθ}) = e^{iη_N(θ)} (we will only be interested in the values of the function η_N near a fixed point e^{iθ_0} ∈ ∂D). Since the distributions of each of the random variables α_0, α_1, ..., α_{N−2} and β are rotationally invariant, for any θ ∈ [0, 2π) the random variables γ and τ(θ) are uniformly distributed. We can immediately see that γ and τ(θ) are independent. Since γ does not depend on θ, for any fixed θ_1, θ_2 ∈ [0, 2π), the random variables (∂η_N/∂θ)(θ_1) and η_N(θ_2) are independent.
We now see that for any Blaschke factor B_a(z) = (z − a)/(1 − āz), we can define a real-valued function η_a on ∂D such that B_a(e^{iθ}) = e^{iη_a(θ)}. Since B_N is a Blaschke product, we get that, for any fixed α ∈ Ω, ∂η_N/∂θ has constant (positive) sign. Also, using the same argument as in Lemma 4.1.1, we obtain the analogous estimate for any angles θ and φ.
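For a single Blaschke factor the positivity is explicit: differentiating η_a gives the Poisson-type kernel (1 − |a|²)/|e^{iθ} − a|², which is strictly positive, and summing over the factors of B_N yields the constant sign used above. A short numerical check of ours (the values of a and θ are arbitrary):

```python
import numpy as np

def eta_prime(a, theta):
    """d/dtheta of eta_a, where B_a(e^{i theta}) = e^{i eta_a(theta)}:
    the Poisson-type kernel (1 - |a|^2) / |e^{i theta} - a|^2 > 0."""
    return (1.0 - abs(a) ** 2) / np.abs(np.exp(1j * theta) - a) ** 2

a, th, h = 0.4 + 0.3j, 1.234, 1e-6
arg_B = lambda t: np.angle((np.exp(1j * t) - a) / (1.0 - np.conj(a) * np.exp(1j * t)))
print((arg_B(th + h) - arg_B(th - h)) / (2 * h))   # finite-difference derivative
print(eta_prime(a, th))                            # matches, and is positive
```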
Taking expectations and using Fubini's theorem (as we also did in Lemma 4.1.1), we obtain the corresponding bound for any angle θ_0. For any θ_1 ∈ I_N, we then obtain the desired chain of inequalities, using the independence of the random variables (∂η_N/∂θ)(θ_1) and η_N(θ_2) for the first inequality and Chebyshev's inequality for the second inequality.
Proof of the Main Theorem
This theorem shows that, as N → ∞, each of the decoupled matrices contributes at most one eigenvalue to each interval of size 1/N. Since, for the disjoint intervals I_{N,k}, 1 ≤ k ≤ [ln n], located near e^{iθ_0}, the random variables S_n(I_{N,k}) are independent, (6.2.1) now yields (1.0.16), and therefore the proof of the Main Theorem is complete.
The Case of Random Verblunsky Coefficients Uniformly Distributed on a Circle
It would be interesting to understand the statistical distribution of the zeros of the orthogonal polynomials. [Figure: a generic plot of the zeros of paraorthogonal polynomials versus the zeros of orthogonal polynomials.] We expect that the distribution of the arguments of the zeros of the orthogonal polynomials on the unit circle is also Poisson.
For example, if the Verblunsky coefficients are uniformly distributed in the interval [−1/2, 1/2], a typical plot of the zeros of the paraorthogonal polynomials shows the same behavior. This indicates that the assumption that the distribution of the Verblunsky coefficients is rotationally invariant may not be crucial for obtaining the local Poisson distribution. Numerical simulations for the case of Bernoulli distributions also suggest a local Poisson distribution for the zeros of the paraorthogonal polynomials at most points z ∈ ∂D.
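A sketch of such a simulation (ours; the degree, the number of trials, and the interval around θ_0 = 1 are arbitrary choices) builds the paraorthogonal polynomials by the Szegő recursion with real Verblunsky coefficients uniform in [−1/2, 1/2] and compares the mean and variance of the number of zero arguments falling in an interval of length 2π/n:

```python
import numpy as np

def popuc_zeros(alphas, beta):
    """Zeros of the paraorthogonal polynomial obtained by running the Szego
    recursion on alpha_0, ..., alpha_{n-2} (inside D) and beta (on the circle)."""
    phi = np.array([1.0 + 0.0j])
    for a in np.append(alphas, beta):
        phi_star = np.conj(phi[::-1])
        phi = np.append(phi, 0.0) - np.conj(a) * np.append([0.0 + 0.0j], phi_star)
    return np.roots(phi)

rng = np.random.default_rng(8)
n, trials, theta0 = 150, 200, 1.0
counts = []
for _ in range(trials):
    al = rng.uniform(-0.5, 0.5, size=n - 1)        # real: NOT rotationally invariant
    beta = np.exp(2j * np.pi * rng.uniform())
    args = np.angle(popuc_zeros(al, beta)) % (2.0 * np.pi)
    counts.append(int(np.sum(np.abs(args - theta0) < np.pi / n)))
counts = np.array(counts)
print(counts.mean(), counts.var())    # mean ~ variance suggests Poisson counts
```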
[Mo2] S. A. Molchanov, The local structure of the spectrum of the one-dimensional Schrödinger operator, Comm. Math. Phys. 78 (1981), 429-446.
[Sto] M. Stoiciu, The statistical distribution of the zeros of random paraorthogonal polynomials on the unit circle, to appear in J. Approx. Theory.