Electronic Journal of Probability
Vol. 6 (2001) Paper no. 14, pages 1–33.
Journal URL: http://www.math.washington.edu/~ejpecp/
Paper URL: http://www.math.washington.edu/~ejpecp/EjpVol6/paper14.abs.html
ORDERED ADDITIVE COALESCENT AND FRAGMENTATIONS ASSOCIATED TO LÉVY PROCESSES WITH NO POSITIVE JUMPS
Grégory Miermont
Laboratoire de Probabilités et Modèles Aléatoires, Université Paris VI, 175 rue du Chevaleret, 75013 Paris, France
Abstract
We study here the fragmentation processes that can be derived from Lévy processes with no positive jumps in the same manner as in the case of a Brownian motion (cf. Bertoin [4]). One of our motivations is that such a representation of fragmentation processes by excursion-type functions induces a particular order on the fragments which is closely related to the additivity of the coalescent kernel. We identify the fragmentation processes obtained this way as a mixing of time-reversed extremal additive coalescents by analogy with the work of Aldous and Pitman [2], and we make its semigroup explicit.
Keywords Additive coalescent, fragmentation, Lévy processes, processes with exchangeable increments.
AMS subject classification 60J25, 60G51
1 Introduction
This paper is centered on the study of the additive coalescent, which has been constructed, as part of a large class of Markovian coalescent processes, by Evans and Pitman in [12]. They describe the dynamics of a system of clusters with finite total mass, in which pairs of clusters merge into one bigger cluster at a rate given by the sum of the two masses (this is made rigorous by giving the associated Lévy system). They also present a construction of Markovian “eternal” coalescent processes on the whole real line which start from some infinitesimal masses.
One standard way to study such processes is to consider the dual fragmentation processes that split each cluster into smaller clusters. Coalescent processes are then obtained by appropriate time-reversal. Aldous and Pitman, successively in [1] and [2], have given a construction of the standard additive coalescent, which is the limit as n tends to ∞ of a coalescent process starting at time −(1/2) log n with n clusters of mass 1/n, by time-reversing a fragmentation process obtained by logging Aldous' CRT (Continuum Random Tree) by a certain family of Poisson processes on its skeleton. They also constructed more general fragmentation processes by using inhomogeneous generalizations of the CRT, and gave the exact entrance boundary of the additive coalescent. Bertoin [4, 5] gave a different approach to obtain the same fragmentation processes, by using partitions of intervals induced by some bridges with exchangeable increments (the Brownian bridge in the case of the standard additive coalescent, giving the “Brownian fragmentation”). Our goal in this paper is to investigate the fragmentation processes that can be associated to Lévy processes with no positive jumps in a way similar to the Brownian fragmentation. One main motivation for studying this “path representation” approach is that it induces a particular ordering on the coalescing clusters that is not seen directly with the CRT construction. We study this order in section 2 in the most general setting, and construct the so-called ordered additive coalescent. Our study is naturally connected to the so-called fragmentation with erosion ([2, 5]), which in turn is related to non-eternal coalescents. Then we study the Lévy fragmentation processes in sections 3 and 4 in the case where the Lévy process has unbounded variation, and we give in particular the law of the process at a fixed time t ≥ 0, which is interpreted as a conditioned sequence of jumps of some subordinators, generalizing a result in [1].
We also give in section 5 a description of the left-most fragment induced by the ordering on the real line, which is similar to [1] and [4]. Then we show that the fragmentation processes associated to Lévy processes are, up to a proper time-reversal, eternal additive coalescents, and we study the mixing of the extremal eternal additive coalescents appearing in the time-reversed process in section 6.
For this, we rely in particular on the ballot theorem (Lemma 2), which Schweinsberg [23] has also used in a similar context. We also use a generalization of Vervaat’s Theorem (Proposition 1) for the Brownian bridge, which is proved in section 7.
As a conclusion, we make some comments on the case where the Lévy process has bounded variation in section 8.
2 Ordered additive coalescent

Let us first recall the three constructions of additive coalescent processes mentioned in the introduction:
• The construction by Evans and Pitman [12] of the additive coalescent semigroup and Lévy system, when the coalescent takes values either in the set of partitions of N = {1, 2, . . .} or in S↓ = {x1 ≥ x2 ≥ . . . ≥ 0, Σ xi = 1}.
• The description of eternal additive coalescent on the whole real line R by Aldous and Pitman [1, 2], by time-reversing a fragmentation process obtained by splitting the skeleton of some continuum random tree.
• The representation by Bertoin [5] of these fragmentation processes by path transformation on bridges with exchangeable increments and excursion-type functions.
We may also mention the method of Chassaing and Louchard [8], which is quite close to Bertoin's and is based on parking schemes.
In this paper, we shall mostly focus on the third method. To begin with, we stress that in contrast to the first two constructions, the third method naturally induces a somewhat puzzling order on the fragments (which are sub-intervals of [0, 1] with total length 1). Indeed, one may wonder for instance why there should be a fragment (the “left-most fragment”, see [4]) that always coalesces by the left. We shall answer this question by constructing a more precise process we call the ordered additive coalescent, in which the initial coalescing fragments can merge in different ways (and which is not the same as the ranked coalescent of [12]). This study is naturally connected to the “fragmentation with erosion”.
2.1 Finite-state case
First recall the dynamics of the additive coalescent starting from a finite number of clusters with masses m1, m2, . . . , mn > 0 (we sometimes designate the clusters by their masses even if there may be some ambiguity). We stress that the sequence m = (m1, . . . , mn) need not be ranked in decreasing order. For each pair of indices (i, j) with i < j, let eij be an exponential r.v. with parameter mi + mj (we say that κ(x, y) = x + y is the additive coalescent kernel). Suppose that these variables are independent. At time e = inf_{i<j} eij, the clusters with masses mi0 and mj0 merge into a unique cluster of mass mi0 + mj0, where i0 < j0 are the a.s. unique integers such that e = e_{i0 j0}. Then the system evolves in the same way, and it stops when only one cluster with mass 1 remains. In the sequel, we will always suppose for convenience that m1 + m2 + . . . + mn = 1. The general case follows after a suitable change of the time scale.
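These dynamics are straightforward to simulate. The sketch below (our own helper, not part of the paper) draws the pairwise exponential clocks and merges the winning pair, repeating until a single cluster of mass 1 remains:

```python
import random

def additive_coalescent(masses, rng=random):
    """Simulate the finite-state additive coalescent.

    Each pair of clusters (i, j) carries an exponential clock of rate
    m_i + m_j; the pair whose clock rings first merges.  Returns the
    list of (time, ranked cluster masses) states until one cluster
    remains.
    """
    clusters = list(masses)
    t = 0.0
    history = [(t, sorted(clusters, reverse=True))]
    while len(clusters) > 1:
        # draw the minimum of the pairwise exponential clocks
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                e = rng.expovariate(clusters[i] + clusters[j])
                if best is None or e < best[0]:
                    best = (e, i, j)
        e, i, j = best
        t += e
        merged = clusters[i] + clusters[j]
        clusters = [m for k, m in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
        history.append((t, sorted(clusters, reverse=True)))
    return history

history = additive_coalescent([0.5, 0.3, 0.2], random.Random(1))
print(history[-1])  # final state: a single cluster of mass 1
```

By the memorylessness of exponential variables, redrawing all remaining clocks after each merge gives the same law as keeping the unexpired ones.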
These dynamics do not induce any ordering on the clusters. We now introduce a natural order which is closely related to the additive property of the coalescent kernel. Indeed, one can view each variable eij as the minimum of two independent exponential variables e^i_{ij} and e^j_{ij} with respective parameters mi and mj. The first time e of coalescence corresponds to some e^k_{ij} with k ∈ {i, j}, meaning that clusters i and j have merged at time e, and then we decide to say that cluster k has absorbed the other one. Alternatively, the system evolves as if we took n exponential variables e1, . . . , en with respective rates (n−1)m1, . . . , (n−1)mn and declared that, at time e_{i∗} = min_{1≤i≤n} ei, the cluster labeled i∗ absorbs one of the n−1 other clusters, picked uniformly at random. We are going to make this more precise in the sequel. First we introduce a more “accurate” space. Call a total order on a set M a subset O of M × M satisfying the following properties:
1. If (i, j) ∈ O and (j, k) ∈ O then (i, k) ∈ O.
2. ∀i ∈ M, (i, i) ∈ O.
3. ∀i, j ∈ M, (i, j) ∈ O or (j, i) ∈ O.
4. If (i, j) ∈ O and (j, i) ∈ O then i = j.
A partial order on M is a subset of M × M that satisfies only properties 1, 2 and 4. For any partial order O on M and any subset M′ ⊂ M, we can define the restriction of O to M′, which is the intersection of O with M′ × M′.
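When M is finite and an order is stored as a set of pairs, the axioms above can be checked mechanically; the following sketch (function names are ours) also implements the restriction to a subset M′:

```python
def is_partial_order(M, O):
    """Check properties 1, 2 and 4 for a relation O ⊂ M × M."""
    transitive = all((i, k) in O
                     for (i, j) in O for (j2, k) in O if j == j2)
    reflexive = all((i, i) in O for i in M)
    antisymmetric = all(i == j for (i, j) in O if (j, i) in O)
    return transitive and reflexive and antisymmetric

def is_total_order(M, O):
    """A total order is a partial order in which all pairs compare (property 3)."""
    total = all((i, j) in O or (j, i) in O for i in M for j in M)
    return is_partial_order(M, O) and total

def restrict(O, Mp):
    """Restriction of O to M': the intersection of O with M' × M'."""
    return {(i, j) for (i, j) in O if i in Mp and j in Mp}

M = {1, 2, 3}
O = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 3), (1, 3)}  # usual order on {1,2,3}
print(is_total_order(M, O))                           # True
print(is_total_order({1, 2}, restrict(O, {1, 2})))    # True
```

Note that the restriction of a total order is again a total order, which is the fact used repeatedly below.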
Let O∞ (resp. On, n ≥ 1) be the set of all partial orders O on N (resp. Nn = {1, . . . , n}) such that

i R j ⇐⇒ (i, j) ∈ O or (j, i) ∈ O

defines an equivalence relation. This is equivalent to saying that there exists a partition w of N (resp. Nn) such that the restrictions of O to the blocks of w are total orders, and that two integers in disjoint blocks of w are not comparable. We may also write O uniquely in the form (w, (OI)I∈w), where w is a partition of N (resp. Nn) and, for each I ∈ w, OI is a total order on I (the restriction of O to I). We call the OI's clusters, and by abuse of notation, by I ∈ O we mean that I is a cluster of O. For O in On (1 ≤ n ≤ ∞) and 1 ≤ k ≤ n, we denote by kO the cluster in which k appears.
If O is in On (n ≤ ∞), we define for any pair (I, J) of distinct clusters in O the order O^{IJ} ∈ On, which has the same clusters as O except that the clusters I and J merge into IJ, where

(i, j) ∈ IJ ⇐⇒ (i, j) ∈ I, or (i, j) ∈ J, or (i ∈ π(I) and j ∈ π(J)),

and π is the projection on the first coordinate axis. Let also
S1+,n = { m = (m1, . . . , mn) : mi > 0 ∀ 1 ≤ i ≤ n, Σ_{i=1}^n mi = 1 }

and

S1+,∞ = { m = (m1, m2, . . .) : mi > 0 ∀ i ≥ 1, Σ_{i=1}^{+∞} mi = 1 }.
Last, for n ≤ ∞, m ∈ S1+,n and O ∈ On, we call mass of cluster I ∈ O the number mI = Σ_{i∈π(I)} mi.
For n ∈ N, we now describe the dynamics of the so-called On-additive coalescent with “proto-galaxy masses” m = (m1, . . . , mn) ∈ S1+,n. Let O ∈ On be the current state of the process, and #O ≥ 2 the number of clusters. Consider n exponential r.v.'s (ek)1≤k≤n with respective parameters (mk)1≤k≤n. There is a.s. a unique k∗ such that ek∗ = min_{1≤k≤n} ek. At time ek∗/(#O − 1), which is exponential with parameter #O − 1, the cluster k∗O merges with one of the #O − 1 other clusters, I∗, picked uniformly at random. The state of the process then turns to O^{k∗O I∗}, and the system keeps evolving in the same way. These dynamics are consistent with the above: it is easily seen that two clusters I, J ∈ O merge together at rate mI + mJ. Indeed, P[k∗O = I] = P[k∗ ∈ I] = mI. We call P^m_O the law of the process with initial state O.

We say that the cluster k∗O absorbs the cluster I∗ (more generally we describe the order in every cluster I by the binary relation “i absorbs j”). When the system stops evolving, it is constituted of a single cluster O∞ with mass Σ mi, which is a total order on Nn. There is always a left-most fragment (here we call fragment any integer), min O∞, which is the fragment that has absorbed all the others. Notice that the process is increasing in the sense of inclusion of sets.
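A minimal sketch of these dynamics (all names are ours): clusters are stored as tuples read from left to right, so that the left-most label is the fragment that has absorbed every other fragment of its cluster.

```python
import random

def ordered_coalescent(masses, rng):
    """Run the ordered additive coalescent to its final total order.

    A cluster is a tuple of labels; the leftmost label is the fragment
    that has absorbed all the others in its cluster.
    """
    clusters = [(i,) for i in range(len(masses))]
    while len(clusters) > 1:
        # clock k rings first with probability m_k (rates m_k, total mass 1)
        clocks = [rng.expovariate(m) for m in masses]
        k_star = min(range(len(masses)), key=clocks.__getitem__)
        winner = next(c for c in clusters if k_star in c)
        others = [c for c in clusters if c is not winner]
        absorbed = rng.choice(others)            # uniform among the rest
        others.remove(absorbed)
        clusters = others + [winner + absorbed]  # winner's order comes first
        # (the waiting time e_{k*}/(#O - 1) is not tracked here)
    return clusters[0]

order = ordered_coalescent([0.5, 0.3, 0.2], random.Random(7))
print(order)  # a permutation of (0, 1, 2); order[0] is the left-most fragment
```

As a sanity check of the equivalence described above, in both descriptions the first merge involves a given pair {i, j} with probability (mi + mj)/(n − 1) when Σ mk = 1.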
Remarks. — This construction should be compared with Construction 3 in [12]; here, however, the system keeps the memory of the order of the coalescences by labeling the edges of the resulting tree in their order of appearance. It would be interesting to give a description of a limit labeled tree in an asymptotic regime such as in [2].
— It is immediate that, when ignoring the ordering, the evolution of the cluster masses starting at m1, . . . , mn is a finite-state additive coalescent evolution (in fact, the ordered coalescent is not so different from the classical coalescent; it only contains extra information at each of the coalescence times). If we had replaced the time ek∗/(#O − 1) of first coalescence by ek∗ above, the evolution of the clusters' masses would give the aggregating server evolution described in [5].
— One may remark that this way of ordering the clusters can be seen as a particular case of the setting of Norris [18], who studies coagulation equations by approximation with finite-state Markov processes, and where clusters may coagulate in different manners depending on their shapes. In this direction, the “shape” of a cluster is simply its order.
2.2 Bridge representation
We now give a representation of the ordered coalescent process by using aggregative server systems coded by bridges with exchangeable increments as in [5]. Let n < ∞ and m be in S1+,n. Let bm be the bridge with exchangeable increments on [0, 1] defined by
bm(s) = Σ_{i=1}^n mi (1_{Ui≤s} − s), 0 ≤ s ≤ 1, (1)

where the (Ui)1≤i≤n are independent uniform r.v.'s on [0, 1].
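The bridge (1) is elementary to sample on a grid; a short sketch (names are ours):

```python
import random

def sample_bridge(masses, n_grid=1000, rng=random):
    """Sample the exchangeable-increments bridge b^m of equation (1)
    on a regular grid of [0, 1]; returns (grid, values, jump_times)."""
    U = [rng.random() for _ in masses]
    grid = [k / n_grid for k in range(n_grid + 1)]
    values = [sum(m * ((u <= s) - s) for m, u in zip(masses, U))
              for s in grid]
    return grid, values, U

grid, values, U = sample_bridge([0.5, 0.3, 0.2], rng=random.Random(0))
print(values[0], values[-1])  # b(0) = b(1) = 0: it is indeed a bridge
```

The path has drift −Σ mi = −1 between jumps and an upward jump of size mi at each Ui, which is the structure exploited below.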
Definition 1 Let f be a bridge in the Skorohod space D([0, ℓ]), ℓ > 0, i.e. f(0) = f(ℓ) = f(ℓ−) = 0. Let xmin be the location of the right-most minimum of f, that is, the largest x such that f(x−) ∧ f(x) = inf f. We call Vervaat transform of f, or Vervaat excursion obtained from f, the function V f ∈ D([0, ℓ]) defined by

V f(x) = f(x + xmin [mod ℓ]) − inf_{[0,ℓ]} f, x ∈ [0, ℓ).
Figure 1: A càdlàg bridge and its Vervaat transform.
Throughout this paper, the functions f we will consider will be sample paths of processes with exchangeable increments that a.s. attain their minimum at a unique location, so that we could have omitted to take the largest location of the minimum in the definition. See [16] for details.
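On a discretized path, the Vervaat transform of Definition 1 amounts to a cyclic shift of the path at its (right-most) minimum followed by a vertical shift; a sketch (names are ours):

```python
def vervaat(values):
    """Vervaat transform of a discretized bridge (values[0] == values[-1] == 0).

    Cyclically shifts the path so that it starts at the right-most
    minimum, then subtracts the minimum, yielding a nonnegative
    excursion that returns to 0 at the endpoint.
    """
    n = len(values) - 1                     # values on a grid of [0, l]
    m = min(values)
    # right-most index achieving the minimum
    k = max(i for i, v in enumerate(values) if v == m)
    shifted = [values[(k + i) % n] - m for i in range(n)] + [0.0]
    return shifted

bridge = [0.0, 0.2, -0.4, 0.1, 0.0]        # toy piecewise-linear bridge
print(vervaat(bridge))                     # nonnegative, starts and ends at 0
```

This is only a grid-level approximation: for càdlàg paths the left limit f(x−) in Definition 1 matters, which a finite grid cannot fully resolve.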
Let εm = V bm be the Vervaat excursion obtained from bm, and smin the location of the infimum of bm, which is a.s. unique by [16]. Let Vi = Ui − smin [mod 1], 1 ≤ i ≤ n, be the jump times of εm. For t ≥ 0, let ε^{(t)}_m(s) = ts − εm(s), 0 ≤ s ≤ 1, and let ([ai(t), bi(t)])1≤i≤#(t) be the sequence of the intervals of constancy of the supremum process of ε^{(t)}_m, ranked in decreasing order of their lengths; let #(t) = k be their number.
Let

F(t) = (1 + t) · (b1 − a1, . . . , b#(t) − a#(t), 0, 0, . . .)

be the sequence of the corresponding lengths, renormalized by the proper constant so that their sum is 1 (that this constant equals 1 + t is a consequence of the fact that ε^{(t)}_m has slope 1 + t). Also, let F(∞) = (m1, . . . , mn), which is equal to F(t) for t sufficiently large, a.s. Then by [5] the sequence of the distinct states of (F(t))t≥0 is equal in law to the time-reversed sequence of the distinct states of the additive coalescent starting at (m1, . . . , mn) (it is in particular easy to observe that the terms of F(t) are sums of subfamilies extracted from m). On the other hand, it is easy to see that every jump time Vj, 1 ≤ j ≤ n, belongs to some [ai, bi), and that the left bounds (ai(t), 1 ≤ i ≤ #(t)) are all equal to some Vj. Hence the (Vj)1≤j≤n induce on the intervals of constancy a random order O^m_ε(t) at time t:
(i, j) ∈ O^m_ε(t) ⇐⇒ Vi and Vj belong to the same interval of constancy and Vi ≤ Vj.
Figure 2: ε^{(t)}_m and the intervals of constancy of its supremum.
We thus deduce a process (O^m_ε(t))t≥0 from the bridge (since it is independent of the jump times). We then know from Propositions 1 and 2 of [5] that (F(t^{−1}An − 1))0≤t<An (with F(+∞) = (m1, . . . , mn)) is the aggregative server system described in the second remark of section 2.1.
Therefore, let T(t) be the time-change defined as follows. For 0 ≤ k ≤ n let tk = sup{t ≥ I^{−1}. Last, for 0 ≤ t ≤ I(An − 1) we set

T(t) = An/I^{−1}(t) − 1, T(0) = ∞.
We then have the following.

Lemma 1 The process

O^m_t = O^m_ε(T(t)) if 0 ≤ t ≤ I(An − 1), O^m_t = O^m_ε(T(I(An − 1))) if t > I(An − 1),

is an ordered additive coalescent in On with proto-galaxy masses m1, . . . , mn.
Proof. The discrete state-evolution has the proper law, since given that two clusters with masses mi and mj coalesce, the probability that the mass mi is at the left of the mass mj is mi/(mi + mj), which is obtained from the calculation of Proposition 2 in [5]. This proposition also shows that (A1, . . . , An) are the n first jumps of a Poisson process with intensity 1. Together with the definition of I(t) and T(t), this implies that the time-evolution also has the appropriate law: when the system is constituted of k clusters, the time until the following coalescence is exponential with parameter k − 1.
Remark. In particular, we easily get that the final order O^m_{I(An−1)} = O^m_∞ is defined by (i, j) ∈ O^m_{I(An−1)} ⇐⇒ Vi ≤ Vj, where (Vj)1≤j≤n is a cyclic permutation of the uniformly distributed (Uj)1≤j≤n, determined by the location of the minimum of the bridge bm.
2.3 The ballot Theorem
Before examining ordered coalescents any further, we present a tool that will prove very useful in the sequel.
The most famous version of the ballot theorem is doubtless its discrete version (see Takács [24]). When we pick the balls one by one without replacement from a box containing a red balls and b green balls, a > b, the probability that the red balls are always leading is (a − b)/(a + b). There exists a continuous version, which can be deduced from the discrete one by approximation, as is done in [24], for non-decreasing processes with derivative a.s. 0 and with exchangeable increments. We give an alternative version.
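The discrete ballot probability can be verified exactly by dynamic programming over the counting paths that keep the red balls strictly ahead (a sketch using exact fractions; names are ours):

```python
from fractions import Fraction

def prob_red_always_leads(a, b):
    """Probability that red stays strictly ahead throughout the count,
    drawing without replacement from a red and b green balls (a > b).
    States: (red drawn, green drawn), kept only while red > green."""
    states = {(0, 0): Fraction(1)}
    for _ in range(a + b):
        nxt = {}
        for (r, g), p in states.items():
            rem_r, rem_g = a - r, b - g
            total = rem_r + rem_g
            if rem_r and r + 1 > g:          # draw red: lead kept
                nxt[r + 1, g] = nxt.get((r + 1, g), 0) + p * rem_r / total
            if rem_g and r > g + 1:          # draw green: lead kept
                nxt[r, g + 1] = nxt.get((r, g + 1), 0) + p * rem_g / total
        states = nxt
    return sum(states.values())

a, b = 7, 3
print(prob_red_always_leads(a, b) == Fraction(a - b, a + b))  # True
```

Exact rational arithmetic avoids any rounding, so the identity (a − b)/(a + b) is checked exactly rather than approximately.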
Let ℓ > 0. From [14] we know that every process b with exchangeable increments on [0, ℓ] with bounded variation may be represented in the form

b(x) = αx + Σ_{i=1}^∞ βi (1_{x≥Ui} − x/ℓ), 0 ≤ x ≤ ℓ, (2)

where (Ui)i≥1 is i.i.d. uniform on [0, ℓ], and α, β1, β2, . . . are (not necessarily independent) r.v.'s which are independent of the sequence (Ui)i≥1 and satisfy Σ_{i=1}^∞ |βi| < ∞ a.s. We call it the Kallenberg representation of b.
Lemma 2 (ballot Theorem) Suppose that the jumps β1, β2, . . . are negative. Then we have

P[ b(x) ≥ 0 ∀x ∈ [0, ℓ] | b(ℓ), β1, β2, . . . ] = max( b(ℓ)/(b(ℓ) − Σ_{i=1}^∞ βi), 0 ).

In particular, conditionally on Σ_{i=1}^∞ βi and b(ℓ), the event {b(x) ≥ 0 ∀ 0 ≤ x ≤ ℓ} is independent of the sequence of the jumps (β1, β2, . . .).
Proof. Denote −Σ_{i=1}^∞ βi / ℓ by Σ. For each i we define the following process on (0, ℓ):

M^i_x = 1_{Ui≤x},

so that

b(x) = (α + Σ)x + Σ_{i=1}^∞ βi M^i_x.

Let (Fx)x≥0 be the filtration generated by all the processes M^i_{(ℓ−x)−}, enlarged with the variables α and the βi's. Then the process

M^i_{(ℓ−x)−}/(ℓ − x), 0 ≤ x ≤ ℓ,

is a martingale with respect to this filtration, and if Mx = −Σ_{i=1}^∞ βi M^i_x, so is

M_{(ℓ−x)−}/(ℓ − x), 0 ≤ x ≤ ℓ,

which has Σ for starting point. Moreover, we remark that it tends to 0 at ℓ with probability 1, as a consequence of Theorem 2.1 (ii) in [15] with f(t) = t (in other words, processes with exchangeable increments and no drift have a.s. a zero derivative at 0). The hypothesis that the jumps βi are negative enables us to apply the optional sampling theorem, which thus gives that, conditionally on the βi's and α, Mx stays below (α + Σ)x on (0, ℓ) with probability α/(α + Σ). The second assertion follows.
As a consequence of this lemma, we obtain that when α = 0 a.s., b attains its minimum at a jump time. Indeed, for i ≥ 1, let vib(x) = b(x + Ui [mod ℓ]) − b(Ui) be the process obtained by splitting the bridge at Ui, modified at time ℓ so that it is continuous at this time. Since the variables Uj − Ui for j ≠ i are also uniform and independent, it is easy to see that vib is the Kallenberg bridge with jumps βj, j ≠ i, and drift coefficient βi/ℓ. Lemma 2 implies that, conditionally on βi and on Σ_{j=1}^∞ βj, it is positive (i.e. Ui is the location of the minimum of b, or equivalently vib is equal to V b) with probability βi/Σ_{j=1}^∞ βj. Since the sum of these probabilities is 1, we can conclude.
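This conclusion (Ui is the location of the minimum with probability βi/Σ βj) can be observed numerically on the representation (2) with α = 0, ℓ = 1 and finitely many negative jumps: b then increases between jumps, so its minimum is one of the post-jump values b(Ui). A seeded Monte Carlo sketch (all names are ours):

```python
import random

def argmin_jump(betas, rng):
    """For b(x) = sum_i beta_i (1{x >= U_i} - x) on [0, 1] with negative
    jumps and no drift, the minimum of b is attained at a jump time U_i;
    return the index i of that jump."""
    U = [rng.random() for _ in betas]
    def b(x):
        return sum(be * ((x >= u) - x) for be, u in zip(betas, U))
    return min(range(len(betas)), key=lambda i: b(U[i]))

betas = [-0.5, -0.3, -0.2]               # negative jumps summing to -1
rng = random.Random(42)
n = 20000
freq = [0.0, 0.0, 0.0]
for _ in range(n):
    freq[argmin_jump(betas, rng)] += 1.0 / n
print([round(f, 2) for f in freq])       # close to [0.5, 0.3, 0.2]
```

Since Σ βj = −1 here, the predicted probabilities βi/Σ βj are just the absolute jump sizes 0.5, 0.3 and 0.2.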
On the other hand, we obtain by a simple time-reversal on [0, ℓ] the same result for processes with exchangeable increments and positive jumps. Moreover, if b has positive jumps, we have the following.

Corollary 1 Conditionally on the βi's, the first jump of V b is βi with probability βi/Σ_{j=1}^∞ βj, for every i ≥ 1.

Proof. It is immediate from our discussion above when considering b(ℓ − x), which has negative jumps.
An important consequence is that the left-most fragment min O^m_∞ is an m-size biased pick from {1, . . . , n}, that is,

P[min O^m_∞ = i] = mi.
2.4 Infinite-state case: fragmentation with erosion
We now give a generalization of the previous results to additive coalescents with an infinite number of clusters. We will use approximation methods that are very close to [5], with the difference that the processes we are considering have bounded variation, which makes the approximations technically more difficult (in particular, functionals of trajectories such as the Vervaat transform are not continuous).
It is conceptually easy to generalize the construction by Evans and Pitman [12] of partition-valued additive coalescents. We follow the same approach, replacing the set of partitions P∞ by the set O∞.
We endow O∞ with a topology as in [12]: first endow the finite set On with the discrete topology. For n ≤ α ≤ ∞, call πn the function from Oα to On corresponding to the restriction to Nn. Then the topology on O∞ is the one generated by (πn)n≥1. It is a compact, totally disconnected, metrizable space, and the distance d(O, O′) = sup_{n≥1} 2^{−n} 1_{πn(O)≠πn(O′)} induces the same topology. We also denote by (Ot)t≥0 the canonical process associated to càdlàg functions on O∞. Last, for O ∈ O∞, let µm(O, ·) be the measure that places, for each pair of distinct clusters (I, J), mass mI at O^{IJ}. We wish to show that there exists, for each m ∈ S1+,∞, a family of laws (P^m_O)_{O∈O∞} such that:
• If O ∈ O∞ contains the cluster (n, n+1, n+2, . . .), with any order on it (for example, the natural order on N), for some n ≥ 1, then (πn(Ot))t≥0 is under P^m_O a On-coalescent with proto-galaxy masses m[n] = (m1, . . . , mn−1, Σ_{i≥n} mi), started at πn(O).

• Under P^m_O, the canonical process is a Feller process with Lévy system given by the jump kernel µm.

• The map (m, O) ↦ P^m_O from S1+,∞ × O∞ to the space of measures on D(R+, O∞) is weakly continuous (where we endow S1+,∞ with the usual l1 topology).
Again, we could state the result for more general m's (with finite sum ≠ 1), but this would only require an easy time-change.
We claim that these properties follow from the same arguments as in [12]. Indeed, Theorem 10, Lemma 14 and Lemma 15 there still hold when P∞ (the space of partitions of N) is replaced by the topologically very similar space O∞. We have, however, to check that the construction of a “coupled family of coalescents” as in [12] (Definition 12 and Lemmas 14 and 15 there) still exists in our ordered setting. For this we adapt the arguments of Lemma 16 there: we use the same construction with the help of Poisson measures, but we do not neglect the orientation of the edges in the random birthday trees T(Y^{n,m,O}_j) that we obtain in a similar way.
We will give a description of the O∞-additive coalescent processes with the help of bridges with exchangeable increments. Such a bridge attains its local minima by a jump, since otherwise, by exchangeability of the increments, the minimum would be attained continuously with positive probability. As a consequence, every cluster of O^m_ε(t) has a minimum, the “left-most fragment”. Denote by OSing the element of O∞ constituted of the singletons {1}, {2}, . . .
We are going to prove this theorem by using a limit of the bridge representation of the ordered additive coalescent described in section 2.2 and the weak continuity properties of P^m_O.
Lemma 3 Almost surely, we may extract a subsequence of (ε^n_m)n≥1 which converges uniformly to εm as n goes to infinity.
Proof. It is trivial that b^n_m converges uniformly to bm, since the jump times coincide (the bridges are built on the same Ui's). To get the uniform convergence of the Vervaat excursions, it suffices to show that a.s., for n sufficiently large and up to the extraction of subsequences, the location of the minimum of the bridge b^n_m coincides with that of bm. Next, independently of n, let k be sufficiently large so that

where the equality is obtained from Corollary 1. In this case we obtain that for ε > 0 and n large,

where the last equality above is also obtained from Corollary 1. From this we deduce that, up to extraction of a subsequence, U^n_∗ = U∗ for n sufficiently large.

Next we associate to each integer n an ordered additive coalescent process (O^n_t)t≥0 taking values in O∞ as follows. Let ([a^n_i(t), b^n_i(t)])1≤i≤#(n,t) be the intervals of constancy of ε^{(n,t)}_m (the process defined as ε^{(t)}_m, but for the bridge b^n_m). We know from section 2.2 how to obtain an ordered coalescent in On from b^n_m, with proto-galaxy masses m1, . . . , mn, starting from πn(OSing). This coalescent converges in law to the ordered coalescent starting from all singletons, with proto-galaxy masses m, in virtue of the weak continuity property of (P^m_O)_{m∈S1+,∞, O∈O∞}.
Proof of Theorem 1. We have a.s. that, if [a, b] is an interval of constancy for the process (ε^{(t)}_m(s))0≤s≤1, then for every s ∈ ]a, b[,

max(ε^{(t)}_m(s−), ε^{(t)}_m(s)) < ε^{(t)}_m(a).

This is proved in [5], Lemma 7. This assertion, combined with the uniform convergence in Lemma 3, implies, up to extraction of subsequences, the pointwise convergence of (a^n_i(t))i≥1 (resp. (b^n_i(t))i≥1) to (ai(t))i≥1 (resp. (bi(t))i≥1) as n goes to infinity, for every i ≥ 1. Moreover, this gives that the bi's are not equal to any of the Vj's (else the process would jump at the end of an interval of constancy of its supremum process, which would be absurd). More precisely, we have that a^n_i(t) ≤ ai(t) for n sufficiently large, for every t. Indeed, the process (ts − εm(s))0≤s≤1 jumps at ai(t) = Vj for some j, and a^n_i(t) tends to ai(t), so that if a^n_i(t) were greater than ai(t) for arbitrarily large n, a^n_i(t) would not be the beginning of an interval of constancy for ε^{(n,t)}_m.
From this we deduce that (still up to extraction) (O^n_ε(t))t≥0 converges in O∞ to (O^m_ε(t))t≥0 in the sense of finite-dimensional distributions. Indeed, it suffices to show that for every m ≥ 0 the restriction of the order O^n_ε(t) to Nm is equal to the restriction of the order O^m_ε(t) for n sufficiently large. For this it suffices to choose n such that [a^n_i(t), ai(t)) does not contain a Vj with label j ≤ m, and that such Vj's do not fall between any b^n_i(t) and bi(t). This is possible according to the above remarks. For such n the orders induced by the Vj's in each [ai(t), bi(t)], restricted to Nm, are the same.

Similarly, we have that the process (O^m_ε(t))t≥0 is continuous in probability at every time t. For this we use the fact that for every i, ai(t′) = ai(t) a.s. for t′ close to t, which is a consequence of [15], Theorem 2.1 (ii), for f(t) = t. Indeed, this theorem shows that the bridge bm has a derivative equal to −1 at 0, and hence by exchangeability bm has a left derivative equal to −1 at any jump time, since we would not lose the exchangeability by suppressing the corresponding jump.

Last, it is easily seen that the time-change Tn(t) converges in probability to e^{−t}/(1 − e^{−t}). Together with the above, this ends the proof of the first assertion of the theorem. Together with Corollary 1, we get the second one.
Remark also that the mass of the i-th heaviest cluster is given by

(bi(s) − ai(s))/(1 − e^{−t}) for s = e^{−t}/(1 − e^{−t}), (3)

and can be read directly on the intervals of constancy of εm. In the sequel we will turn our attention to these lengths of constancy intervals.
We conclude this section with a remark concerning the so-called fragmentation with erosion ([5, 2]) that typically appears in such “bounded variation” settings as the one in this part (bm has bounded variation a.s.). The “erosion” comes from the fact that the total sum of the intervals of constancy of εm is less than 1. Yet, in our study, we have shown that the erosion is deterministic, and that when we compensate it by the proper multiplicative constant, we obtain after an appropriate time-change an additive coalescent starting at time 0 (as opposed to the eternal coalescents obtained in the infinite variation case, which start from time −∞). We have not pursued this, but we believe that in the ICRT context of [2], the ICRT obtained in the equivalent “bounded variation” setting (Σ θi² = 1 and Σ θi < ∞ with the notations therein) is somehow equivalent to the birthday tree with probabilities m1, m2, . . ., so that Poisson logging on its skeleton gives a process which is somehow isomorphic to a non-eternal additive coalescent.
3 The Lévy fragmentation

In this section we make “explicit” the law of the fragmentation process at a fixed time in terms of the hitting times process of the Lévy process.
We begin by giving the setting of our study, and by recalling some properties of Lévy processes with no positive jumps and of the excursions of their reflected processes. Most of them can be found in [3]. From now on in this paper, X designates a Lévy process with no positive jumps.
3.1 Lévy processes with no positive jumps, bridges and reflected process
Let ν be the Lévy measure of X. We will make the following assumptions:
• X has no positive jumps.
• X does not drift to −∞, i.e. (together with the above hypothesis) X has first moments and E[X1] ≥ 0.
• X has a.s. unbounded variation.
We set

X^{(t)}_s = Xs + ts,

and

X̄^{(t)}_s = sup_{0≤s′≤s} X^{(t)}_{s′}

its supremum process (sometimes called “supremum” by a slight abuse). Let T^{(t)}_x be the first hitting time of x by X̄^{(t)}. Recall that the fact that X^{(t)} has no positive jumps implies that the process of first hitting times defined by T^{(t)}_x = inf{u ≥ 0, X^{(t)}_u > x} is a subordinator, and the fact that X, and hence X^{(t)}, does not drift to −∞ implies that T^{(t)} is not killed. Moreover, T^{(t)} is pure jump since X^{(t)} has infinite variation. When t = 0 we will drop the exponent (0) in the notation. The reason why we impose infinite variation is that we are going to study eternal coalescent processes; nevertheless, we make some comments on the bounded variation case in section 8. Last, we suppose for technical reasons that for all s > 0, the law of Xs has a continuous density
w.r.t. Lebesgue measure. We call its density qs(x) = P[Xs ∈ dx]/dx. For comfort, we suppose that qs(x) is bi-continuous in s > 0 and x ∈ R. This is not the weakest hypothesis that we could assume, but it makes some definitions clearer and some proofs simpler (Vervaat's Theorem, . . .). In particular, for every ℓ > 0 we may define the law of the bridge of X from 0 to 0 with length ℓ as the limit

P^{ℓ,0}_0(·) = lim_{ε→0} Pℓ[ · | |Xℓ| < ε ], (4)

where Pℓ is the law of the process X stopped at time ℓ; see [13], [16].
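For a concrete member of the class considered here, one may take X a Brownian motion with nonnegative drift (no jumps at all, unbounded variation, does not drift to −∞); the first-passage process x ↦ T_x can then be approximated by a Gaussian random walk. A sketch (names and discretization are ours):

```python
import random

def first_passage_times(levels, n_per_unit, mu, rng):
    """Approximate the first-passage process T_x = inf{u : X_u > x} for
    X a Brownian motion with drift mu >= 0, on a time grid of mesh
    1/n_per_unit.  `levels` is assumed increasing."""
    dt = 1.0 / n_per_unit
    x, t, out = 0.0, 0.0, []
    for target in levels:
        while x <= target:                # run until the level is crossed
            x += mu * dt + rng.gauss(0.0, dt ** 0.5)
            t += dt
        out.append(t)
    return out

levels = [0.1 * k for k in range(1, 6)]
times = first_passage_times(levels, 2000, 1.0, random.Random(3))
print(times)  # nondecreasing in x, as expected of a subordinator path
```

The discretization cannot, of course, reproduce the pure-jump structure of T exactly; it is only meant to visualize the increasing, jump-riddled character of x ↦ T_x.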
Let us now present some facts about reflected processes and excursion theory.
We call reflected process (below its supremum) of X the process X̄ − X. The general theory of Lévy processes gives that this process is Markov, and since X has no positive jumps, the process X̄ is a local time at 0 for the reflected process, and T is the inverse local time process. This local time enables us to apply Itô's excursion theory to the excursions of the reflected process away from 0. The excursion at level x ≥ 0 is the process

εx(s) = x − X_{Tx− + s}, 0 ≤ s ≤ Tx − Tx−,

which is nontrivial whenever Tx− < Tx.
Itô's theory implies that the excursion process is a Poisson point process: there exists a σ-finite measure n, called the excursion measure, which satisfies the Master Formula

E[ Σ_{x>0} H(Tx−(ω), ω, εx(ω)) ] = E[ ∫_0^∞ dX̄s(ω) ∫ n(dε) H(s, ω, ε) ],

where H is a positive functional which is jointly predictable with respect to its first two components and measurable with respect to the third.
Let D = inf{u > 0 : ε(u) = 0} be the death time of an excursion ε. We are going to give a “good” representation of the excursion measure conditioned by the duration, with the help of the following generalization of Vervaat's Theorem, which we will still call “Vervaat's Theorem”:

Proposition 1 (Vervaat's Theorem) Let b^ℓ_X be the bridge of X on [0, ℓ] from 0 to 0. Let Γℓ be the law of the process (V b^ℓ_X(ℓ − x))0≤x≤ℓ. Then the family (Γℓ)ℓ>0 is a regular version of the “conditional law” n(· | D) in the sense that

n(dε) = ∫_0^∞ n(D ∈ dℓ) Γℓ(dε) = ∫_0^∞ (dℓ/ℓ) qℓ(0) Γℓ(dε).

Hence, we will always refer to the time-reversed Vervaat transform of the bridge of X from 0 to 0 with length ℓ > 0 as the excursion of X below its supremum with duration ℓ. The explanation for the second equality above is in the first assertion of Lemma 7, and the proof of this proposition is postponed to section 7.
3.2 The fragmentation property
We now state a definition of what we call an (inhomogeneous) fragmentation process, following [4]. Let S↓ be the space of all decreasing positive real sequences with finite sum, and for all ℓ > 0, let S↓_ℓ ⊂ S↓ be the space of the elements of S↓ with sum ℓ. Then, for 0 ≤ t ≤ t′, consider for each ℓ an “elementary” probability measure κ_{t,t′}(ℓ) on S↓_ℓ. Next, for all L = (ℓ1, ℓ2, . . .) ∈ S↓, let L1, L2, . . . be independent sequences with respective laws κ_{t,t′}(ℓ1), κ_{t,t′}(ℓ2), . . ., and define κ_{t,t′}(L, ·) as the law of the decreasing arrangement of the elements of L1, L2, . . .

Definition 2 We call fragmentation process a (a priori not homogeneous in time) Markov process with transition kernels (κ_{t,t′}(L, dL′))_{t<t′, L∈S↓}.

Informally, conditionally on the current state of the fragmentation, a fragment splits in a way depending only on its length, independently of the others, a property which is usually referred to as the fragmentation property.
As in the Brownian case developed in [4], the process that we are now defining is a fragmentation process of therandominterval [0, T1].
Definition 3 We call Lévy fragmentation associated to $X$, and denote by $(F_X(t))_{t\ge0}$, the process such that for each $t$, $F_X(t)$ is the decreasing sequence of the lengths of the interval components of the complement of the support of the measure $\mathbf{1}_{[0,T_1]}\,d\overline{X}^{(t)}$ (in other terms, of the constancy intervals of the supremum of $X^{(t)}$ before $T_1$).
Since the support of the measure $d\overline{X}^{(t)}$ is precisely the range of the subordinator $T^{(t)}$, we see that the sum of the "fragments" at a time $t>0$ is $T_1$. Indeed, since $X^{(t)}$ has infinite variation, $0$ is regular for itself (see Corollary VII.5 in [3]), and it easily follows that the closure of the range of $T^{(t)}$ has zero Lebesgue measure.
The purpose of this section is to prove that the Lévy fragmentation is indeed a fragmentation in the sense of Definition 2. For any $\ell>0$ we consider the transition kernel $\varphi_{t,t'}(\ell)$ defined as follows:
Definition 4 For any $t'>t\ge0$, let $(\varepsilon^{(t)}_\ell(s),0\le s\le\ell)$ be the generic excursion with duration $\ell$ of the reflected process $\overline{X}^{(t)}-X^{(t)}$. We denote by $\varphi_{t,t'}(\ell)$ the law of the sequence of the lengths of the constancy intervals of the supremum process of $(s(t'-t)-\varepsilon^{(t)}_\ell(s),0\le s\le\ell)$, arranged in decreasing order.
By convention, let $\varphi_{t,t'}(0)$ be the Dirac mass at $(0,0,\ldots)$.
Remark. In fact one could define a fragmentation-type process from any excursion-type function $f$ defined on $[0,\ell]$ (that is, positive, null at $0$ and $\ell$, and with only positive jumps), deterministic or not, by declaring that $F_f(t)$ is the decreasing sequence of the lengths of the constancy intervals of the supremum process of $(st-f(s),0\le s\le\ell)$. We will sometimes refer to it as the fragmentation process associated to $f$.
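The following Python sketch (illustrative, not from the paper) computes this fragmentation numerically for a discretized excursion-type function $f$: it forms the running supremum of $s\mapsto ts-f(s)$ on a grid of $[0,\ell]$ and returns the decreasing lengths of its constancy intervals, up to the grid resolution.

```python
import numpy as np

def fragment_lengths(f, t, ell):
    """Approximate decreasing lengths of the constancy intervals of the
    running supremum of s -> t*s - f(s), for f sampled on a regular grid
    of [0, ell] (f is an array of samples with f[0] = f[-1] = 0)."""
    n = len(f)
    s = np.linspace(0.0, ell, n)
    sup = np.maximum.accumulate(t * s - f)
    lengths, run_start = [], 0
    for i in range(1, n):
        if sup[i] > sup[i - 1] + 1e-12:  # supremum increases: a fragment ends
            lengths.append(s[i] - s[run_start])
            run_start = i
    lengths.append(s[-1] - s[run_start])
    return sorted(lengths, reverse=True)

# toy excursion-type function: the initial plateau of t*s - f(s) at its
# supremum produces one macroscopic fragment of length about 0.3
s = np.linspace(0.0, 1.0, 1001)
f = np.minimum(np.minimum(s, 1.0 - s), 0.3)
frags = fragment_lengths(f, t=1.0, ell=1.0)
print(frags[0], sum(frags))
```

The fragment lengths tile $[0,\ell]$, so they sum to $\ell$; for a genuinely random excursion one would feed in a discretized sample path instead of this toy function.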
Proposition 2 The process $(F_X(t))_{t\ge0}$ is a fragmentation process with kernels $\varphi_{t,t'}(L,dL')$ ($0\le t<t'$, $L\in S^\downarrow$). In other words, for $t'>t\ge0$, conditionally on $F_X(t)=(\ell_1,\ell_2,\ldots)$, if we consider a sequence of independent random sequences $F_1,F_2,\ldots$ with respective laws $\varphi_{t,t'}(\ell_1),\varphi_{t,t'}(\ell_2),\ldots$, then the law of $F_X(t')$ is that of the sequence obtained by rearranging the elements of $F_1,F_2,\ldots$ in decreasing order.
Remarks. From the definition of $F_X$ we can see that it is a fragmentation beginning at a random state, which corresponds to the sequence of the jumps of $(T_x)$ for $x\le1$. But we can also define the Lévy fragmentation beginning at $(\ell,0,\ldots)$ by applying the transition mechanism explained in Proposition 2 to this sequence. We will denote the derived Markov process by $F_{\varepsilon_\ell}$, the fragmentation beginning from a fragment of size $\ell$, which is equal in law to the fragmentation process associated to $\varepsilon_\ell$ by virtue of Proposition 2. Even though the definition of $F_X$ is simpler than that of $F_{\varepsilon_\ell}$, we will rather study the latter in the sequel, since the behaviour of $F_X$ can be deduced from that of the $F_{\varepsilon_\ell}$'s, as we will see at the beginning of section 4.
In order to prove Proposition 2 we will use, as mentioned above, the same methods as in [4]. Since the proofs are almost the same, we will be a bit sketchy.
First we remark that a "Skorohod-like formula" holds for the supremum processes $\overline{X}^{(t)}$ and $\overline{X}^{(t')}$. This formula is at the heart of the fragmentation property.
Lemma 4 For any $t'\ge t\ge0$ we have
$$\overline{X}^{(t)}_u=\sup_{0\le v\le u}\bigl(X^{(t')}_v-(t'-t)v\bigr).\tag{5}$$
This property holds for any process $X$ which has a.s. no positive jumps; the proof of [4] applies without change.
Remark. In fact we have
$$\overline{X}^{(t)}_u=\sup_{g\le v\le u}\bigl(X^{(t')}_v-(t'-t)v\bigr)\tag{6}$$
where $u$ belongs to the constancy interval $[g,d]$ of $\overline{X}^{(t)}$ (in other terms, $g$ is the time in $[0,u]$ at which $X^{(t)}$ is maximal).
Proof of Proposition 2. Following [4], we deduce from Lemma 4 that if $\mathcal{G}_t$ is the $\sigma$-field generated by the process $\overline{X}^{(t)}$, then $(\mathcal{G}_t)_{t\ge0}$ is a filtration. Indeed, the Skorohod-like formula shows that $\overline{X}^{(t)}$ is measurable w.r.t. $\overline{X}^{(t')}$ for any $t'>t$.
Now suppose that $K$ is a $\mathcal{G}_t$-measurable positive r.v., and let us denote by $\varepsilon^{(t)}_{1,K},\varepsilon^{(t)}_{2,K},\ldots$ the sequence of the excursions accomplished by $X^{(t)}$ below its supremum before time $T^{(t)}_K$, ranked in decreasing order of duration (we call $\ell^{(t)}_{1,K},\ell^{(t)}_{2,K},\ldots$ the sequence of their respective durations). If $n^{(t)}$ is the corresponding excursion measure, and if $n^{(t)}(\ell)$ is the law of the excursion of $X^{(t)}$ below its supremum with duration $\ell$, we have the analogue of Lemma 4 in [4]: conditionally on $\mathcal{G}_t$, the excursions $\varepsilon^{(t)}_{1,K},\varepsilon^{(t)}_{2,K},\ldots$ are independent random processes with respective distributions $n^{(t)}(\ell^{(t)}_{1,K}),n^{(t)}(\ell^{(t)}_{2,K}),\ldots$.
Again, the proof is identical to [4], with the only difference that n(t)(ℓ) cannot be replaced by n(ℓ), the law of the excursion of X below its supremum with duration ℓ (which stems from Girsanov’s theorem in the case of Brownian motion).
Applying this result to $K=\overline{X}^{(t)}_{T_1}$ and using the forthcoming Lemma 5, which shows that $T^{(t)}_K=T_1$, it is now easy to see that $(F_X(t),t\ge0)$ has the desired fragmentation property and transition kernels.
4 The fragmentation semigroup
Our next task is to characterize the semigroup of the fragmentation process at a fixed time. In this direction, it suffices to characterize the semigroup of $F_{\varepsilon_\ell}(t)$ for fixed $t>0$ and $\ell>0$, since conditionally on the jumps $\ell_1>\ell_2>\ldots$ of $T$ before level 1, the fragmentation $F_X(t)$ at time $t$ comes from the independent fragmentations $F_{\varepsilon_{\ell_1}},F_{\varepsilon_{\ell_2}},\ldots$ at time $t$.
Our main result is Theorem 2, a generalization of the result of Aldous and Pitman [1] for the Brownian fragmentation. The conditioning mentioned in the statement is explained immediately below (equations (9) and (10)). Recall that $q_t(\cdot)$ is the density of $X_t$.
Theorem 2 The following assertions hold:
(i) The process $(\Delta T^{(t)}_x)_{0\le x\le t\ell}$ of the jumps of $T^{(t)}$ before the level $t\ell$ is a Poisson point process on $(0,\infty)$ with intensity measure $z^{-1}q_z(-tz)\,dz$.
(ii) For any $t>0$, the law of $F_{\varepsilon_\ell}(t)$ is that of the decreasing sequence of the jumps of $T^{(t)}$ before the level $t\ell$, conditioned on $T^{(t)}_{t\ell}=\ell$.
The first assertion is well-known, and will be recalled in Lemma 7. We essentially focus on the second assertion.
4.1 Densities for the jumps of a subordinator
We recall some results on the law of the jumps of a subordinator that can be found in Perman [19]. From now on in this paper, we will often have to use them.
We consider a subordinator $T$ with no drift and infinite Lévy measure $\pi(dz)$. We assume that the Lévy measure is absolutely continuous, with density $h(z)=\pi(dz)/dz$ continuous on $(0,\infty)$. It is then known in particular that for each level $x$, $T_x$ has a density $f$, which is characterized by its Laplace transform
$$\int_0^{+\infty}e^{-\lambda u}f(u)\,du=\exp\Bigl(-x\int_0^\infty(1-e^{-\lambda z})h(z)\,dz\Bigr).\tag{7}$$
Next, for all $v>0$, let $f_v$ denote the density at level $x$ (which is known to exist) of the subordinator $T^v$ whose Lévy measure is $h(z)\mathbf{1}_{z<v}\,dz$. It is characterized by its Laplace transform
$$\int_0^{+\infty}e^{-\lambda u}f_v(u)\,du=\exp\Bigl(-x\int_0^v(1-e^{-\lambda z})h(z)\,dz\Bigr).\tag{8}$$
We also denote by $(\Delta_i)_{i\ge1}$ the decreasing sequence of the jumps of $(T_y)_{0\le y\le x}$.
By [19], the $k$-tuple $(\Delta_1,\ldots,\Delta_k)$ admits for every $k$ a density (for $u_1>\ldots>u_k>0$):
$$\frac{P[\Delta_1\in du_1,\ldots,\Delta_k\in du_k]}{du_1\ldots du_k}=x^kh(u_1)\ldots h(u_k)\exp\Bigl(-x\int_{u_k}^\infty h(z)\,dz\Bigr)\tag{9}$$
which we denote by $p(u_1,\ldots,u_k)$.
Besides, the $(k+1)$-tuple $(\Delta_1,\ldots,\Delta_k,T_x)$ has density
$$\frac{P[\Delta_1\in du_1,\ldots,\Delta_k\in du_k,T_x\in ds]}{du_1\ldots du_k\,ds}=p(u_1,\ldots,u_k)\,f_{u_k}(s-u_1-\ldots-u_k).\tag{10}$$
The proof relies on the general fact for Poisson measures that, conditionally on the $k$ largest jumps $(\Delta_1,\ldots,\Delta_k)$, the sequence $(\Delta_i)_{i\ge k+1}$ is equal in law to the decreasing sequence of the jumps of the subordinator $T^{\Delta_k}$ before time $x$, that is, the atoms of a Poisson point process with intensity $h(z)\mathbf{1}_{z<\Delta_k}\,dz$ at time $x$. Since $T_x$ has a density, these formulas give the conditional law of the jumps of $T$ before time $x$ given $T_x=s$, for every $s>0$.
Remark. In the case of a Lévy measure $\pi$ with finite mass $a$ (compound Poisson case) admitting a density, first condition on the number $k$ of jumps in $[0,x]$ (an event of probability $e^{-ax}(ax)^k/k!$), then on the sizes of the jumps, which are independent with law $\pi/a$.
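In the compound Poisson case just described, the picture is easy to simulate. The following Python sketch (ours, with the toy Lévy measure $\pi(dz)=a\,e^{-z}\,dz$ as a hypothetical example) draws the jumps before time $x$: the number of jumps is Poisson with mean $ax$ and the sizes are i.i.d. with law $\pi/a$, i.e. standard exponentials; a Monte Carlo check then recovers $E[T_x]=ax$.

```python
import random

def subordinator_jumps(x, a=1.0, rng=random):
    """Decreasing jumps before time x of a driftless compound Poisson
    subordinator with Levy measure pi(dz) = a*exp(-z) dz (total mass a):
    Poisson(a*x) many jumps, with i.i.d. sizes of law pi/a = Exp(1)."""
    # sample the Poisson(a*x) jump count from exponential waiting times
    n, acc = 0, rng.expovariate(1.0)
    while acc < a * x:
        n += 1
        acc += rng.expovariate(1.0)
    return sorted((rng.expovariate(1.0) for _ in range(n)), reverse=True)

rng = random.Random(42)
samples = [sum(subordinator_jumps(2.0, rng=rng)) for _ in range(20000)]
mean = sum(samples) / len(samples)
print(mean)  # Monte Carlo estimate of E[T_2] = a*x = 2
```

Conditioning the list on its sum being (close to) a given value $s$ is then exactly the conditioning used in formulas (9) and (10) above.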
4.2 A useful process
For $t>0$, set $Y^{(t)}_x=x-tT^{(t)}_x$ for $x\ge0$. It is clear that $Y^{(t)}$ is a Lévy process with bounded variation and with no positive jumps. As the subordinator $T^{(t)}$ can be recovered from $Y^{(t)}$, the $\sigma$-field generated by the latter coincides with $\mathcal{G}_t$, and in particular it should be possible to deduce $(F_X(s))_{0\le s\le t}$ from it.
We begin with a lemma which is related to Lemma 7 in [4]. We denote by $\sigma^{(t)}$ the inverse of $\overline{Y}^{(t)}$; in particular, $\sigma^{(t)}$ is a subordinator.
Lemma 5 For any $t>0$, $F_X(t)$ has the law of the decreasing sequence of the jumps of $(T^{(t)}_x)$ for $x\le\overline{X}^{(t)}_{T_1}=1+tT_1$, that is, the jumps of $(-Y^{(t)}_x/t)$ for $x\le\sigma^{(t)}_1$.
Moreover, we have for any $y$,
$$\sigma^{(t)}_y=y+tT_y.\tag{11}$$
Proof. We prove the second assertion, the first one being a straightforward consequence. We have
$$\begin{aligned}
\sigma^{(t)}_y&=\inf\{z\ge0:z-tT^{(t)}_z>y\}\\
&=\inf\{\overline{X}^{(t)}_u:\overline{X}^{(t)}_u-tu>y\}\\
&=\overline{X}^{(t)}\bigl(\inf\{u\ge0:\overline{X}^{(t)}_u-tu>y\}\bigr)\\
&=\overline{X}^{(t)}\Bigl(\inf\Bigl\{u\ge0:\sup_{0\le v\le u}(X^{(t)}_v-tv)>y\Bigr\}\Bigr)\\
&=\overline{X}^{(t)}\bigl(\inf\{u\ge0:\overline{X}_u>y\}\bigr)\\
&=\overline{X}^{(t)}_{T_y}
\end{aligned}$$
where we used the fact that $\overline{X}^{(t)}_u-tu$ is non-increasing on a constancy interval of $\overline{X}^{(t)}$, the continuity of $\overline{X}^{(t)}$ (which follows from the fact that $X$ has no positive jumps), and formula (5).
We then note that
$$\sigma^{(t)}_{\overline{X}_u}=\overline{X}^{(t)}_{T_{\overline{X}_u}}=\overline{X}_u+tT_{\overline{X}_u},$$
which follows from the fact that $X_s\le\overline{X}_u$ for $s\le T_{\overline{X}_u}$, and $X^{(t)}_s\le\overline{X}_u+ts\le\overline{X}_u+tT_{\overline{X}_u}=X^{(t)}_{T_{\overline{X}_u}}$.
Applying this for $u=T_y$ we have
$$\sigma^{(t)}_y=\overline{X}^{(t)}_{T_y}=y+tT_y.$$
Remark that this last result implies that the process of first hitting times of $Y^{(t)}$ is not killed, so that $Y^{(t)}$ is oscillating or drifting to $\infty$. Moreover, from the fact that the Laplace exponents of $X^{(t)}$ and $T^{(t)}$ are inverse functions, we obtain that
$$E[T^{(t)}_1]=1/E[X^{(t)}_1]=1/(E[X_1]+t),$$
so that
$$E[Y^{(t)}_1]=\frac{E[X_1]}{E[X_1]+t},$$
and $Y^{(t)}$ oscillates if and only if $X$ does so.
Lemma 5 also shows that the information of $F_X(t)$ for fixed $t$ is (very simply) connected to that of the path of $Y^{(t)}$. Recall that $(\overline{X}_u)_{u\ge0}=(T^{-1}_u)_{u\ge0}$ is a local time for the reflected process of $X$, so that the previous lemma implies that, up to a multiplicative constant and a drift, $X$ and $Y^{(t)}$ share for all $t>0$ the same inverse local time processes.
Recall that $n$ is the excursion measure of $\overline{X}-X$ and that $V$ is the lifetime of the canonical process. If $\varepsilon$ is an excursion-type function, we denote by $\bar\varepsilon^{(t)}$ the supremum process of $ts-\varepsilon_s$. We are now able to state:
Lemma 6 The "law" under $n(d\varepsilon)$ of the decreasing lengths of the constancy intervals of $\bar\varepsilon^{(t)}$ is the same as the "law", under the excursion measure of $\overline{Y}^{(t)}-Y^{(t)}$, of the jumps of the canonical process, multiplied by $1/t$ and ranked in decreasing order. The same holds for the conditioned law $n(d\varepsilon\mid V=\ell)$ and the corresponding law of the excursion of $\overline{Y}^{(t)}-Y^{(t)}$ with duration $t\ell$.
Proof. From the above remark, to the excursion $\varepsilon_x$ of the reflected process of $X$ at level $x$ (that is, the excursion of $X$ below its supremum starting at $T_{x-}$, $\varepsilon_x(u)=x-X_{T_{x-}+u}$ for $0\le u\le T_x-T_{x-}$) we can associate the excursion $\gamma^{(t)}_x$ of $Y^{(t)}$ below its supremum, at level $x$, given by
$$\gamma^{(t)}_x(u)=x-Y^{(t)}_{u+\sigma^{(t)}_{x-}}=x-u-(x+tT_{x-})+tT^{(t)}_{u+x+tT_{x-}}\tag{12}$$
for $0\le u\le\sigma^{(t)}_x-\sigma^{(t)}_{x-}$. We underline that the random times $-tT_{x-}+T^{(t)}_{u+x+tT_{x-}}$ appearing in the formula depend only on the process $x-X$ between the times $T_{x-}$ and $T_x$, that is, on $\varepsilon_x$. Indeed, Lemma 5 implies that
$$\overline{X}^{(t)}_{T_{x-}}=x+tT_{x-},\qquad\overline{X}^{(t)}_{T_x}=x+tT_x,$$
so that the values taken by the jumps of $T^{(t)}$ between $T_{x-}$ and $T_x$ are exactly the lengths of the constancy intervals of the supremum of $(-\varepsilon_x(u)+tu)_{0\le u\le T_x-T_{x-}}$. The proof then follows.
Remarks. — More precisely, if we call $V_k$ the location of the $k$-th largest jump of $\gamma^{(t)}_x$, then the $i$-th largest constancy interval of the supremum process of $(ts-\varepsilon_x(s))_{0\le s\le T_x-T_{x-}}$ is to the left of the $j$-th one if and only if $V_i<V_j$. This is a straightforward consequence of elementary sample path properties of $X$. In particular, there a.s. exists a left-most constancy interval of the supremum process of $(ts-\varepsilon_\ell(s))_{0\le s\le\ell}$ if and only if the excursion of $Y^{(t)}$ with length $t\ell$ a.s. begins with a jump. This is to be related, of course, to section 2, but also to the forthcoming section 5.
— The last proof also implies the fact (which could easily be guessed from a picture) that if $\varepsilon$ is the excursion of $X$ below its supremum with duration 1 and if, for $0\le x\le t$, $T'_x=\inf\{u\ge0:-\varepsilon_u+tu>x\}$, then $(x-tT'_x)_{0\le x\le t}$ has the law of the excursion of $Y^{(t)}$ below its supremum with duration $t$.
4.3 Proof of Theorem 2
The last step before the proof is a lemma that gives explicit densities for the characteristics of $T^{(t)}$.
Lemma 7 The Lévy measure $\pi^{(t)}(dz)$ of $T^{(t)}$ is absolutely continuous w.r.t. Lebesgue measure, with density
$$h^{(t)}(z)=\frac{1}{z}\,q_z(-tz)\,\mathbf{1}_{z>0}.\tag{13}$$
Moreover, for every $x>0$, $P[T^{(t)}_x\in ds]$ has density
$$P[T^{(t)}_x\in ds]/ds=\frac{x}{s}\,q_s(x-st).\tag{14}$$
Proof. Since $X^{(t)}$ is a Lévy process with no positive jumps, we have the well-known result ($x,s\in\mathbb{R}_+$)
$$x\,P[X^{(t)}_s\in dx]\,ds=s\,P[T^{(t)}_x\in ds]\,dx,\tag{15}$$
see Corollary VII.3 in [3] for example. From this we deduce (14), as $q_s(x-st)$ is the density of $X^{(t)}_s$.
Next, we know from Corollary 8.8, page 45 in [22] that the Lévy measure of $T^{(t)}$ is, on $(a,\infty)$, the weak limit of $(1/\varepsilon)P[T^{(t)}_\varepsilon\in ds]$ for any $a>0$. We thus obtain (13).
Proof of Theorem 2.
From Lemma 6 we know that the law of $F_{\varepsilon_\ell}(t)$ is equal to the law of the decreasing sequence of the sizes of the jumps, divided by $t$, of the excursion of $\overline{Y}^{(t)}-Y^{(t)}$ with length $t\ell$. But Vervaat's Theorem (Proposition 1) implies that this excursion has the same law as $Vy^{(t)}_{t\ell}(t\ell-\cdot)$ where $(y^{(t)}_{t\ell}(x))_{0\le x\le t\ell}$ is the bridge with length $t\ell$ of $Y^{(t)}$ from 0 to 0. Since $T^{(t)}_{t\ell}$, and hence $Y^{(t)}_{t\ell}$, has a continuous density by Lemma 7, the law of $y^{(t)}_{t\ell}$ is defined as the limit as $\epsilon\to0$ of the law of $Y^{(t)}$ before time $t\ell$ conditioned on $|t\ell-tT^{(t)}_{t\ell}|<\epsilon$. Now since the Lévy measure of $T^{(t)}$ also has a continuous density by Lemma 7, we get that under this limiting probability, the jumps of the canonical process are the same as the jumps of $T^{(t)}/t$ conditioned on $T^{(t)}_{t\ell}=\ell$ in the sense of Perman [19]. This concludes the proof.
Notice that formulas (10), (13) and (14) make the densities of the first $k$ terms of $F_{\varepsilon_\ell}(t)$ ($k\in\mathbb{N}$) explicit in terms of $(q_t(x),t>0,x\in\mathbb{R})$.
5 The left-most fragment
In the two preceding sections, we did not specifically consider the sample path properties of the excursion-type functions $\varepsilon_\ell$ that we used to describe the Lévy fragmentation. As a link with section 2, we now study some properties of the order induced by $[0,\ell]$ on the constancy intervals of the supremum process of $(st-\varepsilon_\ell(s))_{0\le s\le\ell}$.
We can be a bit more precise than in the proof of Theorem 2 in our description of the bridge of $Y^{(t)}$. The Lévy–Itô decomposition of subordinators implies that $(T^{(t)}_x)_{0\le x\le t\ell}$ may be written in the form
$$T^{(t)}_x=\sum_{i=1}^\infty\Delta_i\,\mathbf{1}_{x\ge U_i}$$
where $(\Delta_i)_{i\ge1}$ is the sequence of the jumps of $T^{(t)}$ before time $t\ell$ ranked in decreasing order, and $(U_i)_{i\ge1}$ is an i.i.d. sequence of r.v.'s uniform on $[0,t\ell]$, independent of $(\Delta_i)_{i\ge1}$. Thus the bridge of $Y^{(t)}$ with length $t\ell$ from 0 to 0 may be written in the form $x-t\sum_{i=1}^\infty\delta_i\mathbf{1}_{x\ge U_i}$, where $(\delta_i)_{i\ge1}$ has the conditional law of $(\Delta_i)_{i\ge1}$ given $T^{(t)}_{t\ell}=\ell$. We recognize a Kallenberg bridge with exchangeable increments, zero drift coefficient, finite variation and only negative random jumps $(-t\delta_i)_{i\ge1}$.
Recall from Lemma 2 that bridges with exchangeable increments, finite variation and only positive jumps a.s. attain their minimum at a jump, so that their Vervaat transforms begin with a jump. Hence Vervaat's Theorem applied to $Y^{(t)}$, combined with the discussion after Lemma 2, gives that the excursions of $Y^{(t)}$ below its supremum a.s. start with a jump, which is a size-biased pick from the variables $(\delta_i)_{i\ge1}$. In fact, this can be proved by other means, giving an alternative proof of Vervaat's Theorem itself in the bounded variation case; see 7.2 below. Nonetheless, it is clearer in our setting to present it as a consequence of Vervaat's Theorem. We denote this first jump by $tH_\ell(t)$; according to the remark in section 4.2, $H_\ell(t)$ is equal in law to the length of the left-most constancy interval of the supremum process of $(ts-\varepsilon_\ell(s))_{0\le s\le\ell}$.
As a consequence of Corollary 1, we also have that $tH_\ell(t)$ has the law of a size-biased pick from the jumps of the bridge of $Y^{(t)}$ with length $t\ell$ from 0 to 0. Equivalently, $H_\ell(t)$ has the law of a size-biased pick from the jumps of $T^{(t)}$ before time $t\ell$, conditioned on $T^{(t)}_{t\ell}=\ell$. This was already proved by Schweinsberg in [23] in the Brownian case, by similar methods. As noticed in that article, $H_\ell(t)$ has the law of a size-biased pick without being a size-biased pick itself.
Generalizing a result of Bertoin in the Brownian case, we can do even better, that is, show that the process $(H_\ell(t))_{t\ge0}$ has the law of a size-biased marked fragment, in a sense we now make precise.
Let $U$ be uniform on $[0,\ell)$, independent of $\varepsilon_\ell$, and let $H^*_\ell(t)$ be the length of the constancy interval of the supremum process of $(ts-\varepsilon_\ell(s))_{0\le s\le\ell}$ that contains $U$. At every time $t>0$, $H^*_\ell(t)$ has the law of a size-biased pick from the elements of $F_{\varepsilon_\ell}(t)$.
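The equivalence between marking a uniform point and size-biased picking, which underlies the definition of $H^*_\ell$, is elementary and can be checked numerically. The Python sketch below (illustrative only, with a hypothetical finite set of fragments) shows that the fragment containing an independent uniform point of $[0,\sum_i\ell_i)$ is selected with probability proportional to its length.

```python
import random

def size_biased_pick(fragments, rng=random):
    """Return the fragment containing an independent uniform point of
    [0, sum(fragments)): this is exactly a size-biased pick."""
    u = rng.random() * sum(fragments)
    for length in fragments:
        if u < length:
            return length
        u -= length
    return fragments[-1]  # numerical safety for rounding at the boundary

rng = random.Random(7)
frags = [0.5, 0.3, 0.2]
counts = {f: 0 for f in frags}
for _ in range(30000):
    counts[size_biased_pick(frags, rng=rng)] += 1
print({f: round(c / 30000, 3) for f, c in counts.items()})
```

The empirical frequencies approach the fragment lengths themselves (here the total mass is 1), which is the defining property of a size-biased pick.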
Theorem 3 The processes $H_\ell$ and $H^*_\ell$ have the same law; they are both (in general time-inhomogeneous) Markov processes with transition kernel $Q$ given by (for $t\le t'$, $h'\le h$):
$$Q_{t,t'}(h,dh')=\frac{(t'-t)\,h\,q_{h'}(-t'h')\,q_{h-h'}(t'h'-th)}{(h-h')\,q_h(-th)}\,dh'.\tag{16}$$
Proof. Recall from section 3.2 that conditionally on the length $H_\ell(t)=h$ of the left-most fragment at time $t$, the law of the fragmentation starting from this fragment is $\varphi_{t,t'}(h)$. Remark also that the left-most fragment at time $t'$ comes from the fragmentation of the left-most fragment at time $t$. By the fact that $H_\ell(t)$ is a size-biased pick from the jumps of $T^{(t)}$ before $t\ell$ conditioned on $T^{(t)}_{t\ell}=\ell$, and by replacing time 0 by time $t$, time $t$ by time $t'-t$, and $X$ by $X^{(t)}$, we obtain that the left-most fragment at time $t'$, given its value $h$ at time $t$, has the law of a size-biased pick from the jumps of $T^{(t')}$ before time $(t'-t)h$, conditionally on $T^{(t')}_{(t'-t)h}=h$.
Now recall from [21] the explicit law $\rho(dz)$ of a size-biased pick from the jumps of a subordinator $T$ before a fixed time $x$, conditionally on the value $s$ of the subordinator at this time:
$$\rho(dz)=\frac{z\,x\,h(z)\,f(s-z)}{s\,f(s)}\,dz,$$
where $h$ and $f$ are respectively the (continuous) density of the Lévy measure of $T$, and the (continuous) density of the law of $T_x$. Thanks to this formula and the expressions in Lemma 7 of the density of $T^{(t')}$ and of its Lévy measure in terms of $q$, we obtain (16). Next, it is trivial that the initial states for $H_\ell$ and $H^*_\ell$ are the same (namely $\ell$). It just remains to prove that $H^*_\ell$ is a Markov process with the same transition function as $H_\ell$.
For this, let us condition on $H^*_\ell(t)=h$ at time $t$. This means that $U$ belongs to an excursion interval of $X^{(t)}$ below its supremum with length $h$. But then $U$ is uniform on this interval and independent of this excursion conditionally on its length $H^*_\ell(t)$. The value at time $t'$ of the process $H^*_\ell$ is thus by definition a size-biased pick from the fragments of the Lévy fragmentation process with initial state $(h,0,\ldots)$. It thus has the same law as $H_\ell(t')$ given $H_\ell(t)=h$, that is, precisely $Q_{t,t'}(h,dh')$.
Remark. The fact that $H_\ell(t)$ is a.s. non-zero gives in particular an interesting property of the excursions: the excursions of the reflected process of $X$ away from 0 (under the excursion measure, or with fixed duration) have an "infinite slope" at 0, in the sense that
$$\frac{\varepsilon(s)}{s}\xrightarrow[s\to0+]{}\infty.$$
Indeed, the only other possibility is that they begin with a jump ($\varepsilon(0)>0$), but this is never the case when $X$ has infinite variation. This result could also be deduced from Millar [17], who shows that $X$ "moves away" from a local maximum faster than the supremum process does from 0 at time 0.
6 The mixing of extremal additive coalescents
In this section, we associate by time-reversal a coalescent process to each Lévy fragmentation. We will show that it is a ranked eternal additive coalescent as described by Evans and Pitman [12] and Aldous and Pitman [2], where the initial random data (at time $-\infty$) depend on $X$. It is thus a mixing of the extremal coalescents of [2], and we will give the exact law of this mixing. The most natural way to identify the mixing is to use the representation of the extremal coalescents by Bertoin [5], with the help of Vervaat transforms of some bridges with exchangeable increments and deterministic jumps, noticing that the bridge of $X$ with length 1 is such a bridge, but with random jumps. We will focus on the case where the total mass is 1, so that we do not have to introduce too many time-changes.
Definition 5 Recall the definition of $F_{\varepsilon_1}$ from section 3. Call Lévy coalescent derived from $X$ the process defined on the whole real line by
$$C_{\varepsilon_1}(t)=F_{\varepsilon_1}(e^{-t}),\qquad t\in\mathbb{R}.\tag{18}$$
More generally, if $(F(t),t\ge0)$ is some fragmentation-type process, we will call $(F(e^{-t}),t\in\mathbb{R})$ the "associated coalescent".
We first recall the results of [5]. Let $l_2^\downarrow$ be the set of (non-negative) decreasing $l_2$ sequences. For $a\ge0$ and $\theta\in l_2^\downarrow$ define the bridge
$$b^{a,\theta}_s=ab_s+\sum_{i=1}^\infty\theta_i\bigl(\mathbf{1}_{s\ge U_i}-s\bigr),\qquad0\le s\le1,$$
where $b$ is a standard Brownian bridge on $[0,1]$ and the $U_i$ are, as usual, independent uniform r.v.'s on $[0,1]$. We call $b^{a,\theta}$ the Kallenberg bridge with jumps $\theta$ and Brownian bridge component $ab$ (it is a particular case of the general representation of bridges with exchangeable increments, see [14]).
Let $(\vartheta_i)$ be a decreasing positive sequence such that $\sum\vartheta_i^2\le1$ (we call $l_2^{1,\downarrow}$ the corresponding space), and
$$\varsigma=\sqrt{1-\sum_{i=1}^\infty\vartheta_i^2}.\tag{19}$$
Consider the fragmentation $F^\vartheta(t)$ associated to the excursion $Vb^{\varsigma,\vartheta}$ (it consists of the lengths of the constancy intervals of the supremum process of $ts-Vb^{\varsigma,\vartheta}(s)$). Let $C^\vartheta(t)=F^\vartheta(e^{-t})$ be the associated coalescent process. Then ([5], Theorem 1 and [2], Theorems 10 and 15) it is an extreme eternal additive coalescent process, the mapping
$$\vartheta\in l_2^{1,\downarrow}\longmapsto C^\vartheta$$
is one-to-one, and every extreme eternal additive coalescent (where the total mass of the clusters is 1) can be represented in this way up to a deterministic time-translation. We call $P^\vartheta$ its law, and for $t_0\in\mathbb{R}$ we denote by $P^{\vartheta,t_0}$ the law of the time-translated coalescent $(C^\vartheta(t-t_0),t\in\mathbb{R})$. In this way, the law of any extreme eternal additive coalescent is of this form.
We now denote by $\sigma$ the Gaussian component of $X$. For $\theta\in l_2^\downarrow$, let $k(\theta)\ge0$ be such that
$$k(\theta)^2\sigma^2=1-k(\theta)^2\sum\theta_i^2,$$
that is,
$$k(\theta)=\frac{1}{\sqrt{\sigma^2+\sum_{i=1}^\infty\theta_i^2}}.$$
Remark that $k(\theta)\theta$ is in $l_2^{1,\downarrow}$ and that $k(\theta)\sigma=\sqrt{1-\sum_{i=1}^\infty(k(\theta)\theta_i)^2}$ is the corresponding "$\varsigma$".
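The normalization $\theta\mapsto(k(\theta)\theta,\log k(\theta))$ is a one-line computation; here is a small Python sketch (illustrative only, with a hypothetical numerical input) checking that $\varsigma^2+\sum_i\vartheta_i^2=1$ for the renormalized sequence.

```python
import math

def normalize(theta, sigma):
    """Given a decreasing square-summable sequence theta and a Gaussian
    coefficient sigma, return k(theta), vartheta = k*theta, t0 = log k
    and varsigma = k*sigma, so that varsigma**2 + sum(vartheta_i**2) = 1."""
    k = 1.0 / math.sqrt(sigma ** 2 + sum(t * t for t in theta))
    vartheta = [k * t for t in theta]
    return k, vartheta, math.log(k), k * sigma

k, vartheta, t0, varsigma = normalize([0.5, 0.25], 1.0)
total = varsigma ** 2 + sum(v * v for v in vartheta)
print(total)  # equals 1.0 up to rounding
```

This is exactly the map under which the law $\Theta_X$ of the bridge jumps is pushed forward below.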
Now the bridge $b_X$ of $X$ from 0 to 0 with length 1 has exchangeable increments, and as such the ranked sequence of its jumps is a random element of $l_2^\downarrow$ (see Kallenberg [14]). Let $\Theta_X(d\theta)$ be its law. It is not difficult to see that if $\theta^*$ has law $\Theta_X(d\theta)$, then $b_X(1-\cdot)$ has the same law as $b^{\sigma,\theta^*}$. Let $\widetilde\Theta_X(d\vartheta,dt_0)$ be the image of $\Theta_X(d\theta)$ under the mapping
$$\theta\mapsto(\vartheta,t_0)=(k(\theta)\theta,\log k(\theta)).$$
Proposition 3 The Lévy coalescent $C_{\varepsilon_1}$ associated to $X$ is an additive coalescent, and its law is given by the mixing
$$\int_{(\vartheta,t_0)\in l_2^{1,\downarrow}\times\mathbb{R}}P^{\vartheta,t_0}(\cdot)\,\widetilde\Theta_X(d\vartheta,dt_0).\tag{20}$$
Proof. Consider the fragmentation $F_{\varepsilon_1}$. It is associated to $Vb_X(1-\cdot)$, which is equal in law to $Vb^{\sigma,\theta^*}$ where $\theta^*$ has law $\Theta_X$. Hence for $t\ge0$, $F_{\varepsilon_1}(t)$ has the law of the lengths of the constancy intervals of the supremum process of $(ts-Vb^{\sigma,\theta^*}(s),0\le s\le1)$. This last process is equal to
$$\frac{1}{k(\theta^*)}\Bigl(k(\theta^*)ts-Vb^{k(\theta^*)\sigma,k(\theta^*)\theta^*}(s)\Bigr),$$
so that the supremum processes of $(ts-Vb^{\sigma,\theta^*}(s),0\le s\le1)$ and of $(k(\theta^*)ts-Vb^{k(\theta^*)\sigma,k(\theta^*)\theta^*}(s),0\le s\le1)$ share the same constancy intervals. Hence, by definition, $F_{\varepsilon_1}(t)=F^{k(\theta^*)\theta^*}(k(\theta^*)t)$, and this means that the associated coalescent is $C^{k(\theta^*)\theta^*,\log k(\theta^*)}$. The law of $C_{\varepsilon_1}$ is thus
$$\int_{\theta\in l_2^\downarrow}P^{k(\theta)\theta,\log k(\theta)}(\cdot)\,\Theta_X(d\theta)\tag{21}$$
and we conclude by a change of variables.
Remark. We stress that, according to whether the Lévy measure of $X$ integrates $|x|\wedge1$ or not, the typical mixings that appear are not the same: the configurations where $\sum\theta_i=\infty$ have no "weight" in the first case, whereas $\sum\theta_i<\infty$ does not happen in the second. Moreover, under some additional hypotheses on $X$ (e.g. that its Lévy measure $\nu(du)$ is absolutely continuous with respect to Lebesgue measure, and that the Lévy process with truncated Lévy measure $\mathbf{1}_{u\le x}\nu(du)$ has densities), one can make the law $\Theta_X$ more "explicit" by the same arguments of conditioned Poisson measures as in section 4.1 above.
7 Proof of Vervaat's Theorem
We are going to give two proofs of Proposition 1, the first one being quite technical and essentially devoted to the unbounded variation case, since we have not found how to prove it with simple arguments; of course this proof also applies in the bounded variation case. The second proof only works for Lévy processes with bounded variation, but uses only tools that are directly connected to this work, such as the ballot theorem.
7.1 Unbounded variation case
Let $(\mathcal{F}^0_t)_{t\ge0}$ be the natural filtration on the space of càdlàg functions on $\mathbb{R}_+$. Let $\widehat P$ be the law of the spectrally positive Lévy process $\widehat X=-X$. Without risk of ambiguity, $\widehat X$ will also denote the canonical process on $D([0,\infty))$. Recall from (4) the definition of the law $P^t_{0,0}$ of the bridge of $X$ with length $t>0$ starting and ending at 0. Let $P^t$ be the law of the process $X$ killed at time $t$, and $(\mathcal{F}_t)_{t\ge0}$ be the $P$-completed filtration.
Let also $P^J=P^{\tau_{J^c}}$ where $\tau_{J^c}=\inf\{s\ge0,X_s\notin J\}$ for any interval $J$. Recall that $n$ is the excursion measure of the reflected process $\overline X-X=\widehat X-\widehat{\underline X}$ where $\widehat{\underline X}_t=\inf_{0\le s\le t}\widehat X_s$. Since $\widehat X$ oscillates or drifts to $-\infty$, every excursion of the process has a finite lifetime $D$. Let $n^u$ be the measure associated to the excursion killed at time $u\wedge D$. Remark that the measure $n(\cdot,t\le D)$ is a finite measure, with total mass $\pi((t,\infty))$ where $\pi$ is the Lévy measure of the subordinator $(\widehat T_{-y})_{y\ge0}$ (where $\widehat T_x=\inf\{s\ge0,\widehat X_s=x\}$). We already saw that the inverse local time process of $\widehat X-\widehat{\underline X}$ is $(\widehat T_{-y})_{y\ge0}$, and that it has Lévy measure $q_v(0)\,dv/v$.
The proof we give is close to the method used by Biane [7] for Brownian motion and by Chaumont [10] for stable processes. It involves a path decomposition of the trajectories of $\widehat X$ under $P^t$ at its minimum. We will first need the following result (see [9]), which is an application of Maisonneuve's formula. Chaumont stated the result only for oscillating Lévy processes, but the proof applies without change to processes drifting to $-\infty$.
Let $k_t$ be the standard killing operator at time $t$, $\zeta$ the lifetime of the canonical process, and $\theta_t$ the shift operator. Here $H$ and $H'$ denote positive measurable functionals, which can be taken of the form $H=\mathbf{1}_{\{t_n<\zeta\}}f_1(X_{t_1})\cdots f_n(X_{t_n})$ with $0\le t_1<\ldots<t_n$ and the $f_i$ measurable. Let $H$ and $H'$ be continuous measurable positive bounded functionals (of the form above with the $f_i$ continuous, positive and bounded) such that $H\circ k_u$ is integrable w.r.t. the measure $n(d\omega)\,du\,\mathbf{1}_{u\le D}$, and let $f$ be a continuous positive real function with compact support which does not contain 0. We then have, by dominated convergence and by the definition of $P^t_{0,0}$:
Moreover, the family of probability measures $(P^{(y,\infty)})_{y<0}$ is weakly continuous, in the sense that for any continuous bounded functional $F$, $P^{(y,\infty)}(F)$ is continuous in $y$. This follows from the a.s. continuity of the subordinator $(\widehat T_{-y})_{y\ge0}$ at a fixed $y$. The continuity of $H$ and $H'$, of $q_x(s)$ and of the killing operator thus allows us to identify the limit under study, so that
$$\int_0^\infty\frac{dv}{v}\,q_v(0)f(v)\,n\Bigl(\int_0^Ddu\,H\circ k_u\,H'\circ\theta_u\Bigm|D=v\Bigr)=n\Bigl(\int_0^Ddu\,H\circ k_u\,E^{(0,\infty)}_{\omega_u}\bigl[H'f(u+\zeta)\bigr]\Bigr).$$
By comparing this with (22) we obtain the desired result: by exchanging and concatenating the pre- and post-minimum processes of the bridge with length $t$, we obtain a regular version of the conditional "law" $n(\cdot\mid D)$, such that the concatenation point $u$ is uniform on the interval $(0,D)$ (this also gives a way to recover the law of the bridge by splitting the excursion at an independent uniform point). This ends the proof.
Remark. Recently, Chaumont [11] has given a generalization of the Vervaat transformation by constructing processes with cyclically exchangeable increments conditioned to spend a fixed time $s>0$ below 0. This conditional law converges as $s$ goes to 0, and would give a (certainly more general!) proof of Vervaat's Theorem if one could precisely and properly identify the limit law ("process conditioned to spend time 0 below 0") as that of the excursion.
7.2 Bounded variation case
There is a more natural way to approach Vervaat's Theorem in the setting of this paper. Let $Y_t=ct-\tau_t$ be a spectrally negative Lévy process with bounded variation, where $c$ is a positive constant and $\tau$ is a strict subordinator without drift (which means that it is pure-jump and not killed). Without loss of generality we suppose that $c=1$. Let $\overline Y$ be the supremum process. We suppose that $Y$ oscillates or drifts to $+\infty$ (that is, $\overline Y_\infty=+\infty$ a.s.), which happens if and only if $1\ge E[\tau_1]$. We suppose also that the Lévy measure $\kappa(dz)$ of $\tau$ and the law of $\tau_t$ for any $t>0$ have continuous densities, $\kappa(dz)=h(z)\,dz$ and $P[\tau_t\in ds]/ds=p_t(s)$, so that we may apply our results on the densities of the jumps of this subordinator.
Let us study the law of the excursions of the reflected process of $Y$. Let $P^\dagger_u(\cdot)$ be the law of the process $(u-Y_x)_{0\le x\le\zeta}$ killed at the time $\zeta$ when it reaches 0. Since the process $Y$ oscillates or drifts to $+\infty$, we have $\zeta<\infty$ a.s. The following description then holds:
Proposition 4 An excursion of $Y$ below its supremum a.s. begins with a jump, and its Itô excursion measure is given by
$$\widetilde n(\cdot)=\int_{\mathbb{R}_+}P^\dagger_u(\cdot)\,h(u)\,du.\tag{23}$$
In other words, such an excursion begins with a jump with "law" $h(u)\,du$ and then evolves as the dual process of $Y$ killed when it reaches 0.