
streaming data


Academic year: 2023

Full text

Introduction

Causal frequency-constrained sampling

In the basic causal frequency-constrained sampling setup (see Figure 1.1), the sampler (i.e., the transmitting node) tracks the source process {Xt} and causally determines the sequence of sampling times τ1, τ2, … This work also motivates further research into signal-dependent sampling policies in causal frequency-constrained sampling settings.

Causal lossy data compression

A fundamental limit of the causal rate-constrained sampling problem is the distortion-rate function, which quantifies the smallest end-to-end estimation distortion compatible with the communication rate (i.e., the average number of bits sent per second). In the basic setup of quantized event-triggered control (see Figure 1.4), the encoder observes the state process Yt driven by the noise Xt and causally determines the sampling times τ1, τ2, …

Figure 1.2: Causal compressing of streaming source symbols.

Causal joint source-channel coding with feedback

Using the form of the sampling policy (2.9), we show that the MMSE estimation policy (2.2) simplifies as follows. We go on to show that the complexity of the type-based instantaneous SED code is O(t log t).

Figure 1.5: Communication over a channel with full feedback.

Causal frequency-constrained sampling

Introduction

For a fixed-delay channel, we show that the optimal causal sampling policy for a zero-delay channel remains optimal. We show that the optimal causal sampling policy samples the source process whenever the process innovation exceeds one of two symmetric thresholds.

Problem statement

See Appendix A.1 for a sufficient condition on the stochastic process under which the optimal sampling policy satisfies (S.2). To quantify the trade-off between the sampling frequency (2.3) and the MSE (2.4), we introduce the distortion-frequency function.

Optimal causal frequency-constrained sampling

On the infinite time horizon, the optimal causal sampling policy for time-homogeneous continuous Markov processes satisfying assumptions (P.1)–(P.3) in Section 2.2 is a symmetric threshold sampling policy. We provide an alternative method to find the optimal sampling policy for the OU process in Appendix A.7 using (2.13).
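The symmetric threshold policy can be made concrete with a short simulation. The sketch below is my own illustration, not the thesis's construction: the source is approximated by a discrete-time random walk with unit-variance Gaussian increments, and the function and parameter names are assumptions. It compares threshold sampling against uniform sampling, each with a zero-order-hold estimate at the receiver.

```python
import random

def simulate(policy, steps, interval=None, theta=None, seed=0):
    """Simulate causal sampling of a discrete-time Wiener-like process.

    policy: "threshold" samples when |X_t - last_sample| >= theta;
            "uniform" samples every `interval` steps.
    Returns (average sampling frequency, empirical MSE of the
    zero-order-hold estimate, i.e. the most recent sample)."""
    rng = random.Random(seed)
    x = 0.0        # source process value
    last = 0.0     # most recent sample = decoder's estimate
    n_samples = 0
    sq_err = 0.0
    for t in range(1, steps + 1):
        x += rng.gauss(0.0, 1.0)          # unit-variance increment
        take = (abs(x - last) >= theta) if policy == "threshold" \
               else (t % interval == 0)
        if take:
            last = x
            n_samples += 1
        sq_err += (x - last) ** 2
    return n_samples / steps, sq_err / steps
```

With theta = 3, the threshold sampler takes a sample roughly once per θ² = 9 steps or fewer, yet its MSE stays well below that of a uniform sampler at interval 9, illustrating why signal-dependent sampling outperforms signal-independent sampling.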

Figure 2.2: Symmetric threshold sampling of the Wiener process Wt. The black curve represents the Wiener process …

Successive refinement via causal frequency-constrained sampling

We show that the optimal causal sampling policy for the continuous Lévy process Xt = cWat + bt (2.17) remains the symmetric threshold sampling policy in (2.18). We then show that the DSTF is lower bounded by a minimization problem whose minimum is achieved by the causal sampling policy in Theorem 4.

Figure 2.4: System model for causal frequency-constrained sampling over a packet-drop channel with feedback.

Conclusion

Dropping the assumption that the channel is noiseless, we derive the optimal causal sampling policy for a continuous Lévy process over a packet-drop channel with feedback. It transmits new samples according to a symmetric threshold sampling policy, but with a threshold larger than that for a noiseless channel.

Future research directions

We assume that the source symbols of the DSS are equiprobable, i.e., the source distribution satisfies (4.1). We show that the complexity of the type-based instantaneous encoding phase is log-linear, i.e., O(t log t), at times t = 1, 2, …

Causal rate-constrained sampling

Introduction

The SOI code also applies to successive refinement in the causal rate-constrained sampling setting. For the binary erasure channel (BEC), we show that the optimal causal rate-constrained code essentially performs optimal causal frequency-constrained sampling.

Problem statement

The causal sampling policy, defined in Definition 1, determines the stopping times (2.1) at which codewords are generated. The DRF for causal rate-constrained sampling of the process {Xt}Tt=0 is the minimum MSE achievable by causal rate-R codes.

Optimal causal rate-constrained sampling

Furthermore, the optimal decoding policy depends only on the thresholds of the sampling policy and the sampling timestamps. Thus, finding the optimal causal code reduces to finding the optimal causal sampling policy.
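This decoder-side simplicity can be sketched in code. The toy sign-of-innovation (SOI) style scheme below is a hedged illustration of the idea, not the thesis's exact construction: discrete time, unit-variance Gaussian increments, and names of my own. The decoder reconstructs its estimate from nothing but the threshold value, the bit signs, and the times at which bits arrive.

```python
import random

def soi_encode_decode(steps, theta, seed=0):
    """Toy SOI-style 1-bit code: whenever the innovation X_t - X̂_t
    reaches the threshold theta, the encoder sends only its sign, and
    the decoder, knowing theta and the sampling time, moves its
    estimate by exactly ±theta."""
    rng = random.Random(seed)
    x = 0.0      # source: discrete-time Wiener-like process
    x_hat = 0.0  # decoder estimate, driven only by the 1-bit codewords
    bits = []    # transmitted (time, sign) pairs
    for t in range(1, steps + 1):
        x += rng.gauss(0.0, 1.0)
        # In discrete time one increment may jump past several
        # thresholds; send one bit per crossing (a discretization
        # artifact absent in the continuous-time model).
        while abs(x - x_hat) >= theta:
            sign = 1 if x > x_hat else -1
            bits.append((t, sign))
            x_hat += sign * theta
    return x, x_hat, bits
```

Note that the decoder's final estimate equals theta times the running sum of transmitted signs, which is exactly the "thresholds and timestamps only" property stated above.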

Fig. 2.2 in Section 2.3 shows the SOI encoding policy for the Wiener process.

Optimal causal rate-constrained deterministic sampling

The minimization problem (3.24b) in DDET(R) is the causal IDRF for the discrete-time stochastic process formed by the samples. Similar to (3.24b), the optimization problem in (3.26b) is a causal IDRF for a Gauss-Markov (GM) process formed by the samples, but the rate in (3.26b) is the rate per sample Rs rather than the rate per second R in (3.24b).

Fig. 3.3 displays the distortion-rate tradeoffs obtained in Theorems 5 and 6 for the Wiener process, as well as a numerical simulation of the uniform sampler in Theorem 6 with the greedy Lloyd-Max quantizer.
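The greedy Lloyd-Max quantizer mentioned above can be sketched for the simplest 1-bit case. This is a generic Lloyd-Max fixed-point iteration on empirical data, my own minimal version rather than the thesis's simulation; the names and sample sizes are assumptions.

```python
import math
import random

def lloyd_max_2level(samples, iters=20):
    """Greedy Lloyd-Max iteration for a 1-bit quantizer: alternate
    between the nearest-level partition and the conditional-mean
    (centroid) update of the two reproduction levels."""
    lo, hi = -1.0, 1.0
    for _ in range(iters):
        boundary = 0.5 * (lo + hi)        # optimal cell boundary
        below = [s for s in samples if s < boundary]
        above = [s for s in samples if s >= boundary]
        if below and above:
            lo = sum(below) / len(below)  # centroid of the lower cell
            hi = sum(above) / len(above)  # centroid of the upper cell
    return lo, hi

rng = random.Random(0)
gauss_samples = [rng.gauss(0.0, 1.0) for _ in range(50000)]
lo, hi = lloyd_max_2level(gauss_samples)
# For a standard normal input, the optimal 1-bit levels are ±sqrt(2/pi).
```

On standard normal data the iteration converges to levels near ±sqrt(2/π) ≈ ±0.798, the well-known optimal 1-bit quantizer for a Gaussian.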

Rate-constrained control

In the rate-constrained control system, the optimal coding policy that minimizes the mean-square cost in (3.34) is the SOI code in Theorem 5, as is the optimal control signal. According to the optimal control policy in Proposition 2, the optimal encoder does not rely on the control signal to determine the codeword generation times.

Figure 3.4: Control system.

Successive refinement via causal rate-constrained sampling

To quantify the trade-offs between the rates Rn and the MSEs dn, we introduce the distortion-rate region. The codewords and sampling times received by the k-th decoder are identical to those generated by the SOI code at rate Rk.

Rate-constrained sampling over imperfect channels

The decoding policy in (3.50) may be suboptimal since it ignores the knowledge implied by the sampling times of the erased bits. Converse: in Appendix B.10, we show that the DSTF for a BEC, DBEC(n, T), is lower bounded by the DSTF for a packet-drop channel, Dpd(n, T), i.e., (3.54). Achievability: the causal encoding policy in Theorem 8 together with the decoding policy in (3.50) achieves the converse bound Dpd(n, T), since the decoder can noiselessly recover the samples at the sampling times from the successfully received 1-bit codewords, i.e., X̂tBEC = X̄tpd.

Figure 3.5: System model for causal rate-constrained sampling over a BEC with feedback.

Delay-tolerant rate-constrained sampling

In the extreme case, the encoder and the decoder can wait until the end of the Wiener process {Wt}Tt=0 before encoding. We show that a one-sample look-ahead decoder significantly reduces the MSE compared to the DRF of the Wiener process under causal real-time estimation.
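The benefit of look-ahead can be sketched numerically. For a Wiener process, the conditional mean given both the previous and the next sample is the Brownian-bridge mean, i.e., linear interpolation between the two samples. The toy comparison below (uniform sampling, discrete time, parameter choices mine) contrasts this with the causal zero-order hold.

```python
import random

def hold_vs_lookahead(steps, interval, seed=0):
    """Compare two decoders for a uniformly sampled discrete-time
    Wiener-like process: the causal zero-order hold X̂_t = X_a (last
    sample), and a one-sample look-ahead decoder that also sees the
    next sample X_b and outputs the Brownian-bridge mean, i.e. the
    linear interpolation between X_a and X_b."""
    rng = random.Random(seed)
    path = [0.0]
    for _ in range(steps):
        path.append(path[-1] + rng.gauss(0.0, 1.0))
    times = list(range(0, steps + 1, interval))
    se_hold = 0.0
    se_interp = 0.0
    count = 0
    for a, b in zip(times, times[1:]):
        for t in range(a, b):
            frac = (t - a) / (b - a)
            bridge = (1 - frac) * path[a] + frac * path[b]
            se_hold += (path[t] - path[a]) ** 2
            se_interp += (path[t] - bridge) ** 2
            count += 1
    return se_hold / count, se_interp / count
```

With sampling interval n, the hold MSE averages about (n - 1)/2 per-step variances while the bridge MSE averages about n/6, roughly a threefold reduction for large n.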

Conclusion

Future research directions

In Section 4.5, we present the algorithm of the instantaneous SED code for a symmetric binary-input DMC. The slope of the curves corresponds to the anytime reliability α (4.18) of the instantaneous SED code.

Causal joint source-channel coding with feedback

Introduction

Using the instantaneous SED code, we derive the JSCC reliability function for a streaming source whose symbol arrival times have bounded randomness. Nevertheless, we have shown that a sequence of instantaneous SED codes indexed by the symbol sequence length k achieves the JSCC reliability function for streaming via Gallager symmetry [84].

Problem statement

Next, we define a code with instantaneous encoding designed to recover the first k symbols of the DSS at a rate of R symbols per channel use and error probability ϵ. For DSS transmission over a non-degenerate DMC with noiseless feedback via a code with instantaneous encoding, we define the streaming JSCC reliability function.

Figure 4.1: Real-time feedback communication system with a streaming source.

Instantaneous encoding phase

At each time, the encoder and the decoder first update the priors θi(yt−1) for all i ∈ [q]N(t) (4.23). Each time the priors are updated, the encoder and the decoder partition the message alphabet [q]N(t) into |X| disjoint groups {Gx(yt−1)}x∈X such that (4.24) holds for all x ∈ X.
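The partitioning step can be sketched with a simple greedy heuristic: assign messages, largest prior first, to whichever group is furthest below its target probability. This is only an illustrative stand-in for a partition in the spirit of (4.24), not the thesis's rule; the function and variable names are mine.

```python
def partition_messages(priors, target):
    """Greedily split message indices into len(target) groups so that
    each group's total prior is close to the target probability of the
    channel input it will be mapped to (e.g. the capacity-achieving
    distribution)."""
    groups = [[] for _ in target]
    mass = [0.0] * len(target)
    # Largest priors first; each goes to the group with the largest
    # remaining deficit (target minus accumulated mass).
    for idx in sorted(range(len(priors)), key=lambda i: -priors[i]):
        g = max(range(len(target)), key=lambda j: target[j] - mass[j])
        groups[g].append(idx)
        mass[g] += priors[idx]
    return groups, mass

priors = [0.3, 0.2, 0.15, 0.1, 0.1, 0.05, 0.05, 0.05]
target = [0.25, 0.25, 0.25, 0.25]   # uniform capacity-achieving P*_X
groups, mass = partition_messages(priors, target)
```

The group masses land within the largest single prior of the targets, which is the best any partition of indivisible priors can guarantee in general.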

Figure 4.5: An example of group partitioning and channel input randomization for a DMC with uniform capacity-achieving distribution P∗X(x) = 0.25, X = [4].

Joint source-channel coding reliability function

For a (fully accessible) DS, the JSCC reliability function (4.38) is achievable by the MaxEJS code [45]. Since a (fully accessible) DS (4.3) is a special DSS, Theorem 9 recovers the JSCC reliability function (4.38) for a fully accessible source.

Instantaneous SED code

We then determine which unstable scalar linear systems can be stabilized by the instantaneous SED code. We proceed to exhibit an unstable scalar linear system that can be stabilized by the instantaneous SED code.

Figure 4.6: The error probability P[Ŝtk ≠ Sk] of decoding the first k symbols of a DSS at time t achieved by the type-based instantaneous SED code (Section 4.7).

Streaming with random arrivals

We derive the JSCC reliability function for random streaming, Ẽ(R) (4.54), using the instantaneous SED code for random arrivals. Although the instantaneous SED code for random arrivals attains Ẽ(R), in fact any coding strategy at times t = 1, 2, …

Low-complexity codes with instantaneous encoding

Since the type update in step (i) does not add new types, the number of types increases only due to the splitting of types during clustering in step (iii). We generalize the type-based instantaneous SED code for deterministic arrivals in Section 4.7 to stochastic arrivals.

Figure 4.8: Tables (a), (b), (c) represent three types at time t. Each row represents a source sequence in the type.

Simulations

This suggests that assumption (b′) on the symbol arrival rate, which is sufficient for the instantaneous SED code to attain E(R), might be conservative. The type-based instantaneous SED code in Section 4.7 and the instantaneous SED code in Section 4.5 operate on k i.i.d. …

Figure 4.9: Rate Rk (symbols per channel use) vs. source length k. The error probability is constrained by ϵ = 10−6 (4.15).

Streaming over a degenerate DMC with zero error

In the first block, the communication phase uses a ⟨k, R, ϵk⟩ Shannon-limit-achieving code with instantaneous encoding and common randomness. A block-encoding stop-feedback code that retransmits blocks at a total rate asymptotically equal to that of the first block is also used by Forney [105].

Conclusion

For a DSS whose symbol arrival times have bounded randomness, we derive the JSCC reliability function for random streaming (Theorem 11) using the instantaneous SED code. It is equal to the JSCC reliability function for deterministic streaming, meaning that the instantaneous SED code performs as if the decoder knew the random arrival times.

Future research directions

We denote by … the error probability of the binary hypothesis test at the stopping time T. Finally, we show that the convergence of the source priors implies the convergence of the entropy sequence, which concludes (C.41).

Conclusion

Sufficient condition for (S.2)

We fix an arbitrary realization {Xs}rs=0 = x, which leads to τk′ = r, and construct the sampling decision process {Pt}Tt=r. The process {Pt}Tt=r (A.8) belongs to Π[r,T], since it samples the input process as if it had observed {Xs}rs=0 = x, regardless of the actual realization of {Xs}rs=0.

Proof of Theorem 1

The sampling decision process {Psymt}Tt=0 leads to the same average sampling rate as {Pt}Tt=0. We conclude that {Pt}Tt=0 and {Psymt}Tt=0 lead to the same average sampling rate.

Proof of Corollary 1.1

We then show that {Psymt}Tt=0 achieves an MSE no greater than that achieved by {Pt}Tt=0. Combining (A.47) and (A.49), we conclude from the law of total expectation that {Psymt}Tt=0 achieves an MSE no greater than that achieved by {Pt}Tt=0.

Proof of Corollary 1.2

By randomizing the sampling decision, taking an extra sample with probability D and not taking it with probability 1 − D, we can account for the fractional part. Therefore, for any sampling policy whose average sampling frequency is strictly less than F, we can always construct a sampling policy that achieves the maximum sampling frequency F and leads to an MSE no worse than that achieved by the original sampling policy.
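The fractional-budget randomization is straightforward to sketch: take the integer part of the sample budget deterministically and one extra sample with probability equal to the fractional part, so the expected sample count equals the non-integer budget exactly. The names and the 7.3 budget below are illustrative assumptions, not values from the thesis.

```python
import math
import random

def sample_count(budget, rng):
    """Randomize the number of samples so its expectation equals a
    possibly non-integer budget (e.g. T*F): always take floor(budget)
    samples, plus one extra with probability equal to the fractional
    part."""
    base = math.floor(budget)
    frac = budget - base
    return base + (1 if rng.random() < frac else 0)

rng = random.Random(5)
budget = 7.3
counts = [sample_count(budget, rng) for _ in range(100000)]
avg = sum(counts) / len(counts)   # close to 7.3 on average
```

Since E[count] = floor(budget) + frac = budget, the randomized policy spends the whole sampling budget on average while each realization uses an integer number of samples.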

Proof of Corollary 1.3

Proof of Theorem 2

Using (A.58)–(A.59) and assumption (S.1), we prove that the sampling decision process achieving D∞(F) for time-homogeneous continuous Markov processes satisfying assumptions (P.1)–(P.3) is of the form (2.12). Since the goal is to minimize the MSE, it suffices to consider sampling decision processes leading to (A.63).

Optimal sampling policy for the OU process

Moreover, by assumption (S.1), the expected reward is finite. R1 is a monotonically increasing function, and property 2) implies E[O2τ …

Proof of Proposition 1

Proof of Theorem 4

Rewriting the lower bound in (A.82) in matrix form and minimizing it under the time constraint (2.43), we obtain the lower bound (A.86) on the DSTF. We continue by showing that the minimum on the right-hand side of (A.83) is achieved by the causal sampling policy in Theorem 4. Plugging the causal sampling policy in Theorem 4 into the DSTF objective function (A.81), we conclude that the DSTF is equal to its lower bound (A.83).

Proof of Theorem 5

Recovering L t from Z t

Decomposition of D DET op (R)

Proof of Theorem 6

Proof of Lemma 11

Proof of Lemma 12

We first show that the optimization problem on the right-hand side of (B.22) is a convex optimization problem satisfying Slater's condition, i.e., strong duality holds. We then solve the Lagrangian dual problem to obtain the optimal D∗1 (B.25c). Therefore, the minimization problem on the right-hand side of (B.22) is convex.

Proof of Lemma 13

Proof of Lemma 14

Furthermore, we observe that the uniform sampling intervals (B.15) that meet both the upper and the lower bounds of DDET(F, Rs) converge asymptotically to 1/F.

Proof of Lemma 15

We also prove that DN(F, Rs) is equi-coercive, so that (B.53b) follows via the fundamental theorem of Γ-convergence. Ignoring the two non-negative λ∗(F, Rs, N) terms in (B.10b), we observe that DN(F, Rs) is lower bounded by …

Converse proof of Theorem 8

A partition that satisfies (4.24)

Channel input distribution is equal to the capacity-achieving distribution

Converse proof of Theorem 9

To obtain the converse bound on the JSCC reliability function for a fully accessible source, we lower bound the expected decoding time E[ηk] using the error probability Pe and the source distribution PSk. The asymptotic behavior of the lower bound (C.11) relies on two properties of the DS in Lemma 18, stated below.

Proof of Lemma 16

Proof of Lemma 17

The error probability of decoding Sk is lower bounded by the error probability of the binary hypothesis test ([99, second paragraph below Prop. 2]), i.e., given any t ≥ 0, yt ∈ Yt, (C.21) holds. To lower bound the minimum on the right-hand side of (C.21), we show that the alphabet [q]k can always be divided into two groups G(Yτδ) and [q]k \ G(Yτδ) such that for all t ≥ 0, yt ∈ Yt, the priors of the hypotheses are lower bounded as follows.

Proof of Lemma 18

When A3 occurs, we move the sequences in [q]k that achieve the maximum in the event A3 to G(Yτδ), and move the remaining sequences to the second group.

Achievability proof of Theorem 9: A (fully accessible) DS

Lemma 20, stated next, will be used to investigate whether a block-encoding code achieves the JSCC reliability function for a fully accessible source. We show that the MaxEJS code and the SED code both satisfy (C…); we leave the prior in its generic form and obtain a modified version of [45, Theorem 1] as follows.

Achievability proof of Theorem 9: A DSS with f = ∞

… 1 + max{log qk, log(1/ϵ)} (C.32), then the expected decoding time of the block-encoding code is upper bounded. We invoke Lemma 20 with ϵ ← ϵk to upper bound E[ηk] on the right-hand side of (C.34) and obtain an achievability bound on the expected decoding time E[η′k] of the buffer-then-send code.

Achievability proof of Theorem 9: A DSS with f < ∞

Given a DSS, we extract the arrival times of distinct symbols from t1 ≤ t2 ≤ …, and denote the resulting sequence of distinct arrival times by (C.38). For example, if a DSS emits one source symbol every λ ≥ 1 channel uses (4.6), then the symbol arrival times coincide with the distinct arrival times, i.e., tn = dn, and Lemma 23 below holds trivially.
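The extraction of distinct arrival times is simple enough to make concrete. The function name below is mine; the example input is the streaming source from the figure list, t1 = 1, t2 = 2, t3 = 4, t4 = t5 = 6.

```python
def distinct_arrival_times(t):
    """Extract the strictly increasing sequence d_1 < d_2 < ... of
    distinct arrival times from the nondecreasing symbol arrival
    times t_1 <= t_2 <= ... (repeated times collapse to one entry)."""
    d = []
    for ti in t:
        if not d or ti > d[-1]:
            d.append(ti)
        # a repeated arrival time means several symbols arrived in the
        # same channel use; it contributes no new distinct time
    return d

# Streaming-source example: t = [1, 2, 4, 6, 6] -> d = [1, 2, 4, 6].
```

When one symbol arrives per λ channel uses, the input is already strictly increasing and the function returns it unchanged, i.e., tn = dn as stated above.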

Proof of Lemma 22

Proof of Lemma 24

To obtain the asymptotic behavior of H(Xt|Zt, Yt−1) in (C.50), we proceed to analyze the asymptotic behavior of θSN(t)(Yt−1).

Decoding before the final arrival time

Proof of Remark 2

The approximating instantaneous SED rule ensures (4.67)

Number of types for random arrivals

Converse proof of Theorem 11

Achievability proof of Theorem 11

Proof of Lemma 25

Proof of Lemma 26

Zero entropy rate of symbol arrival times

Proof of Proposition 4

Cardinality of common randomness

Zero-error code for degenerate DMCs

  • Causal compressing of streaming source symbols
  • Causal compressing of a streaming continuous-time process
  • System model of quantized event-triggered control
  • Communication over a channel with full feedback
  • Real-time feedback communication system with a streaming source
  • System model. Sampling times τi, i = 1, 2, … are determined by …
  • Symmetric threshold sampling of the Wiener process Wt. The black …
  • An n-sampler n-estimator system for successive refinement via causal …
  • System model for causal frequency-constrained sampling over a packet- …
  • System model. Sampling time τi and codeword Ui are chosen by the …
  • Decomposition of the encoder
  • MSE versus rate
  • Control system
  • System model for causal rate-constrained sampling over a BEC with …
  • Real-time feedback communication system with a streaming source
  • A fully accessible source: 1 = t1 = t2 = …
  • A streaming source: t1 = 1, t2 = 2, t3 = 4, t4 = t5 = 6, …

Figures

Figure 1.1: Causal frequency-constrained sampling.
Figure 1.2: Causal compressing of streaming source symbols.
Figure 1.3: Causal compressing of a streaming continuous-time process.
Figure 1.4: System model of quantized event-triggered control.
