Yavas participated in the conception of the project, solved the problem, and wrote the manuscript.
[8] ——, "Gaussian Multiple and Random Access in the Finite Blocklength Regime," in 2020 IEEE International Symposium on Information Theory (ISIT).
Yavas, "Random Access Channel Coding in the Finite Blocklength Regime," in 2018 IEEE International Symposium on Information Theory (ISIT).
INTRODUCTION
For the BSC and most practically relevant pairs (n, ϵ), including n ∈ [100, 500], an approximation that keeps terms up to the channel skewness is the most accurate among the most recent CLT and LD approximations in the literature. In feedback channel coding, the channel input at time i for message m, Xi(m), is a function of the message m and the received symbols up to time i − 1, Y^{i−1}. In VLSF codes for the RAC, the decoder can decode at one of the available times n_{k,1}, n_{k,2}, …
A grid random variable is a random variable that takes values in {a + kd : k ∈ Z}, where d ∈ R+ is the span of the grid.
REFINED CLT, MD, AND LD THEOREMS IN PROBABILITY THEORY
In the CLT regime, where Fn(−x) ∈ (0, 1) equals a value independent of n, that expansion is known as the Cornish–Fisher theorem [29], which inverts the Edgeworth expansion (Theorem 2.3.2). In the next section, we will see that the exponent on the right-hand side of (2.38) is tight in many cases, including for all discrete random variables. Condition (S) of Theorem 2.5.2 is a smoothness assumption on the cgf κn, which generalizes Cramér's condition found in the large deviations theorem for sums of i.i.d. random variables.
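As a rough numerical illustration (not taken from the thesis), the sketch below compares the plain Gaussian quantile with the first-order Cornish–Fisher correction, which inverts the one-term Edgeworth expansion, for the standardized sum of skewed i.i.d. summands; the Exp(1) summands, the quantile level, and n are my own choices.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Skewed i.i.d. summands: Exp(1), so mean 1, standard deviation 1, skewness 2.
n = 200          # number of summands
p = 0.95         # quantile level
mu, sigma, gamma = 1.0, 1.0, 2.0

# Monte Carlo quantile of the standardized sum S_n.
samples = rng.exponential(1.0, size=(50_000, n))
S = (samples.sum(axis=1) - n * mu) / (sigma * np.sqrt(n))
q_mc = np.quantile(S, p)

# CLT (Gaussian) quantile and the first-order Cornish–Fisher correction,
# which inverts the one-term Edgeworth expansion of the CDF of S_n.
z = norm.ppf(p)
q_clt = z
q_cf = z + gamma / (6 * np.sqrt(n)) * (z**2 - 1)

print(f"Monte Carlo quantile : {q_mc:.4f}")
print(f"CLT (Gaussian)       : {q_clt:.4f}")
print(f"Cornish-Fisher       : {q_cf:.4f}")
```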
The latter follows from the boundedness condition in (2.45) and implies that the probability of interest is in the LD regime.
MODERATE DEVIATIONS ANALYSIS OF CHANNEL CODING
The gradient and Hessian of the mutual information I(PX, PY|X) with respect to PX are given in [35]. The next convex optimization problem arises when optimizing the input distribution to attain a lower bound on the skewness S. Although the bounds in Theorem 3.3.4 hold only in the CLT regime and not in the MD regime, they yield the skewness of the Gaussian channel.
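As a sanity-check sketch (my own illustration; the thesis's exact expressions are in [35]), a standard computation gives the x-th component of the gradient of I(PX, PY|X) with respect to PX, when I is extended to unnormalized arguments, as D(PY|X=x ‖ PY) − 1 in nats. The snippet below verifies this against finite differences on an assumed binary asymmetric channel.

```python
import numpy as np

def mutual_information(p_x, W):
    """I(P_X, P_{Y|X}) in nats; W[x, y] = P(Y = y | X = x)."""
    q_y = p_x @ W                                   # output distribution P_Y
    ratio = np.where(W > 0, W / q_y, 1.0)
    return np.sum(p_x[:, None] * W * np.log(ratio))

def mi_gradient(p_x, W):
    """Gradient w.r.t. P_X: D(P_{Y|X=x} || P_Y) - 1 for each x (nats)."""
    q_y = p_x @ W
    ratio = np.where(W > 0, W / q_y, 1.0)
    return np.sum(W * np.log(ratio), axis=1) - 1.0

# Assumed example: binary asymmetric channel with a non-uniform input.
W = np.array([[0.9, 0.1],
              [0.3, 0.7]])
p_x = np.array([0.6, 0.4])

grad = mi_gradient(p_x, W)

# Finite-difference check (perturb one coordinate, no renormalization).
eps = 1e-6
fd = np.array([
    (mutual_information(p_x + eps * np.eye(2)[i], W) - mutual_information(p_x, W)) / eps
    for i in range(2)
])
print("analytic gradient :", grad)
print("finite difference :", fd)
```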
In most of the chapter, with the exception of the Gaussian channel expansion, we derive tight bounds for the non-Gaussianity (3.4) in the MD regime.
VARIABLE-LENGTH SPARSE FEEDBACK CODES FOR PPCS
[2] shows that variable-length coding improves the first-order term in the asymptotic expansion of the maximum achievable message set size from NC to NC/(1 − ϵ), where C is the capacity of the DM-PPC, N is the average decoding time, and ϵ is the average error probability. As in this chapter, VLSF codes have also been studied under a finite constraint L on the number of decoding times. Here, log(L)(·) denotes the L-fold nested logarithm, N is the average decoding time, and C and V are the capacity and dispersion of the DM-PPC, respectively.
Analyzing the new bound, we prove a converse for VLSF codes with infinitely many uniformly spaced decoding times. Our main result is a second-order achievability bound for VLSF codes with L = O(1) decoding times on the DM-PPC. The error probability of the resulting code is bounded by ϵ, and the average decoding time is …
Proof Sketch: Theorem 4.3.2 analyzes the error probability and average decoding time of the suboptimal SHT-based decoder used at times n2, … The following theorem gives achievability and converse bounds for VLSF codes with uniformly spaced decoding times {0, dN, 2dN, …}. Since the information density ı(X;Y) of the Gaussian channel is unbounded, the expected value of the decoding time in the proof of [71, Th. …]
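To make the stop-feedback structure concrete, here is a minimal simulation sketch (my own illustration, not the code construction or thresholds of Theorems 4.3.1–4.3.2): a random codebook over a BSC, an information-density accumulator, and a decoder that may stop only at L = 4 pre-specified decoding times. The crossover probability, decoding times, and threshold below are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# BSC crossover probability and per-symbol information densities (bits),
# under the uniform input distribution, so that P_Y(y) = 1/2.
p = 0.11
i_match, i_miss = np.log2(2 * (1 - p)), np.log2(2 * p)

M = 4                               # number of messages (assumption)
decode_times = [40, 80, 120, 160]   # L = 4 available decoding times (assumption)
gamma = np.log2(M) + 6              # illustrative stopping threshold (assumption)

def run_trial():
    n_max = decode_times[-1]
    codebook = rng.integers(0, 2, size=(M, n_max))   # fresh random codebook
    m = rng.integers(M)                              # transmitted message
    y = codebook[m] ^ (rng.random(n_max) < p)        # BSC output
    # Cumulative information density of every codeword against the output.
    cum = np.cumsum(np.where(codebook == y, i_match, i_miss), axis=1)
    for n in decode_times:
        scores = cum[:, n - 1]
        # Stop if some codeword crosses the threshold, or at the last time.
        if scores.max() >= gamma or n == decode_times[-1]:
            return int(np.argmax(scores)) == m, n

results = [run_trial() for _ in range(2000)]
ok = np.array([c for c, _ in results])
tau = np.array([n for _, n in results])
print(f"estimated error probability    : {1 - ok.mean():.4f}")
print(f"estimated average decoding time: {tau.mean():.1f}")
```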
For DM-PPCs, there is a gap between the second-order term of our achievability bound for VLSF codes with L = O(1) decoding times (Theorem 4.3.1) and the second-order term of the best-known converse bound (4.10); the latter holds for L = ∞, and thus for any L. However, since the thresholds of the optimal SHT with L decoding times have no closed-form expression [83, pp. …]
VARIABLE-LENGTH SPARSE FEEDBACK CODES FOR MACS
A VLSF code for the MAC with K transmitters is defined similarly to the VLSF code for the PPC. Our main results are second-order achievability bounds for rates approaching a point on the sum-rate boundary of the MAC achievable region enlarged by a factor of 1/(1 − ϵ). The term (5.12) bounds the probability that the information density corresponding to the true messages stays below the threshold at all decoding times; (5.13) bounds the probability that all messages are decoded incorrectly; and the remaining terms bound the probability that the messages from the transmitter index set A ⊆ [K] are decoded incorrectly while the messages from the index set Ac are decoded correctly.
Like the single-threshold rule from [85] for the RAC, the single-threshold rule used in the proof of Theorem 5.3.2 differs from the decoding rules used in [72] for VLSF codes over Gaussian MACs with average power constraints and in [75] for the DM-MAC. In both [72] and [75], L = ∞, and the decoder uses 2^K − 1 simultaneous threshold rules, one for each of the boundaries that define the achievable region of the MAC with K transmitters. This is because if we truncate a code of infinite length at time n = 2N, then by the Chernoff bound the penalty term added to the error probability decays exponentially with N, and its effect in (5.18) is o(1).
The authors of [75] numerically evaluate their non-asymptotic achievability bound for the DM-MAC, while Truong and Tan [72] prove an achievability bound whose second-order term is −O(√N) for the Gaussian MAC with average power constraints. Applying our single-threshold rule and analysis to the Gaussian MAC with average power constraints improves the second-order term in [72] from −O(√N) to −ln N + O(1) for all non-corner points in the achievable region. The main challenge in [72] is to derive a tight bound on the expected value of the maximum over A ⊆ [K] of the stopping times τ(A) for the corresponding threshold rules in (5.17).
Under the same model and assumptions on L, to reach non-corner rate points that do not lie on the sum-rate boundary, corresponding to a(A) = 0 in (5.11) for one or more A ∈ P([K]), we change our single-threshold rule to (5.17), where A is the transmitter index set corresponding to the capacity region's active sum-rate bound at the (non-corner) point of interest. Steps similar to the proof of (5.18) then give the second-order term −ln N + O(1) for those points as well.
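For intuition about the two stopping structures discussed above, the sketch below simulates a two-user Gaussian MAC with i.i.d. N(0, P) codebook symbols and compares (i) a single threshold on the sum-rate information density with (ii) a rule that waits until all 2^K − 1 information densities cross thresholds. The power, decoding times, and threshold values are assumptions for illustration only, not the thresholds of Theorem 5.3.2 or of [72], [75].

```python
import numpy as np

rng = np.random.default_rng(2)

def gauss_logpdf(y, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (y - mean) ** 2 / var)

P = 1.0                                   # per-user power (assumption)
n_max = 2000
check_times = range(10, n_max + 1, 10)    # sparse decoding times (assumption)
g_sum, g_user = 60.0, 35.0                # illustrative thresholds (nats)

def stopping_times():
    x1 = rng.normal(0, np.sqrt(P), n_max)
    x2 = rng.normal(0, np.sqrt(P), n_max)
    y = x1 + x2 + rng.normal(0, 1, n_max)
    # Cumulative information densities for the three MAC rate constraints.
    c12 = np.cumsum(gauss_logpdf(y, x1 + x2, 1) - gauss_logpdf(y, 0, 1 + 2 * P))
    c1 = np.cumsum(gauss_logpdf(y, x1 + x2, 1) - gauss_logpdf(y, x2, 1 + P))
    c2 = np.cumsum(gauss_logpdf(y, x1 + x2, 1) - gauss_logpdf(y, x1, 1 + P))
    tau_single = next((n for n in check_times if c12[n - 1] >= g_sum), n_max)
    tau_simul = next((n for n in check_times
                      if c12[n - 1] >= g_sum and c1[n - 1] >= g_user
                      and c2[n - 1] >= g_user), n_max)
    return tau_single, tau_simul

taus = np.array([stopping_times() for _ in range(300)])
print("avg stop time, single sum-rate threshold      :", taus[:, 0].mean())
print("avg stop time, 2^K - 1 simultaneous thresholds:", taus[:, 1].mean())
```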
RAC CODES THAT EMPLOY STOP-FEEDBACK
The receiver decodes only a subset of the transmitted messages, depending on the current level of activity in the channel. We assume that the effect of each channel input on the channel output does not depend on the transmitter from which it comes; that is, the channel is invariant to permutations of its inputs. The first lemma shows that the channel quality seen by each active transmitter deteriorates as the number of active transmitters grows (although the sum of the capacities may increase).
The bound in (6.22) implies that the only component of V2 used in the second-order characterization of the region (6.21) is V2*. In the case of the Gaussian RAC defined in (6.14), the maximum power constraint P on the codewords requires that … Theorem 6.3.1 follows from Theorem 6.3.2, stated next, which bounds the error probability of the RAC code defined in Section 6.4.
A description of the proposed RAC code and the proofs of Theorems 6.3.1 and 6.3.2 appear in Section 6.4. In the following discussion, we bound the error probability of the code (f, {g_k}_{k=0}^{K}) defined above. The choice of the threshold γk in (6.63) follows the approach established for the point-to-point channel in [5].
Note that for the dominant points of R(M, ϵ) corresponding to different values of PX*, there is an O(1) difference between the left and right sides of the inequality in (6.19). The error exponents of the Hoeffding test coincide with those of the optimal (Neyman–Pearson) binary hypothesis test. The decoder applies a (K+1)-ary hypothesis test to the output sequence Y^{n0} and determines an estimate k̂ of the number of active transmitters k ∈ {0, 1, …, K}.
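As a toy version of this activity-detection step (my own illustration; the Binomial output laws and parameters below are assumptions, and the actual test in Chapter 6 also involves thresholds tied to the error-probability targets), the sketch below uses a Hoeffding-style rule that picks the hypothesis k whose output law is closest in KL divergence to the empirical distribution of the observed block.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(3)

K, q, n0 = 4, 0.3, 200             # hypothetical RAC parameters
support = np.arange(K + 1)
# Assumed output law P_Y^(k) when k transmitters are active:
# Y ~ Binomial(k, q), a stand-in for a permutation-invariant RAC output.
P = np.array([binom.pmf(support, k, q) for k in range(K + 1)])

def estimate_k(y):
    """Pick the k whose output law is closest in KL divergence to the
    empirical distribution of the observed block (Hoeffding-style rule)."""
    emp = np.bincount(y, minlength=K + 1) / len(y)
    mask = emp > 0
    kl = [np.sum(emp[mask] * np.log(emp[mask] / np.clip(P[k, mask], 1e-300, None)))
          for k in range(K + 1)]
    return int(np.argmin(kl))

trials, correct = 2000, 0
for _ in range(trials):
    k_true = int(rng.integers(0, K + 1))
    y = rng.binomial(k_true, q, size=n0)
    correct += estimate_k(y) == k_true
print(f"activity-detection accuracy over {trials} trials: {correct / trials:.3f}")
```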
We also derive the optimal third-order asymptotics of the minimax type-II error (Theorem 6.5.3).
GAUSSIAN MULTIPLE AND RANDOM ACCESS CHANNELS
Proofs of the achievability bounds for the two-transmitter Gaussian MAC, the K-transmitter Gaussian MAC, and the Gaussian RAC are presented in Sections 7.4–7.7. When Z ∼ N(0, V) and V is a d×d positive semidefinite matrix, the multidimensional analogue of Q−1(·) is the inverse complementary Gaussian cumulative distribution region Q−1(V, ϵ) = {z ∈ R^d : P[Z ≤ z] ≥ 1 − ϵ}. Y[nkt] is denoted by Yknt in Chapter 6; here we switch to Y[nkt] for notational consistency with the rest of the chapter.
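A minimal numerical sketch of this set-valued inverse (under the standard definition assumed above, with an arbitrary 2×2 covariance): a point z belongs to Q−1(V, ϵ) when P[Z ≤ z] ≥ 1 − ϵ, which can be checked with the multivariate Gaussian CDF.

```python
import numpy as np
from scipy.stats import multivariate_normal

V = np.array([[1.0, 0.5],
              [0.5, 2.0]])        # example positive definite covariance
eps = 0.05
Z = multivariate_normal(mean=np.zeros(2), cov=V)

def in_Qinv(z):
    # z is in Qinv(V, eps) iff P[Z <= z] (componentwise) >= 1 - eps.
    return Z.cdf(z) >= 1 - eps

print(in_Qinv(np.array([4.0, 5.0])))   # deep in the upper-right corner: True
print(in_Qinv(np.array([0.0, 0.0])))   # P[Z <= 0] is well below 0.95: False
```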
To prove Theorem 7.3.1, we specify the distribution of the random codewords, PX, in Theorem 7.3.2 as follows. The decoding function used at time nk combines the maximum-likelihood decoding rule for the k-transmitter MAC with a typicality rule based on the output power. The support of the distribution from which the codewords for the Gaussian MAC and RAC are drawn is shown in Fig. …
We show that the first three terms of the expansion in (7.31) are still achievable in this setting. We treat all ties in (7.51) as errors because the probability that two codewords yield exactly the same information density is negligible due to the continuity of the noise. For the rest of the proof, Z ∼ N(0, In) denotes the Gaussian noise, which is independent of the channel inputs X1 and X2.
Therefore, H is distributed according to the marginal distribution of the first coordinate of a random vector uniformly distributed over the sphere Sn(√n). Codeword: at least one of the k codewords associated with the transmitted message m[k] violates the power constraint in (7.28) in the first nk symbols. Note that compared to the achievability proof for the Gaussian MAC in (7.11), the multiplicative constant in (7.184) is M−ks.
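A quick numerical sketch of this fact (my own illustration; n and the power level are arbitrary choices, and the radius √(nP) generalizes the unit-power sphere Sn(√n) used above): draw vectors uniformly on the sphere by normalizing i.i.d. Gaussians and inspect the first coordinate H.

```python
import numpy as np

rng = np.random.default_rng(4)

n, P, trials = 100, 2.0, 50_000

# Uniform points on the sphere S_n(sqrt(nP)): normalize i.i.d. Gaussians.
g = rng.normal(size=(trials, n))
x = np.sqrt(n * P) * g / np.linalg.norm(g, axis=1, keepdims=True)

h = x[:, 0]                                   # first coordinate ("H")
print("E[H]    ~", round(h.mean(), 4))        # ~ 0 by symmetry
print("E[H^2]  ~", round((h**2).mean(), 4))   # ~ P, since E[H^2] = nP/n
print("per-symbol power:", round((np.linalg.norm(x, axis=1)**2 / n).max(), 6))  # exactly P
```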
Using the data-processing inequality for the total variation distance and the independence of H(t), t ∈ [k], we obtain …
CONCLUSION, FUTURE WORK, AND OPEN PROBLEMS