3.2 FG-modules
3.2.2 Modular representation theory
Let G be a finite group and F a splitting field of characteristic a prime p > 0. A representation ρ : G → GL(V) over F is called modular if p divides the order of G, and the study of such representations is called modular representation theory.
Modular representation theory was developed by Brauer. The difference between ordinary and modular representation theory is highlighted by the following proposition.
Proposition 3.21. Every finitely generated module over the group algebra FG is semisimple if and only if the characteristic p of the field does not divide the order of the group.
Proof: See [1, Proposition 3.1].
It therefore follows that if p divides |G|, then an FG-module need not decompose into a direct sum of irreducible submodules. In this case we generally consider the decomposition of a finitely generated module into indecomposable summands. The closest analogue of Maschke's theorem in the modular case is the Krull–Schmidt theorem, which states that any representation of G has a decomposition into indecomposable direct summands that is unique up to isomorphism and order of the summands. This narrows the study of modular representation theory to the study of indecomposable modules.
Theorem 3.22 (Krull–Schmidt Theorem). Every finitely generated module M can be written as M = M1 ⊕ M2 ⊕ · · · ⊕ Mn, where the Mi are indecomposable. Furthermore, if M = M1 ⊕ M2 ⊕ · · · ⊕ Mn and also M = N1 ⊕ N2 ⊕ · · · ⊕ Nm, where the Ni are also indecomposable, then n = m and the Mi are isomorphic to the Nj, in some order.
Proof: See [100].
Modular representations are linked to ordinary representations via certain integer invariants, the decomposition numbers. We give a brief summary of some basic facts which have been established about modular representations, linking them to ordinary representations.
Definition 3.23. Let G be a group and p any prime. An element g ∈ G can be written uniquely as g = st = ts so that p does not divide the order of s and the order of t is a power of p. Then s is called a p-regular element and t a p-singular element, respectively.
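For instance (our own illustration, not taken from the text), take p = 2 and an element g of order 12 = 2² · 3:

```latex
% |g| = 12 = 2^2 \cdot 3,\quad p = 2.
% Since \gcd(4,12) = 4 and \gcd(9,12) = 3, the element g^4 has order 3
% (prime to p) and g^9 has order 4 = 2^2 (a power of p), so
s = g^{4}, \qquad t = g^{9}, \qquad st = ts = g^{13} = g,
% exhibiting s as the p-regular and t as the p-singular factor of g.
```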
(a) The number of irreducible F-representations of G is equal to the number of p-regular conjugacy classes of G.
(b) The composition factors of an F-representation of G, together with their multiplicities, are uniquely determined by its Brauer character.
Further, Brauer has shown in [12, Theorem 1] that an ordinary irreducible representation whose degree is divisible by the full power of p dividing |G| remains irreducible in the modular case. We also have that the cancellation law for modules applies in the modular case.
Theorem 3.24 (Brauer and Nesbitt). Let G be a group of order g = p^a g0, p a prime and (g0, p) = 1. An irreducible representation Zi of degree zi ≡ 0 (mod p^a) remains irreducible as a modular representation.
Proof: See [12, Theorem 1].
Corollary 3.25 (Cancellation law). If M, U and V are modules, and M ⊕ U ≅ M ⊕ V, then U ≅ V.
Another important way of studying a module is by looking at its composition series. This proves quite helpful, since most modules occurring naturally are not semisimple and hence cannot be written as a direct sum of irreducible modules.
Definition 3.26. A composition series for an FG-module V is a series of submodules of the form
V = V0 ⊃ V1 ⊃ · · · ⊃ Vt = 0

such that for each i ≥ 1 the factor Vi−1/Vi is irreducible. The integer t is called the length of the module V. If no such finite series exists, then V is said not to have a composition series.
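A minimal example (our own, for illustration): for p = 2 the regular module of the cyclic group of order 2 is indecomposable but not irreducible, and its composition series has two trivial factors.

```latex
% F = \mathbb{F}_2,\; G = \langle g \mid g^2 = 1 \rangle,\; V = FG:
V = FG \;\supset\; \langle 1+g \rangle \;\supset\; 0.
% Both factors V/\langle 1+g\rangle and \langle 1+g\rangle are the trivial
% module, so V has length t = 2, yet V is not a direct sum of irreducibles
% (here p = 2 divides |G|, so Proposition 3.21 does not apply).
```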
If π is a finite-dimensional FG-representation with composition series as above, the representations πi afforded by the factors Vi−1/Vi, 1 ≤ i ≤ t, are called the irreducible constituents of π, and an irreducible constituent πr has multiplicity j if Vr ≅ Vi−1/Vi for exactly j values of i.
Not every module has a composition series; for example, the ring Z, when considered as a module over itself, does not have one. However, when a composition series does exist, the Jordan–Hölder theorem states:
Theorem 3.27 (Jordan–Hölder theorem for FG-modules). If V is a finite-dimensional FG-module, then V possesses a composition series and the composition factors (up to order and equivalence) are independent of the choice of the composition series.
Proof: See [3].
As a consequence of the Jordan–Hölder theorem we observe that, given any two composition series, the composition factors Mi/Mi+1 of one series are simply a permutation of the composition factors of the other. This shows that a module can have many composition series, all with the same factors. Moreover, every simple FG-module appears as a composition factor of the regular module FG, since every simple module is a quotient of FG; hence there can only be finitely many simple FG-modules.
Any proper submodule of a finitely generated module is contained in a maximal submodule (see [100]). From this it is apparent that any finitely generated FG-module has a composition series. The following special submodules require mentioning, as we shall meet them in our computations.
Definition 3.28. Let M be an FG-module over a finite field F. We call the intersection of all maximal submodules of M the radical of M and denote it rad(M), i.e.,
rad(M) = ⋂ {U ≤ M | M/U is simple}.
The socle of M is the sum of all irreducible submodules of M and is denoted by soc(M). Moreover,

soc(M) = Σ {U ≤ M | U is simple}.
The socle series of M is the ascending chain 0 ⊆ soc1(M) ⊆ soc2(M) ⊆ · · · defined by soc1(M) = soc(M) and soci(M)/soci−1(M) = soc(M/soci−1(M)); its factors are thus semisimple.
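A sketch of rad and soc for the smallest modular example (our own illustration, not from the text):

```latex
% F = \mathbb{F}_2,\; G = C_2 = \langle g \rangle,\; M = FG:
\operatorname{rad}(M) = \operatorname{soc}(M) = \langle 1 + g \rangle.
% \langle 1+g\rangle is the unique proper non-zero submodule of M, so the
% socle series 0 \subset \operatorname{soc}(M) \subset M coincides with a
% composition series of M, both factors being the trivial module.
```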
Chapter 4
Links of codes and other combinatorial structures
In this chapter we focus on codes, designs and finite geometries. For a more detailed account and additional information the reader is advised to consult [4, 7, 24] and [89, 102].
4.1 Linear codes
A finite field of order q, where q is a power of a prime, will be denoted by Fq, and F*q will denote the non-zero elements of Fq. Denote the vector space of n-tuples of elements of Fq by V = Fq^n. Then the standard dot product of x and y in V is defined by x · y = x y^T, where y^T is the transpose of y. The subspace spanned over Fq by a subset {x1, x2, . . . , xn} of V will be denoted by ⟨x1, x2, . . . , xn⟩.
Definition 4.1. Let F be a set of q elements. A q-ary code C is a set of finite sequences of the elements of F, called codewords (words) . If all the codewords are sequences of the same length n, then C is called a block code of length n.
Definition 4.2. Let C be a q-ary code and x and y words in C. The Hamming distance between x and y, denoted by d(x, y), is the number of positions in which the words x and y differ. The minimum distance d of C is the smallest Hamming distance between any two distinct words in C, that is, d = min{d(x, y) | x, y ∈ C, x ≠ y}.
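As a quick sketch (Python; the helper names are ours, not from any standard library), both notions in Definition 4.2 can be computed directly:

```python
from itertools import combinations

def hamming_distance(x, y):
    """Number of positions in which the words x and y differ."""
    assert len(x) == len(y), "words must have the same length"
    return sum(1 for a, b in zip(x, y) if a != b)

def minimum_distance(code):
    """Smallest Hamming distance between any two distinct words of the code."""
    return min(hamming_distance(x, y) for x, y in combinations(code, 2))

# The binary repetition code of length 3 has exactly two words, at distance 3:
print(minimum_distance([(0, 0, 0), (1, 1, 1)]))  # 3
```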
The codes from designs that we will study are block codes. Constructing these codes over finite fields gives them additional structure: specifically, we consider codes over finite fields that are finite-dimensional vector spaces.
Definition 4.3. A linear code C of length n over the field Fq is a subspace of Fq^n. We write C = [n, k]q where dim(C) = k.
Every linear code of length n over Fq contains the zero vector 0 ∈ Fq^n, whose entries are all the zero element of the field. If x and y are in C, then x − y is in C and d(x, y) = d(0, x − y). This implies that for a linear code, the minimum distance d of the code is the smallest number of non-zero entries among the non-zero codewords of the code.
Definition 4.4. If C is a linear code of length n over the field Fq then the weight of a word x in C is defined to be wt(x) = d(0, x).
It then follows that the minimum distance of a linear code C is the minimum weight of the code. When the minimum weight d of a linear code C = [n, k]q is known, we write C = [n, k, d]q. For a linear code C = [n, k, d]q, we have the Singleton bound d ≤ n − k + 1 (see [4]).
Let C be a linear [n, k, d]q code. We let Ai(c) denote the number of codewords at distance i from a codeword c ∈ C. The numbers Ai(c), where 0 ≤ i ≤ n, are called the weight distribution of C with respect to c. Obviously A0(c) = 1, Ai(c) ≥ 0, and Σ_i Ai(c) = q^k. For linear codes (and some non-linear codes) Ai(c) is independent of c and will be denoted by Ai.
Definition 4.5. Let C be a linear code. Then the weight enumerator of C is the polynomial WC(x, y) = Σ_{c∈C} x^{n−wt(c)} y^{wt(c)} = Σ_{i=0}^{n} Ai x^{n−i} y^i.
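A minimal sketch in Python (our own helper names) of the weight distribution of a linear code, obtained by enumerating all codewords uG from a generator matrix:

```python
from itertools import product

def codewords(G, q=2):
    """Yield every codeword uG (mod q), u ranging over Fq^k; G is a list of rows."""
    k, n = len(G), len(G[0])
    for u in product(range(q), repeat=k):
        yield tuple(sum(u[i] * G[i][j] for i in range(k)) % q for j in range(n))

def weight_distribution(G, q=2):
    """Return [A_0, A_1, ..., A_n], where A_i = number of codewords of weight i."""
    n = len(G[0])
    A = [0] * (n + 1)
    for c in codewords(G, q):
        A[sum(1 for x in c if x != 0)] += 1
    return A

# A generator matrix (in standard form) of the [7, 4, 3] binary Hamming code:
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
print(weight_distribution(G))  # [1, 0, 0, 7, 7, 0, 0, 1]
```

Reading off the coefficients Ai, the weight enumerator of this code is WC(x, y) = x^7 + 7x^4 y^3 + 7x^3 y^4 + y^7.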
Remark 4.6. The weight distribution classifies codewords according to the number of non-zero coordinates. More detailed information is supplied by the complete weight enumerator, which gives the number of codewords of each composition.
Definition 4.7. Let C be an [n, k]q code. A generator matrix for C, denoted by G, is a k × n matrix whose rows are any k linearly independent vectors of C.
Definition 4.8. Let C be an [n, k]q code. The dual code or orthogonal code of C, denoted by C⊥, is the orthogonal space under the standard inner product, that is, C⊥ = {v ∈ Fq^n | (v, c) = 0 for all c ∈ C}.
From elementary linear algebra we have that dim(C) + dim(C⊥) = n, since C⊥ is simply the null space of a generator matrix for C. Taking G to be the generator matrix for C = [n, k]q, a generator matrix H for C⊥ is an (n − k) × n matrix that satisfies GH^T = 0; that is, c ∈ C if and only if cH^T = 0 ∈ Fq^(n−k). For any vector y in Fq^n, the vector yH^T is called the syndrome of y, denoted Syn(y). If x and y are in Fq^n, then Syn(x) = Syn(y) if and only if x and y are in the same coset of C. We will see in Section 4.6 that syndromes can be used to decode a received message more efficiently.
Definition 4.9. Any generator matrix H for C⊥ is called a parity-check or check matrix for C. If G is written in the standard form [Ik | A], then H = [−A^T | In−k] is a check matrix for the code with generator matrix G.
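The recipe H = [−A^T | In−k] and the syndrome map can be sketched as follows (Python; the function names and the small [4, 2] code are our own hypothetical choices):

```python
def check_matrix(G, q=2):
    """Given G = [I_k | A] in standard form, return H = [-A^T | I_{n-k}] (mod q)."""
    k, n = len(G), len(G[0])
    A = [row[k:] for row in G]  # the k x (n - k) block A
    return [[(-A[i][r]) % q for i in range(k)]
            + [1 if s == r else 0 for s in range(n - k)]
            for r in range(n - k)]

def syndrome(y, H, q=2):
    """Syn(y) = yH^T (mod q); it is zero exactly when y lies in the code."""
    return tuple(sum(y[j] * row[j] for j in range(len(y))) % q for row in H)

# A hypothetical [4, 2] binary code with G in standard form:
G = [[1, 0, 1, 1],
     [0, 1, 0, 1]]
H = check_matrix(G)
print(H)                          # [[1, 0, 1, 0], [1, 1, 0, 1]]
print(syndrome((1, 0, 1, 1), H))  # (0, 0): a codeword
print(syndrome((1, 0, 0, 0), H))  # (1, 1): not a codeword
```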
The first k coordinates are called the information symbols and the last n − k coordinates are the check symbols.
We can use the generator matrix of a linear code to encode a message. Suppose that we have a set of data consisting of q^k messages that are to be transmitted. We encode the messages using a code C with a generator matrix G. To do this we identify the data with the vectors in Fq^k. Then for u ∈ Fq^k, we encode u by forming the vector uG. If u = (u1, u2, . . . , uk) and G has rows R1, R2, . . . , Rk, where each Ri is in Fq^n, then u is encoded as

uG = Σ_i ui Ri.
When G is in standard form, the encoding takes the simpler form u ↦ (u1, u2, . . . , uk, xk+1, . . . , xn); here u1, u2, . . . , uk are the message or information symbols, and the last n − k entries are the check symbols and represent the redundancy.
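The encoding map u ↦ uG can be sketched as follows (Python; the helper name and the [5, 3] example code are our own hypothetical choices):

```python
def encode(u, G, q=2):
    """Encode the message u as the codeword uG (mod q)."""
    k, n = len(G), len(G[0])
    return tuple(sum(u[i] * G[i][j] for i in range(k)) % q
                 for j in range(n))

# A hypothetical [5, 3] binary code with G in standard form [I_3 | A]:
G = [[1, 0, 0, 1, 1],
     [0, 1, 0, 1, 0],
     [0, 0, 1, 0, 1]]

# With G in standard form, the first k = 3 symbols of uG are the message
# itself and the last n - k = 2 are the check symbols.
print(encode((1, 0, 1), G))  # (1, 0, 1, 1, 0)
```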
In general it is not easy to say anything about the minimum weight of C⊥ knowing only the minimum weight of C; but, of course, either a generator matrix or a check matrix gives complete information about both C and C⊥. In particular, a check matrix for C can be used to determine the minimum weight of C, as is shown in Theorem 4.10:
Theorem 4.10. Let H be a check matrix for an [n, k, d] code C. Then every choice of d − 1 or fewer columns of H forms a linearly independent set. Moreover, if every d − 1 or fewer columns of a check matrix for a code C are linearly independent, then the code has minimum weight at least d.
Proof: See [4, Theorem 2.3.1].
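Theorem 4.10 can be checked mechanically on a small example. The sketch below (Python; gf2_rank is our own helper, not a library function) verifies it for a check matrix of the [7, 4, 3] binary Hamming code, whose columns are the seven non-zero vectors of F2^3:

```python
from itertools import combinations

def gf2_rank(vectors):
    """Rank over F2 via elimination, treating each vector as a bit pattern."""
    rows = [int("".join(map(str, v)), 2) for v in vectors]
    rank = 0
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        top = pivot.bit_length() - 1
        rows = [r ^ pivot if (r >> top) & 1 else r for r in rows]
    return rank

# A check matrix of the [7, 4, 3] binary Hamming code:
H = [[0, 1, 1, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]
cols = list(zip(*H))

# Every d - 1 = 2 columns are linearly independent ...
assert all(gf2_rank(list(pair)) == 2 for pair in combinations(cols, 2))
# ... while some set of 3 columns is dependent, so the minimum weight is d = 3.
assert any(gf2_rank(list(tri)) < 3 for tri in combinations(cols, 3))
```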
A constant vector is one for which all the coordinate entries are either 0 or 1. If C⊥ contains the all-one vector 1 ∈ Fq^n, whose entries are all 1 ∈ Fq, then every vector in the q-ary code C of weight congruent to 0 modulo q is also in C⊥. A code C is self-complementary if it contains the all-one vector, self-orthogonal if C ⊆ C⊥, and self-dual if C = C⊥. The hull of a design's code over some field is the intersection C ∩ C⊥. A binary code is doubly-even if all its codewords have weight divisible by 4.