
Techniques of data analysis

The collected data were analyzed following biometrical techniques based on the mathematical models of Fisher et al. (1932)1, Hayman (1958), Dewey and Lu (1959)2 and Allard (1960)3. The techniques used are described under the following subheads:

Mean: Data on individuals were added together and then divided by the total number of observations, and the mean was obtained as follows:

X̄ = (X1 + X2 + … + Xn) / n = ΣXi / n

Where, Xi = the i-th observation; n = number of observations.

Standard deviation (SD): The standard deviation is a measure of how spread out the observations are. Its symbol is σ (the Greek letter sigma). It is the square root of the variance, so

SD = √Variance = √[ Σ(Xi − X̄)² / (n − 1) ]

Standard error (SE): Standard error (SE) was calculated according to following formula

SE = SD / √n

Where, SE = standard error; SD = standard deviation; n = number of observations.
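The three statistics above can be computed with a short Python sketch; the observation values below are hypothetical and stand in for data on a single character:

```python
import math

# Hypothetical observations for a single character
observations = [12.4, 13.1, 11.8, 12.9, 13.5, 12.2]
n = len(observations)

# Mean: sum of the observations divided by their number
mean = sum(observations) / n

# Sample variance (n - 1 divisor) and its square root, the SD
variance = sum((x - mean) ** 2 for x in observations) / (n - 1)
sd = math.sqrt(variance)

# Standard error: SD divided by the square root of n
se = sd / math.sqrt(n)

print(round(mean, 3), round(sd, 3), round(se, 3))
```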

1 Fisher, R.A., Immer, F.R. and Tedin, O. The genetical interpretation of statistics of the third degree in the study of quantitative inheritance, Genetics (1932), 17: 107-124.

2 Dewey, D.R. and Lu, K.H. A correlation and path-coefficient analysis of components of crested wheatgrass seed production, Agronomy Journal (1959), 51: 515-518.

3 Allard, R.W. Principles of plant breeding. John Wiley &amp; Sons Inc., New York (1960), pp. 485.


Least significant difference (LSD): The least significant difference test was carried out according to the following formula, and the DMRT test was carried out according to Duncan (1955)4. LSD values at the 5% level were calculated where the variance ratio for the treatment effect was significant.

LSD = t0.05 × √(2 × EMS / r)

Where, t0.05 = tabulated t value at the error degrees of freedom; EMS = error mean square; r = number of replications.
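A minimal sketch of the LSD computation, LSD = t0.05 × √(2 × EMS / r); the EMS and replication values are hypothetical, and the t value is the standard two-tailed table value at 5% for 12 error degrees of freedom:

```python
import math

# Hypothetical values from an ANOVA
ems = 0.42     # error mean square (EMS)
r = 4          # number of replications
t_005 = 2.179  # tabulated two-tailed t at 5% for 12 error df

# Treatment means differing by more than the LSD are declared
# significantly different at the 5% level
lsd = t_005 * math.sqrt(2 * ems / r)
print(round(lsd, 3))
```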

Analysis of variance: Variance is a measure of the dispersion of a population, so for testing the significant differences among the populations the analysis of variance is necessary.

Variance analysis for each character was carried out separately on the mean values of the different groups.

Table 6: Expectation mean square (EMS) test used in the analysis of variance for two-way classification data with unequal number of observations per cell.

| Source of Variation | Sum of Squares | Degrees of Freedom | Mean Sum of Square (MS) | Expectation of MS |
|---|---|---|---|---|
| Factor one (A) | SSA | I − 1 | MSA | |
| Factor two (B) | SSB | J − 1 | MSB | |
| Interaction (AB) | SSAB | (I − 1)(J − 1) | MSAB | |
| Error | SSE | IJ(K − 1) | MSE | |
| Total | SStot | IJK − 1 | | |

Where, MSA represents the mean square of factor one (A), MSB the mean square of factor two (B), MSAB the mean square of the interaction (AB), and MSE the error mean square.
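The sums of squares in the table above can be sketched in Python for the balanced case (K observations per cell); the data values are hypothetical:

```python
# Two-way ANOVA sketch for balanced data: I levels of factor A,
# J levels of factor B, K replicate observations per cell.
data = {  # data[(a, b)] -> K replicate observations (hypothetical)
    (0, 0): [10.0, 11.0], (0, 1): [12.0, 13.0],
    (1, 0): [14.0, 13.0], (1, 1): [15.0, 16.0],
}
I, J, K = 2, 2, 2
grand = [x for cell in data.values() for x in cell]
gmean = sum(grand) / (I * J * K)

# Marginal means for each level of A and B, and cell means
a_mean = [sum(sum(data[(a, b)]) for b in range(J)) / (J * K) for a in range(I)]
b_mean = [sum(sum(data[(a, b)]) for a in range(I)) / (I * K) for b in range(J)]
cell_mean = {k: sum(v) / K for k, v in data.items()}

# Sums of squares, matching the rows of Table 6
ssa = J * K * sum((m - gmean) ** 2 for m in a_mean)
ssb = I * K * sum((m - gmean) ** 2 for m in b_mean)
ssab = K * sum((cell_mean[(a, b)] - a_mean[a] - b_mean[b] + gmean) ** 2
               for a in range(I) for b in range(J))
sse = sum((x - cell_mean[k]) ** 2 for k, v in data.items() for x in v)
sstot = sum((x - gmean) ** 2 for x in grand)

# Mean squares: sums of squares divided by their degrees of freedom
msa, msb = ssa / (I - 1), ssb / (J - 1)
msab = ssab / ((I - 1) * (J - 1))
mse = sse / (I * J * (K - 1))
print(ssa, ssb, ssab, sse, sstot)
```

Note that SSA + SSB + SSAB + SSE equals SStot, which is a useful check on the arithmetic.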

4 Duncan, D.B. Multiple range and multiple F tests, Biometrics (1955), 11: 1-42.


Test of significance: Analysis of variance (ANOVA) is a statistical technique to analyze variation in a response variable (continuous random variable) measured under conditions defined by discrete factors (classification variables, often with nominal levels). Frequently, we use ANOVA to test equality among several means by comparing variance among groups relative to variance within groups (random error).

Assumptions of two-way ANOVA

The populations from which the samples were obtained must be normally or approximately normally distributed.

The samples must be independent.

The variances of the populations must be equal.

The groups must have the same sample size.

Objectives of ANOVA

1. It identifies the causes of variation and sorts out the corresponding components of variation with their associated degrees of freedom.

2. It provides test of significance based on F-distribution.

Interpretation: ANOVA is a particular form of statistical hypothesis testing heavily used in the analysis of experimental data. A statistical hypothesis test is a method of making decisions using data. A test result (calculated from the null hypothesis and the sample) is called statistically significant if it is deemed unlikely to have occurred, assuming the truth of the null hypothesis. A statistically significant result (when a probability (p-value) is less than a threshold (significance level)) justifies the rejection of the null hypothesis. Analysis of variance provides the basis for test of significance.

The significance of differences among the populations was worked out by the F test (variance ratio) as follows:

F = MS (Between Groups) / MS (Within Groups)

Where, MS = mean square.

Chi-square (χ²) test
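The variance-ratio test above can be sketched in a few lines; the mean squares are hypothetical, and the critical value is the standard table value F(0.05; 3, 16) = 3.24:

```python
# F ratio from the mean squares of a (hypothetical) ANOVA
ms_between = 6.30  # MS (Between Groups)
ms_within = 1.05   # MS (Within Groups), i.e. the error mean square

f_ratio = ms_between / ms_within

# Compare with the tabulated F at the corresponding df pair;
# F(0.05; 3, 16) = 3.24 is the standard table value
f_critical = 3.24
significant = f_ratio > f_critical
print(round(f_ratio, 2), significant)
```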

Using the sample data, find the degrees of freedom, the expected frequencies, the test statistic, and the P-value associated with the test statistic.

Degrees of freedom. The degrees of freedom (df) are equal to:

df = (r - 1) * (c - 1)


Where r is the number of levels for one categorical variable, and c is the number of levels for the other categorical variable.

Expected frequencies. The expected frequency counts are computed separately for each level of one categorical variable at each level of the other categorical variable.

Compute r * c expected frequencies, according to the following formula.

Er,c = (nr × nc) / n

where Er,c is the expected frequency count for level r of Variable A and level c of Variable B, nr is the total number of sample observations at level r of Variable A, nc is the total number of sample observations at level c of Variable B, and n is the total sample size.

Test statistic. The test statistic is a chi-square random variable (χ²) defined by the following equation:

χ² = Σ [ (Or,c − Er,c)² / Er,c ]

Where Or,c is the observed frequency count at level r of Variable A and level c of Variable B, and Er,c is the expected frequency count at level r of Variable A and level c of Variable B.

P-value. The P-value is the probability of observing a sample statistic as extreme as the test statistic. Since the test statistic follows a chi-square distribution, use the chi-square distribution with the degrees of freedom computed above to assess the probability associated with the test statistic.
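The chi-square steps above can be sketched for a hypothetical 2 × 2 contingency table; the observed counts are made up, and the critical value 3.841 is the standard tabulated chi-square at 5% for 1 df:

```python
# Chi-square test of independence for a hypothetical 2x2 table
observed = [[30, 10],
            [20, 40]]
r, c = len(observed), len(observed[0])

# Degrees of freedom: (r - 1)(c - 1)
df = (r - 1) * (c - 1)

row_tot = [sum(row) for row in observed]
col_tot = [sum(observed[i][j] for i in range(r)) for j in range(c)]
n = sum(row_tot)

# Expected count for each cell: E = (row total x column total) / n
expected = [[row_tot[i] * col_tot[j] / n for j in range(c)] for i in range(r)]

# Test statistic: sum of (O - E)^2 / E over all cells
chi2 = sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
           for i in range(r) for j in range(c))

# Tabulated chi-square at the 5% level for 1 df is 3.841
print(df, round(chi2, 3), chi2 > 3.841)
```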

Interpret Results

If the sample findings are unlikely, given the null hypothesis, the researcher rejects the null hypothesis. Typically, this involves comparing the P-value to the significance level, and rejecting the null hypothesis when the P-value is less than the significance level.
