Reading 12 Hypothesis Testing
1. INTRODUCTION
Statistical inference refers to the process of making judgments regarding a population on the basis of information obtained from a sample. The two branches of statistical inference are:
1) Hypothesis testing: It involves making statements regarding unknown population parameter values based on sample data. In hypothesis testing, we have a hypothesis about a parameter's value and seek to test that hypothesis, e.g. we test the hypothesis "the population mean = 0".
• Hypothesis: A hypothesis is a statement about one or more populations.
2) Estimation: In estimation, we estimate the value of an unknown population parameter using information obtained from a sample.
2. HYPOTHESIS TESTING
Steps in Hypothesis Testing:
1. Stating the hypotheses: It involves formulating the null hypothesis (H0) and the alternative hypothesis (Ha).
2. Determining the appropriate test statistic and its probability distribution: It involves defining the test statistic and identifying its probability distribution.
3. Specifying the significance level: The significance level should be specified before calculating the test statistic.
4. Stating the decision rule: It involves identifying the rejection/critical region of the test statistic and the rejection points (critical values) for the test.
• Critical region (rejection region) is the set of all values of the test statistic that lead to a rejection of the null hypothesis.
• Critical values (rejection points) of the test statistic are the values with which the calculated test statistic is compared in order to decide whether to reject the null hypothesis.
• Acceptance region is the set of values of the test statistic for which the null hypothesis is not rejected.
5. Collecting the data and calculating the test statistic:
The data collected should be free from measurement errors, selection bias and time period bias.
6. Making the statistical decision: It involves comparing the calculated test statistic to a specified possible value or values and testing whether the calculated value of the test statistic falls within the acceptance region.
7. Making the economic or investment decision: The hypothesized values should be both statistically significant and economically meaningful.
Null Hypothesis: The null hypothesis (H0) is the claim that
is initially assumed to be true and is to be tested e.g. it is hypothesized that the population mean risk premium for Canadian equities ≤ 0.
• The null hypothesis always includes the equality condition (=, ≤, or ≥).
Alternative Hypothesis: The alternative hypothesis (Ha) is
the claim that is contrary to H0. It is accepted when the
null hypothesis is rejected e.g. the alternative hypothesis is that the population mean risk premium for Canadian equities > 0.
• The alternative hypothesis always contains a strict inequality (≠, >, or <).
Formulations of Hypotheses: The null and alternative hypotheses can be formulated in three different ways:
1. H0: θ = θ0 versus Ha: θ ≠ θ0
• It is a two-sided or two-tailed hypothesis test.
• In this case, the H0 is rejected in favor of Ha if the population parameter ≠ θ0.
2. H0: θ ≤θ0 versus Ha: θ>θ0
• It is a one-sided right tailed hypothesis test.
• In this case, the H0 is rejected in favor of Ha if the
population parameter is > θ0.
3. H0: θ≥θ0 versus Ha: θ< θ0
• It is a one-sided left tailed hypothesis test.
• In this case, the H0 is rejected in favor of Ha if the
population parameter is < θ0.
where,
θ = Value of population parameter
θ0 = Hypothesized value of population parameter
NOTE:
Ha: θ> θ0 and Ha: θ< θ0 more strongly reflect the beliefs of
the researcher.
Test Statistic: A test statistic is a quantity that is calculated using the information obtained from a sample and is used to decide whether or not to reject the null hypothesis.
Test statistic = (Sample statistic − Value of the population parameter under H0) / Standard error of the sample statistic*
• The smaller the standard error of the sample statistic, the larger the value of the test statistic and the greater the probability of rejecting the null hypothesis (all else equal).
• As the sample size (n) increases, the standard error decreases (all else equal).
*When the population S.D. is unknown, the standard error of the sample statistic is given by:
s_X̄ = s / √n

Thus,

Test statistic = (X̄ − µ0) / (s / √n)
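For illustration, this calculation can be sketched in Python; the sample figures below are hypothetical and not taken from the reading.

import math

# Hypothetical sample results (not from the reading)
x_bar = 105.0   # sample mean
mu_0 = 100.0    # hypothesized value of the population mean
s = 20.0        # sample standard deviation
n = 64          # sample size

standard_error = s / math.sqrt(n)                # s / sqrt(n) = 2.5
test_statistic = (x_bar - mu_0) / standard_error
print(test_statistic)                            # 2.0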
When a null hypothesis is tested, it may result in four possible outcomes i.e.
1. A false null hypothesis is rejected → this is a correct decision; the probability of making it is referred to as the power of the test. Power of a test = 1 – Probability of a Type II error.
When more than one test statistic is available to conduct a hypothesis test, the most powerful test should be selected.
2. A true null hypothesis is rejected → this is an incorrect decision and is referred to as a Type-I error.
3. A false null hypothesis is not rejected → this is an incorrect decision and is referred to as a Type-II error.
4. A true null hypothesis is not rejected → this is a correct decision.
Type I and Type II Errors in Hypothesis Testing
True Situation
Decision                 H0 True             H0 False
Do not reject H0         Correct decision    Type II error
Reject H0 (accept Ha)    Type I error        Correct decision
Source: Table 1, CFA® Program Curriculum, Volume 1,
Reading 12.
• Type-I and Type-II errors are mutually exclusive errors.
• The probability of a Type-I error is referred to as a level of significance and is denoted by alpha, α.
o The lower the level of significance at which the null hypothesis is rejected, the stronger the evidence that the null hypothesis is false.
• The probability of a Type-II error is denoted by beta,
β. The probability of type-II error is difficult to quantify.
• All else equal, the smaller the significance level, the smaller the probability of making a type-I error and the greater the probability of making a type-II error.
• The probabilities of both Type I and Type II errors can be reduced simultaneously by increasing the sample size (n).
• A Type I error is generally regarded as more serious than a Type II error.
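The trade-off described in the last two bullets can be sketched for a right-tailed z-test; the true mean, σ, and sample sizes below are hypothetical, and SciPy is assumed to be available.

from scipy.stats import norm

alpha = 0.05
mu_0, mu_true, sigma = 0.0, 0.5, 2.0      # hypothetical values
z_crit = norm.ppf(1 - alpha)              # 1.645; P(Type I error) stays at alpha

for n in (25, 100, 400):
    se = sigma / n ** 0.5
    # Reject H0 when the sample mean exceeds mu_0 + z_crit * se;
    # Type II error probability = P(do not reject | true mean = mu_true)
    beta = norm.cdf(mu_0 + z_crit * se, loc=mu_true, scale=se)
    print(n, round(beta, 3), round(1 - beta, 3))   # beta falls and power rises as n grows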
Rejection Points Approach to Hypothesis Testing:
Critical region for two-tailed test at 5% level of significance (i.e. α = 0.05):
The two critical/rejection points are z0.025 = 1.96 and –z0.025 = –1.96.
• The Null hypothesis is rejected when Z < -1.96 or Z > 1.96; otherwise, it is not rejected.
Critical region for one-tailed test at 5% level of significance (i.e. α = 0.05):
• Null hypothesis: H0: θ≤θ0 • Alternative hypothesis: Ha: θ>θ0
The critical/rejection point is Z0.05 = 1.645.
• The Null hypothesis is rejected when Z > 1.645; otherwise, it is not rejected.
Critical region for one-tailed test at 5% level of significance (i.e. α = 0.05):
• Null hypothesis: H0: θ≥θ0 • Alternative hypothesis: Ha: θ<θ0
The critical/rejection point is –Z0.05 = –1.645.
• The Null hypothesis is rejected when Z < -1.645; otherwise, it is not rejected.
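These rejection points are simply quantiles of the standard normal distribution; a minimal sketch, assuming SciPy is available:

from scipy.stats import norm

alpha = 0.05
print(norm.ppf(1 - alpha / 2))   # 1.96  -> two-tailed rejection points are +/-1.96
print(norm.ppf(1 - alpha))       # 1.645 -> right-tailed rejection point
print(norm.ppf(alpha))           # -1.645 -> left-tailed rejection point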
Confidence Interval Approach to Hypothesis Testing: The 95% confidence interval for the population mean is stated as:
X̄ ± 1.96 s_X̄

• It implies that there is a 95% probability that the interval X̄ ± 1.96 s_X̄ contains the population mean's value.

Lower limit = X̄ − 1.96 s_X̄
Upper limit = X̄ + 1.96 s_X̄

• When the hypothesized population mean (µ0) < the lower limit, H0 is rejected.
• When the hypothesized population mean (µ0) > the upper limit, H0 is rejected.
• When the hypothesized population mean (µ0) lies between the lower and upper limits, H0 is not rejected.
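A minimal sketch of the confidence-interval approach; the sample figures are hypothetical.

import math

x_bar, s, n = 105.0, 20.0, 64   # hypothetical sample results
mu_0 = 100.0                    # hypothesized population mean

se = s / math.sqrt(n)
lower, upper = x_bar - 1.96 * se, x_bar + 1.96 * se   # 95% confidence interval
print(lower, upper)                                   # 100.1, 109.9

# H0 is rejected only if mu_0 falls outside the interval
print("reject H0" if mu_0 < lower or mu_0 > upper else "do not reject H0")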
P-value Approach to hypothesis testing: The p-value is also known as the marginal significance level. The p-value is the smallest level of significance at which the null hypothesis can be rejected.
• The smaller the p-value, the stronger the evidence against the null hypothesis.
• The p-value approach is considered more efficient relative to rejection points approach.
Decision Rule:
• When p-value < α, reject H0.
• When p-value ≥ α, do not reject H0.
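A short sketch of the p-value decision rule for a two-tailed z-test; the calculated z below is hypothetical, and SciPy is assumed to be available.

from scipy.stats import norm

z = 2.0                                 # hypothetical calculated test statistic
alpha = 0.05
p_value = 2 * (1 - norm.cdf(abs(z)))    # two-tailed p-value, about 0.0455
print(round(p_value, 4))
print("reject H0" if p_value < alpha else "do not reject H0")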
3.1 Tests Concerning a Single Mean
Calculating the test statistic for hypothesis tests concerning the population mean of a normally distributed population:
A. When the sample size is large or small but the population S.D. is known, the test statistic is calculated as follows:

Z = (X̄ − µ0) / (σ / √n)

where,
X̄ = sample mean
µ0 = the hypothesized value of the population mean
σ = the known population standard deviation
Sample size, n ≥ 30 is treated as large sample.
B. When the sample size is large but population S.D. is unknown, the test statistic is calculated as follows:
Z = (X̄ − µ0) / (s / √n)

where,
s = the sample standard deviation
C. When the population S.D. is unknown and
• the sample size is large, or
• the sample size is small but the population sampled is normally distributed (or approximately normally distributed),
the test statistic is calculated as follows:

t(n−1) = (X̄ − µ0) / (s / √n)

where,
n − 1 = degrees of freedom
s = sample standard deviation
NOTE:
As the sample size increases, the difference between the rejection points for the t-test and z-test decreases.
Test Concerning the Population Mean (Population Variance Unknown)
Source: Table 2, CFA® Program Curriculum, Volume 1, Reading 12.
Rejection Points for a z-Test:
A.Significance level of α = 0.10.
1. H0: θ = θ0 versus Ha: θ≠θ0. The rejection points are
z0.05 = 1.645 and –z0.05 = -1.645.
Decision Rule: Reject the null hypothesis if z > 1.645 or if z < –1.645.
2. H0: θ ≤ θ0 versus Ha: θ > θ0. The rejection point is z0.10 = 1.28.
Decision Rule: Reject the null hypothesis if z > 1.28.
3. H0: θ ≥ θ0 versus Ha: θ < θ0. The rejection point is –z0.10 = –1.28.
Decision Rule: Reject the null hypothesis if z < –1.28.
B. Significance level of α = 0.05.
1. H0: θ = θ0 versus Ha: θ≠θ0. The rejection points are
z0.025 = 1.96 and –z0.025 = -1.96.
Decision Rule: Reject the null hypothesis if z > 1.96 or if z < -1.96.
2. H0: θ ≤ θ0 versus Ha: θ > θ0. The rejection point is z0.05 = 1.645.
Decision Rule: Reject the null hypothesis if z > 1.645.
3. H0: θ ≥ θ0 versus Ha: θ < θ0. The rejection point is –z0.05 = –1.645.
Decision Rule: Reject the null hypothesis if z < –1.645.
C.Significance level of α = 0.01.
1. H0: θ = θ0 versus Ha: θ≠θ0. The rejection points are
z0.005 = 2.575 and –z0.005 = -2.575.
Decision Rule: Reject the null hypothesis if z > 2.575 or if z < -2.575.
2. H0: θ ≤ θ0 versus Ha: θ > θ0. The rejection point is z0.01 = 2.33.
Decision Rule: Reject the null hypothesis if z > 2.33.
3. H0: θ ≥ θ0 versus Ha: θ < θ0. The rejection point is –z0.01 = –2.33.
Decision Rule: Reject the null hypothesis if z < –2.33.
Example:
Suppose a sample yields X̄ = 372.5 with known σ = 15, and the calculated test statistic is Z = (X̄ − µ0) / (σ / √n) = 1.50.

Since it is a two-tailed test, the critical values are ±1.96.
Decision Rule: Reject H0 when the calculated value of Z > +1.96 or < –1.96.
• Since the calculated Z-value (1.50) is neither > 1.96 nor < –1.96, we do not reject H0 at the 5% level of significance.

Since it is a one-tailed test, the critical value is 1.645.
Decision Rule: Reject H0 when the calculated value of Z > 1.645.
• Since the calculated Z-value (1.50) is not > 1.645, we do not reject H0 at the 5% level of significance.
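The example does not state the hypothesized mean or the sample size; the sketch below uses µ0 = 368 and n = 25 purely as hypothetical values, chosen so that the calculated Z equals the stated 1.50.

import math

x_bar, sigma = 372.5, 15.0   # given in the example
mu_0, n = 368.0, 25          # hypothetical values (not stated in the example)

z = (x_bar - mu_0) / (sigma / math.sqrt(n))
print(round(z, 2))                                                       # 1.50
print("two-tailed:", "reject H0" if abs(z) > 1.96 else "do not reject H0")
print("one-tailed:", "reject H0" if z > 1.645 else "do not reject H0")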
Example:
Suppose, an equity fund has been in existence for 25 months. It has achieved a mean monthly return of 2.50% with sample S.D. of 3.00%. It was expected to earn a 2.10% mean monthly return during that time period.
H0: Underlying mean return on the equity fund (µ) = 2.10% versus Ha: µ ≠ 2.10%, tested at the 10% significance level with df = n − 1 = 24 (critical values ±1.711).
Decision Rule: Reject the null hypothesis when t > 1.711 or t < –1.711.
t-statistic = (2.50 − 2.10) / (3.00 / √25) = 0.40 / 0.60 = 0.667
• Since the calculated t-value is neither > 1.711 nor < –1.711, we do not reject the null hypothesis at the 10% significance level.
Using Confidence interval approach:
X̄ ± tα/2 (s / √n)

where,
tα/2 → α/2 of the probability remains in the right tail.
–tα/2 → α/2 of the probability remains in the left tail.

The 90% confidence interval is:
2.5 – (1.711)(0.60) = 1.473 and 2.5 + (1.711)(0.60) = 3.527, i.e. [1.473, 3.527].
• Since hypothesized value of mean return i.e. 2.10% falls within this confidence interval, H0 is not rejected.
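A sketch reproducing this t-test and the 90% confidence interval from the summary statistics, assuming SciPy is available:

import math
from scipy.stats import t

n, x_bar, s, mu_0 = 25, 2.50, 3.00, 2.10   # from the example
alpha = 0.10

se = s / math.sqrt(n)                       # 0.60
t_stat = (x_bar - mu_0) / se                # about 0.667
t_crit = t.ppf(1 - alpha / 2, df=n - 1)     # about 1.711
print(round(t_stat, 3), round(t_crit, 3))
print("reject H0" if abs(t_stat) > t_crit else "do not reject H0")

# Equivalent 90% confidence interval: about [1.473, 3.527]
print(x_bar - t_crit * se, x_bar + t_crit * se)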
3.2 Tests Concerning Differences between Means
1. H0: µ1 – µ2 = 0 versus Ha: µ1 – µ2 ≠ 0 (i.e. µ1 ≠ µ2)
2. H0: µ1 – µ2 ≤ 0 versus Ha: µ1 – µ2 > 0 (i.e. µ1 > µ2)
3. H0: µ1 – µ2 ≥ 0 versus Ha: µ1 – µ2 < 0 (i.e. µ1 < µ2)
where,
µ1 = population mean of the first population
µ2 = population mean of the second population
Test Statistic for a Test of the Difference between Two Population Means (Normally Distributed Populations, Population Variances Unknown but Assumed Equal) based on Independent samples: A t-test based on independent random samples is given by:
t = [(X̄1 − X̄2) − (µ1 − µ2)] / √(sp²/n1 + sp²/n2)

where,
sp² = pooled variance = [(n1 − 1)s1² + (n2 − 1)s2²] / (n1 + n2 − 2)
degrees of freedom = n1 + n2 − 2
Test Statistic for a Test of the Difference between Two Population Means (Normally Distributed Populations, Unequal and Unknown Population Variances) based on independent samples: In this case, an approximate t-test based on independent random samples is given by:
t = [(X̄1 − X̄2) − (µ1 − µ2)] / √(s1²/n1 + s2²/n2)
In this case, modified degrees of freedom are used, calculated as follows:

df = (s1²/n1 + s2²/n2)² / [ (s1²/n1)² / n1 + (s2²/n2)² / n2 ]

Practice: Example 2 & 3, Volume 1, Reading 12.
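No sample data are given here, so the sketch below uses two small hypothetical samples to show both versions via scipy.stats.ttest_ind (equal_var=True gives the pooled-variance test; equal_var=False gives the approximate test with modified degrees of freedom).

from scipy import stats

# Hypothetical independent samples (not from the reading)
sample_1 = [4.2, 5.1, 6.3, 5.8, 4.9, 5.5]
sample_2 = [3.9, 4.4, 4.1, 5.0, 4.6, 4.2]

# Pooled-variance t-test (population variances assumed equal)
t_pooled, p_pooled = stats.ttest_ind(sample_1, sample_2, equal_var=True)

# Approximate t-test with modified degrees of freedom (variances not assumed equal)
t_unequal, p_unequal = stats.ttest_ind(sample_1, sample_2, equal_var=False)

print(round(t_pooled, 3), round(p_pooled, 3))
print(round(t_unequal, 3), round(p_unequal, 3))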
3.3 Tests Concerning Mean Differences
When samples are dependent, the test concerning mean differences is referred to as paired comparisons test and is conducted as follows.
1. H0: µd= µd0 versus Ha: µd≠µd0
2. H0: µd≤µd0 versus Ha: µd>µd0
3. H0: µd≥µd0 versus Ha: µd<µd0
where,
d = difference between two paired observations = xAi - xBi
where
xAi and xBi are the ith pair of observations, i = 1, 2, …, n, on the two random variables.
µd = population mean difference.
µd0 = hypothesized value for the population mean
difference
Test Statistic for a Test of Mean Differences (Normally Distributed Populations, Unknown Population Variances):
t = (d̄ − µd0) / s_d̄

where,
d̄ = sample mean difference = (1/n) Σ di, i = 1 to n
sd² = sample variance of the differences = Σ (di − d̄)² / (n − 1)
Sample S.D. = sd = √sd²
n = number of pairs of observations
Standard error of the sample mean difference = s_d̄ = sd / √n
Degrees of freedom = n − 1
Example:
• H0: The mean quarterly return on Portfolio A = Mean
quarterly return on Portfolio B from 2000 to 2005.
• Ha: The mean quarterly return on Portfolio A ≠ Mean
quarterly return on Portfolio B from 2000 to 2005.
The two portfolios share the same set of risk factors; thus, their returns are dependent (not independent). Hence, a paired comparisons test should be used.
The following test is conducted:
H0: µd = 0 versus Ha: µd≠ 0 at a 10% significance level.
where,
µd = population mean value of difference between the
returns on the two portfolios 2000 to 2005.
Suppose,
• Sample mean difference between Portfolio A and Portfolio B = d̄ = –0.60% per quarter.
• Sample S.D. of the differences = sd = 6.50%.
• Total sample size = n = 6 years × 4 = 24 quarters.
• The standard error of the sample mean difference = s_d̄ = 6.50 / √24 = 1.326807.
• The t-value from the table with degrees of freedom = n − 1 = 24 − 1 = 23 and 0.10/2 = 0.05 significance level is ±1.714.
Decision rule: Reject H0 if t > 1.714 or if t < –1.714.
Calculated test statistic = t = –0.60 / 1.326807 = –0.452213
• Since the calculated t-statistic is neither < –1.714 nor > 1.714, we fail to reject the null hypothesis at the 10% significance level.
Thus, we conclude that the difference in mean quarterly returns is not statistically significant at 10% significance level.
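A sketch reproducing this paired-comparisons test from the summary statistics, assuming SciPy is available:

import math
from scipy.stats import t

n, d_bar, s_d, mu_d0 = 24, -0.60, 6.50, 0.0   # from the example
alpha = 0.10

se = s_d / math.sqrt(n)                    # about 1.3268
t_stat = (d_bar - mu_d0) / se              # about -0.452
t_crit = t.ppf(1 - alpha / 2, df=n - 1)    # about 1.714
print(round(t_stat, 3), round(t_crit, 3))
print("reject H0" if abs(t_stat) > t_crit else "do not reject H0")   # do not reject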
4.1 Tests Concerning a Single Variance
We can formulate hypotheses as follows:
1. H0: σ² = σ0² versus Ha: σ² ≠ σ0²
2. H0: σ² ≤ σ0² versus Ha: σ² > σ0²
3. H0: σ² ≥ σ0² versus Ha: σ² < σ0²
where,
σ0² = hypothesized value of the population variance (σ²).
Test Statistic for Tests Concerning the Value of a Population Variance (Normal Population): If we have n independent observations from a normally distributed population, the appropriate test statistic is chi-square test statistic, denoted χ2.
χ² = (n − 1)s² / σ0²

where,
n − 1 = degrees of freedom.
s² = sample variance, calculated as follows:

s² = Σ (Xi − X̄)² / (n − 1), i = 1 to n
Assumptions of the chi-square distribution:
• The sample is a random sample, and
• the sample is taken from a normally distributed population.
Properties of the chi-square distribution:
• Unlike the normal and t-distributions, the chi-square distribution is asymmetrical.
• Unlike the t-distribution, the chi-square distribution is bounded below by 0 i.e. χ2 values cannot be
negative.
• Unlike the t-test, the chi-square test is sensitive to violations of its assumptions and gives incorrect results when its assumptions do not hold.
• Like the t-distribution, the shape of the chi-square distribution depends upon the degrees of freedom i.e. as the number of degrees of freedom increases, the chi-square distribution becomes more symmetric.
Rejection Points for Hypothesis Tests on the Population Variance:
1. Two-tailed test: H0: σ² = σ0² versus Ha: σ² ≠ σ0²
Decision Rule: Reject H0 if
i. the test statistic > the upper α/2 point (χ²α/2) of the chi-square distribution with df = n − 1, or
ii. the test statistic < the lower α/2 point (χ²1−α/2) of the chi-square distribution with df = n − 1.
2. Right-tailed test: H0: σ² ≤ σ0² versus Ha: σ² > σ0²
Decision Rule: Reject H0 if the test statistic > the upper α point of the chi-square distribution with df = n − 1.
3. Left-tailed test: H0: σ² ≥ σ0² versus Ha: σ² < σ0²
Decision Rule: Reject H0 if the test statistic < the lower α point of the chi-square distribution with df = n − 1.
Finding the critical values for the chi-square distribution from a table:
• For a right-tailed test, use the value corresponding to d.f. and α.
• For a left-tailed test, use the value corresponding to d.f. and 1 - α.
• For a two-tailed test, use the values corresponding to d.f. & α/2 and d.f. & 1 − α/2.
Example:
Suppose,
H0: The variance, σ2≤ 0.25.
Ha: The variance, σ2 > 0.25.
It is a right-tailed test with level of significance (α) = 0.05 and d.f. = 41 – 1 = 40 degrees. Using the chi-square table, the critical value is 55.758.
Decision rule: Reject H0 if χ2 > 55.758.
Using the χ²-test with n = 41 and s² = 0.27, the standardized test statistic is:

χ² = (n − 1)s² / σ0² = (41 − 1)(0.27) / 0.25 = 43.2

• Since χ² = 43.2 is not > 55.758, we fail to reject H0.
Chi-square confidence intervals for variance: Unlike confidence intervals based on z or t-statistics, chi-square confidence intervals for variance are asymmetric. A two-sided confidence interval for population variance, based on a sample of size n is as follows:
• Lower limit = L = (n − 1)s² / χ²α/2
• Upper limit = U = (n − 1)s² / χ²1−α/2
When the hypothesized value of the population variance lies within these two limits, we fail to reject the null hypothesis.
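A sketch reproducing the chi-square test above and the corresponding two-sided confidence limits for the variance, assuming SciPy is available:

from scipy.stats import chi2

n, s2, sigma2_0, alpha = 41, 0.27, 0.25, 0.05   # from the example
df = n - 1

chi2_stat = (n - 1) * s2 / sigma2_0     # 43.2
chi2_crit = chi2.ppf(1 - alpha, df)     # about 55.758 (upper alpha point)
print(round(chi2_stat, 2), round(chi2_crit, 3))
print("reject H0" if chi2_stat > chi2_crit else "do not reject H0")

# Two-sided 95% confidence interval for the population variance
lower = (n - 1) * s2 / chi2.ppf(1 - alpha / 2, df)   # divide by the upper alpha/2 point
upper = (n - 1) * s2 / chi2.ppf(alpha / 2, df)       # divide by the lower alpha/2 point
print(round(lower, 4), round(upper, 4))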
4.2 Tests Concerning the Equality (Inequality) of Two Variances
1. H0: σ1² = σ2² versus Ha: σ1² ≠ σ2²
(σ1² = σ2² implies that σ1² / σ2² = 1.)
2. H0: σ1² ≤ σ2² versus Ha: σ1² > σ2²
3. H0: σ1² ≥ σ2² versus Ha: σ1² < σ2²
Tests concerning the difference between the variances of two populations based on independent random samples are based on an F-test and F-distribution. F-test is a ratio of sample variances.
Properties of F-distribution:
• Like the chi-square distribution, the F-distribution is non-symmetrical distribution i.e. it is skewed to the right.
• Like the chi-square distribution, the F-distribution is bounded from below by 0 i.e. F ≥ 0.
• The F-distribution depends on two parameters: the numerator and denominator degrees of freedom.
• Like the chi-square test, the F-test is sensitive to violations of its assumptions.
Relationship between the chi-square and F-distribution:
F = (χ12 / m) ÷ (χ22 / n)
• It follows an F-distribution with m numerator and n denominator degrees of freedom.
where,
χ12 is one chi-square random variable with m degrees of freedom.
χ22 is another chi-square random variable with n degrees of freedom.
Test Statistic for Tests Concerning Differences between the Variances of Two Populations (Normally Distributed Populations):
Assumption: The samples are random and independent and taken from normally distributed populations.
F = s1² / s2²

where,
s1² = sample variance of the first sample with n1 observations.
s2² = sample variance of the second sample with n2 observations.
df1 = n1 − 1 numerator degrees of freedom.
df2 = n2 − 1 denominator degrees of freedom.
NOTE:
The value of the test statistic is always ≥ 1.
Convention regarding test statistic: We use the larger of the two ratios s21 / s22 or s22 / s21 as the actual test statistic.
Rejection Points for Hypothesis Tests on the Relative Values of Two Population Variances:
A.When the convention of using the larger of the two ratios s21 / s22 or s22 / s21 is followed:
1. Two-tailed test: H0: σ1² = σ2² versus Ha: σ1² ≠ σ2²
Decision Rule: Reject H0 at the α significance level if the test statistic > the upper α/2 point of the F-distribution with the specified numerator and denominator degrees of freedom.
2. Right-tailed test: H0: σ1² ≤ σ2² versus Ha: σ1² > σ2²
Decision Rule: Reject H0 at the α significance level if the test statistic > the upper α point of the F-distribution with the specified numerator and denominator degrees of freedom.
3. Left-tailed test: H0: σ1² ≥ σ2² versus Ha: σ1² < σ2²
Decision Rule: Reject H0 at the α significance level if the test statistic > the upper α point of the F-distribution with the specified numerator and denominator degrees of freedom.
B. When the convention of using the larger of the two ratios s1²/s2² or s2²/s1² is NOT followed: In this case, if the calculated value of F < 1, the F-table can still be used via the reciprocal property of F-statistics, i.e.

F(n, m) = 1 / F(m, n)

Important to Note:
• For a two-tailed test at the α level of significance, the rejection points in F-table are found at α / 2
significance level.
Example:
Suppose,
H0: σ1² ≤ σ2²
Ha: σ1² > σ2²
• n1 = 16
• n2 = 16
• s1² = 5.8
• s2² = 1.7
• df1 = df2 = 15
From F table with 15 and 15 df and α = 0.05, the critical value of F = 2.40 (from the table below).
Decision Rule: Reject H0 if calculated F-statistic > critical
value of F.
Since s1² > s2², we use the convention F = s1² / s2².

F = s1² / s2² = 5.8 / 1.7 = 3.41
• Since calculated F-statistic (3.41) > 2.40, we reject H0
at 5% significance level.
F-values for α = 0.05
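A sketch reproducing this F-test, assuming SciPy is available (f.ppf gives the upper rejection point shown in the table):

from scipy.stats import f

n1, n2 = 16, 16
s1_sq, s2_sq = 5.8, 1.7   # from the example; the larger variance goes in the numerator
alpha = 0.05

f_stat = s1_sq / s2_sq                       # about 3.41
f_crit = f.ppf(1 - alpha, n1 - 1, n2 - 1)    # about 2.40 with 15 and 15 df
print(round(f_stat, 2), round(f_crit, 2))
print("reject H0" if f_stat > f_crit else "do not reject H0")   # reject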
5. OTHER ISSUES: NONPARAMETRIC INFERENCE
Parametric test: A parametric test is a hypothesis test regarding a parameter or a hypothesis test that is based on specific distributional assumptions.
• Parametric tests are robust i.e. they are relatively unaffected by violations of the assumptions.
• Parametric tests have greater statistical power relative to corresponding non-parametric tests.
Non parametric test: A non parametric test is a test that is either not regarding a parameter or is based on minimal assumptions about the population.
• Nonparametric tests are considered distribution-free methods because they do not rely on any
underlying distributional assumption.
• Nonparametric statistics are useful when the data are not normally distributed.
A nonparametric test is mainly used in three situations:
1) When the data do not meet distributional assumptions.
2) When the data are given in ranks.
3) When the hypothesis is not related to a parameter.
In a nonparametric test, generally, observations (or a function of observations) are converted into ranks according to their magnitude. Thus, the null hypothesis is stated as a thesis regarding ranks or signs. The non-parametric test can also be used when the original data are already ranked.
Important to Note: A nonparametric test is less powerful, i.e. the probability of correctly rejecting the null hypothesis is lower. So when the data meet the assumptions, parametric tests should be used.
Example: If we want to test whether a sample is random or not, we will use the appropriate nonparametric test (a so-called runs test).
Table 9 in the curriculum pairs common parametric tests with their nonparametric counterparts (e.g., rank tests and the sign test).
Source: Table 9, CFA® Program Curriculum, Volume 1, Reading 12.
5.1 Tests Concerning Correlation: The Spearman Rank Correlation Coefficient
When the population under consideration does not meet the distributional assumptions of the parametric test of correlation, a test based on the Spearman rank correlation coefficient, rS, can be used.
Steps of Calculating rS:
1. Rank the observations on X in descending order i.e. from largest to smallest.
• The observation with the largest value is assigned number 1.
• The observation with second-largest value is assigned number 2, and so on.
• If two observations have equal values, each tied observation is assigned the average of the ranks that they jointly occupy e.g. if the 4th and 5th-largest values are tied, both observations are assigned the rank of 4.5 (the average of 4 and 5).
2. Calculate the difference, di, between the ranks of
each pair of observations on X and Y.
3. The Spearman rank correlation is calculated as:
rS = 1 − [6 Σ di²] / [n(n² − 1)], i = 1 to n
a) For small samples, the rejection points for the test based on rS are found using Table 11 below.
b) For large samples (i.e. n > 30), a t-test can be used to test the hypothesis, with test statistic:

t = rS √(n − 2) / √(1 − rS²), which has df = n − 2.
Portfolio Managers         1       2       3       4       5
Sharpe Ratio (X)       –1.50   –1.00   –0.90   –1.00   –0.95
Management Fee (Y)      1.25    0.95    0.90    0.98    0.90
X Rank                     5     3.5       1     3.5       2
Y Rank                     1       3     4.5       2     4.5
di (X Rank – Y Rank)       4     0.5    –3.5     1.5    –2.5
di²                       16    0.25   12.25    2.25    6.25
Sum of di² = 37
• The first two rows in the table above contain the original data.
• In the row of X Rank, the Sharpe ratios are converted into ranks.
• In the row of Y Rank, the management fees are converted into ranks.
It is a two-tailed test with a 0.05 significance level and sample size (n) = 5.
NOTE:
Neither X nor Y is normally distributed, so the assumptions of the parametric t-test are not met.
rS = 1 – [(6 Σ di²) / (n(n² – 1))] = 1 – (6 × 37) / [5(25 – 1)] = –0.85
Important to Note: Since the sample size is small i.e. (n < 30), the rejection points for the test must be looked up in Table 11.
•Upper-tail rejection point for n = 5 and α/2 = 0.05/ 2 = 0.025 from table 11 is 0.9000.
Decision Rule: Reject H0 if rS> 0.900 or rS<–0.900.
Since rs is neither < -0.900 nor > 0.900, we do not reject
the null hypothesis.
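A sketch reproducing this calculation; it ranks from largest to smallest (as described in the steps), averages tied ranks, and applies the formula above (SciPy's built-in spearmanr treats ties slightly differently, so the formula is applied directly here):

from scipy.stats import rankdata

sharpe = [-1.50, -1.00, -0.90, -1.00, -0.95]   # X, from the example
fees = [1.25, 0.95, 0.90, 0.98, 0.90]          # Y, from the example
n = len(sharpe)

# Rank in descending order (largest value gets rank 1), averaging tied ranks
x_rank = rankdata([-v for v in sharpe], method="average")
y_rank = rankdata([-v for v in fees], method="average")

sum_d2 = sum((xr - yr) ** 2 for xr, yr in zip(x_rank, y_rank))   # 37.0
r_s = 1 - 6 * sum_d2 / (n * (n ** 2 - 1))                        # -0.85
print(sum_d2, round(r_s, 2))

# Small sample (n < 30): compare with the Table 11 rejection point of 0.9000
print("reject H0" if abs(r_s) > 0.9000 else "do not reject H0")   # do not reject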
Spearman Rank Correlation Distribution Approximate Upper-Tail Rejection Points
Sample Size: n     α = 0.05     α = 0.025     α = 0.01
5 0.8000 0.9000 0.9000
6 0.7714 0.8286 0.8857
7 0.6786 0.7450 0.8571
8 0.6190 0.7143 0.8095
9 0.5833 0.6833 0.7667
10 0.5515 0.6364 0.7333
11 0.5273 0.6091 0.7000
12 0.4965 0.5804 0.6713
13 0.4780 0.5549 0.6429
14 0.4593 0.5341 0.6220
15 0.4429 0.5179 0.6000
16 0.4265 0.5000 0.5824
17 0.4118 0.4853 0.5637
18 0.3994 0.4716 0.5480
19 0.3895 0.4579 0.5333
20 0.3789 0.4451 0.5203
21 0.3688 0.4351 0.5078
22 0.3597 0.4241 0.4963
23 0.3518 0.4150 0.4852
24 0.3435 0.4061 0.4748
25 0.3362 0.3977 0.4654
26 0.3299 0.3894 0.4564
27 0.3236 0.3822 0.4481
28 0.3175 0.3749 0.4401
29 0.3113 0.3685 0.4320
30 0.3059 0.3620 0.4251
NOTE:
The corresponding lower-tail critical value is obtained by changing the sign of the upper-tail critical value.
Source: Table 11, CFA® Program Curriculum, Volume 1,
Reading 12.