Accounting Research Center, Booth School of Business, University of Chicago
Effects of Audit Report Wording Changes on the Perceived Message Author(s): K. E. Bailey III, Joseph H. Bylinski and Michael D. Shields
Source: Journal of Accounting Research, Vol. 21, No. 2 (Autumn, 1983), pp. 355-370
Published by: Wiley on behalf of Accounting Research Center, Booth School of Business, University of Chicago
Stable URL: http://www.jstor.org/stable/2490779. Accessed: 30/08/2013 11:40
Effects of Audit Report Wording Changes on the Perceived Message
K. E. BAILEY III,* JOSEPH H. BYLINSKI,† AND MICHAEL D. SHIELDS††
1. Introduction
The Commission on Auditors' Responsibilities (CAR) (AICPA [1978]) concluded that existing audit reports are misunderstood by many readers. CAR identified miscommunication of the character of the audit and the auditor's responsibility in conducting it as major problems, arising because all of the reports' intended messages are not explicitly stated. In consideration of CAR's conclusion, the Auditing Standards Board (ASB) (AICPA [1980]) issued, and subsequently withdrew seven months later (Journal of Accountancy [April 1981]: 3), an exposure draft proposing seven specific wording changes to the standard audit reports.
The ASB's decision to withdraw the exposure draft was based on comment letters arguing that the proposed wording changes were not a cost-effective improvement over the existing reports. Subsequently, the ASB (AICPA [1982]) implemented an educational program to explain the intended meaning of audit reports. An implication of this educational program is that readers' knowledge about audit reports influences the messages they receive.
It is important to examine empirically whether either wording changes such as those proposed by the ASB or audit report knowledge can cause material differences in the perceived messages of audit reports. The issue is important in order to determine whether individuals, when
* Associate Professor, Wake Forest University; †Assistant Professor, University of North Carolina at Chapel Hill; ††Associate Professor, University of Arizona. We appreciate Howard Rockness' constructive criticisms on the design of the first experiment, and we are indebted to Bob Libby for his comments on the design of the second experiment. Special thanks to Bob Rouse for his help. [Accepted for publication February 1983.]
making resource allocation decisions in nonpublic market settings, such as bank lending or investing in nonpublic firms, correctly perceive the intended messages of the audit reports. An understanding of how individuals perceive audit report messages may also be important in public market settings if misperceptions cause individuals to make resource allocation decisions which are contrary to their portfolio preferences.
The experiments used here were conducted in the context of investing in a nonpublic firm.
Our study had two specific objectives. First, we studied differences in the perceived messages between existing audit reports and those proposed by the ASB. These perceived differences were measured in terms of qualities or attributes that the ASB exposure draft implied audit reports should possess. Second, we studied differences in the messages perceived by two groups of readers having different levels of audit report knowledge.
Our results indicated, first, that the AICPA's proposed wording changes did shift readers' perceptions of the responsibility for financial statements from the auditor toward management, a desired outcome of the AICPA. Second, we found that more knowledgeable readers placed more responsibility on management than less knowledgeable readers. In section 2, related research is discussed and three hypotheses are developed. Section 3 presents the research design. Sections 4 and 5 report the analysis and results of two related laboratory experiments. Our conclusion is given in section 6.
2. Hypotheses
No prior accounting research has empirically examined the effect of audit report wording changes on reader perceptions. In a somewhat related study, Libby [1979] examined differences in the messages communicated to auditors and bankers by a set of existing audit reports. He used multidimensional scaling (MDS) to develop two-dimensional object (audit report) spaces representing auditors' and bankers' perceptual relationships among the audit reports. He concluded that the two groups' perceptions were similar and that his subjects viewed the relationships
among the audit reports in a manner consistent with the classification in Statement on Auditing Standards No. 2 (AICPA [1974]).
In this study, we employed Libby's MDS approach to test empirically whether readers' perceptions of the messages of a set of audit reports were sensitive to the seven wording changes proposed by the ASB, given in table 1. Examination of these proposed wording changes revealed that they were across-the-board, involving the same modifications to all of the regularly issued reports. This suggests that the ASB intended to preserve the same relative meanings of different report types (i.e., unqualified, qualified, disclaimer) within the proposed set of reports, as perceived within the existing reports. Hence, the intended effect of the wording changes was an absolute difference in the message for all report
TABLE 1
ASB Proposed Changes to the Audit Reports
1. Add the word independent to the report title.
2. Assert that the financial statements are the representations of management.
3. Add a statement that an audit is intended to provide reasonable, but not absolute, assurance as to whether financial statements taken as a whole are free of material misstatements.
4. Replace the word examined with audited.
5. State in the scope paragraph that application of (generally accepted auditing standards) requires judgment in determining the nature, timing, and extent of tests and other procedures and in evaluating the results of those procedures.
6. Delete the word fairly.
7. Delete the reference to consistent application of GAAP.
types; in particular, a decrease in perceived auditor responsibility. This led to the following substantive hypotheses.
H1: There is no relative difference in the perceived messages between existing and ASB-proposed wordings for a set of audit reports.
H2: There is an absolute difference in the perceived messages between existing and ASB-proposed wordings for a set of audit reports.
A policy consideration in the design of audit reports is whether readers' perceptions vary as a function of certain individual characteristics.1 The ASB, by adopting its educational program, has identified audit report knowledge as a potential determinant of readers' perceptions. This led to a third substantive hypothesis.
H3: There is a difference in the perceived messages for a set of audit reports between two groups of readers having different levels of audit report knowledge.
3. Research Design
Two laboratory experiments were designed to provide separate tests of H1 and H2. Both experiments tested H3. After Libby [1979], our experiments were conducted in the context of a fictitious firm, Bottling Enterprises, Inc. (BEI). BEI was described as a large, independent, closely held soft drink bottling company, audited by a Big Eight CPA firm and not subject to SEC reporting regulations. This type of company was used because it is the type that occasionally receives an other-than-unqualified audit report. We chose a set of ten existing audit reports, also taken from Libby [1979], which were in accordance with Statement on Auditing Standards No. 1 (AICPA [1972]) except that the middle paragraphs did not contain dollar amounts or footnote references. Ten matching audit reports with proposed wordings were patterned after examples in the ASB exposure draft. The set of reports is given in table 2.
1 For instance, Libby [1979] examined whether CPAs and bankers had the same perceptions of a set of audit reports. A critical difference between his two subject groups was occupational perspective; no noteworthy differences were found.
TABLE 2
Set of Audit Reports

Report Number and Label (Report Type, Reason for Departure)
1. Unqualified
2. Qualified, Asset Realization
3. Disclaimer, Asset Realization
4. Qualified, Litigation
5. Disclaimer, Litigation
6. Qualified, Circumstance-Imposed
7. Disclaimer, Circumstance-Imposed
8. Qualified, Client-Imposed
9. Disclaimer, Client-Imposed
10. Disclaimer, Unaudited
4. Relative Difference Experiment
The relative difference experiment was designed to test H1 in particular, and partially H3. The experiment was a 2 × 2 × 10 design with two two-level, between-subject variables: report wording and reader knowledge. There was one ten-level, within-subject variable: the set of reports. We believed that if subjects had been exposed to two alternate wordings for the same audit report, the experiment would have become so transparent as to induce significant demand effects. Hence, we treated the alternate wordings as a between-subject variable, not exposing the same subject to both wordings.
4.1 TASK
Each subject was given a test instrument containing three sections: (1) a two-page photo-reduced handout presenting the ten audit reports, using either the existing or proposed wording; (2) a one-page handout describing BEI and the subject's role, which was to evaluate BEI in consideration of purchasing an equity ownership share; and (3) a three-part computer-generated questionnaire.
Part one of the questionnaire contained all 45 pairs of the ten audit reports, uniquely and randomly ordered for each subject. Each subject was asked to rate the relative similarity of each pair of reports on a nine-point scale anchored by least (1) to most (9) similar. The criteria for
"what is a similar pair" were not specified, allowing each subject to evaluate the reports using his/her own perceptual framework, within the context of the investment decision. Each report was identified by a number (1-10) and a two-part label giving the report type, and if the report was other than unqualified, the specific circumstance causing the departure (see table 2). The use of the labeling system may have induced its own demand effect, but we found during pretests that such an identification system was crucial to task manageability.
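The pairing scheme itself is mechanical: all unordered pairs of the ten reports, shuffled into a unique random order per subject. A minimal sketch (the function name and seed are illustrative, not from the study):

```python
import itertools
import random

def paired_comparisons(n_reports=10, seed=None):
    """All unordered pairs of report numbers, shuffled into a unique
    random order for one subject (45 pairs for 10 reports)."""
    rng = random.Random(seed)
    pairs = list(itertools.combinations(range(1, n_reports + 1), 2))
    rng.shuffle(pairs)  # a different presentation order per subject/seed
    return pairs

print(len(paired_comparisons(seed=1)))  # 45, i.e., 10 * 9 / 2
```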
Part two of the questionnaire asked subjects to rate each of the ten reports on each of 12 attributes. Each attribute had a nine-point rating scale with adjective anchor points, as shown in table 3. The attributes were presented in unique random order, one attribute per page, with ten rating scales on each page, one for each report, also randomized. Part three of the questionnaire gathered the 66 paired similarity ratings for the 12 attributes on a 1-9 scale. The attributes were identified by a number and a descriptive phrase (table 3). This method of presentation was shown during pretests to simplify the rating process. Two half-fractionals were developed, with a subject receiving only one in unique random order.
4.2 PROCEDURE
All subjects completed the experiment at the same time. An experimenter explained the nature of the study, all questions were answered, and the test instrument was distributed. The subjects studied the audit reports for 15 minutes, which provided them with sufficient familiarity to associate the audit reports rapidly with their identifying labels. Then written and oral instructions were given for rating the similarity of the audit reports, using an automobile example for clarification. The subjects were also told to compare the ten reports in order to select the two that were most similar and the two that were least similar. These were to serve as benchmarks when they later rated all 45 pairs. Subjects could revise responses at any time.
After the subjects completed part one, they were given written and oral instructions for part two, which required them to rate the ten reports with respect to the 12 attributes. To clarify this task, we expanded the previous automobile example. After completion of part two, there was a break. Part three involved rating the similarity of the attributes used in part two, and this task was clarified using still another expanded version of the automobile example. After completion of part three, all subjects received a small, fixed fee for participation, which was not based on any aspect of their performance. The entire experiment ran about two hours.
4.3 SUBJECTS
Subjects with two different levels of audit report knowledge were used in the experiment. One group was knowledgeable about both financial statements and audit reports and consisted of 27 individuals who had recently graduated from an undergraduate accounting program and had taken the CPA exam. This group received only the ten proposed audit reports and hereafter is referred to as accountants-proposed (i.e., accounting graduates-proposed reports). The second group was knowledgeable about financial statements, but not about audit reports, and consisted of 44 fourth-year accounting students who had completed advanced accounting but had not yet taken auditing. Members of the group were
randomly split into two equal groups of 22 subjects each, one receiving only the ten proposed audit reports (advanced-proposed) and the other the ten existing reports (advanced-existing).

[TABLE 3 — The 12 attributes and their nine-point adjective-anchored rating scales; table content is not recoverable from the scanned copy]
4.4 ANALYSIS OF DATA
The 12 univariate scales from the second part of the questionnaire were evaluated using multivariate analysis of variance (MANOVA) and analysis of variance (ANOVA). The report and attribute similarities data from questionnaire parts one and three were analyzed using multidimensional scaling (MDS).
The latter is a factor-analysis-like technique based on similarity judgments rather than correlations. MDS algorithms find a physical representation for a set of stimuli (i.e., reports or attributes) such that the distances between them are monotonically related to the similarity judgments given by subjects. Sometimes the analogy is made that an MDS space represents the latent psychological distances between the stimuli in the subjects' minds. Other accounting research which employed MDS includes Belkaoui [1980], Brown [1981], and Rockness and Nikolai [1977]. The program used here was the ALSCAL MDS program (Takane, Young, and de Leeuw [1977]).
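As a rough illustration of what an MDS algorithm optimizes, the following sketch computes Kruskal's stress-1 badness-of-fit for a candidate configuration. The stimulus labels and disparity values are invented; ALSCAL's actual alternating least squares procedure, with its optimal-scaling step for ordinal data, is considerably more involved:

```python
import math

def euclidean(p, q):
    """Euclidean distance between two coordinate tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def stress1(config, disparities):
    """Kruskal stress-1: config maps stimulus -> coordinates, disparities
    maps (i, j) -> target distance (a monotone transform of the similarity
    judgments). Lower stress means a better-fitting configuration."""
    num = den = 0.0
    for (i, j), d_hat in disparities.items():
        d = euclidean(config[i], config[j])
        num += (d - d_hat) ** 2
        den += d ** 2
    return math.sqrt(num / den)

# Toy check: a configuration that exactly reproduces its disparities
# (a 3-4-5 right triangle) has zero stress.
config = {1: (0.0, 0.0), 2: (3.0, 0.0), 3: (0.0, 4.0)}
disp = {(1, 2): 3.0, (1, 3): 4.0, (2, 3): 5.0}
print(stress1(config, disp))  # 0.0
```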
Data from part one were the input for developing report spaces, while data from part three were used to develop attribute spaces. The three groups' data were separately analyzed using a Euclidean model with the subjects treated as replications (Young and Lewyckyj [1979]). The similarity judgments were assumed to be measured at the ordinal level, while the derived stimulus spaces were considered to contain interval information. The selection of the dimensionality of the solutions was based on a scree test using stress, rather than R2. Since the orientations of the ALSCAL solutions were arbitrary relative to any particular system of axes, each group's report and attribute spaces were rotated using an orthogonal Procrustean technique (SOUPAC [1973]) to make the orientations as similar as possible.
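A minimal stand-in for the orthogonal Procrustean rotation, restricted to the two-dimensional, rotation-only case (the SOUPAC routine is not reproduced here; this is a sketch under that simplifying assumption):

```python
import math

def procrustes_rotate_2d(source, target):
    """Rotate centered 2-D source points to best match target points in
    least squares (rotation only). The optimal angle has a closed form
    in two dimensions: theta = atan2(sum cross-products, sum dot-products)."""
    num = sum(x * v - y * u for (x, y), (u, v) in zip(source, target))
    den = sum(x * u + y * v for (x, y), (u, v) in zip(source, target))
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in source]
```

For general dimensionality, the rotation comes from a singular value decomposition of the cross-product matrix; the closed form above works only for two dimensions, which suffices for the two-dimensional spaces used in this study.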
The results of the relative difference experiment are presented below.
First, the MDS results of the audit report similarities data are used to test H1 and H3. These are followed by the MANOVA/ANOVA results for the 12 univariate scales testing the same hypotheses. A summary of major findings concludes the experiment.
4.5 MDS RESULTS FOR H1 AND H3
Intergroup correlations of report space coordinates were computed as one test of H1 and H3. This required that we first select the dimensionality of each solution. A separate scree test for each group's one- through four-dimensional report spaces indicated that a two-dimensional solution was most appropriate for each group.2 The R2 values for the two-dimensional
2 Respective stress values for one through four dimensions were: accountants-proposed: .43, .34, .29, .25; advanced-proposed: .49, .38, .31, .26; advanced-existing: .44, .34, .29, .25.
solutions were .63 (accountants-proposed), .51 (advanced-proposed), and .62 (advanced-existing). These values were similar to Libby's two-dimensional solutions, even though he used the INDSCAL MDS program (Carroll and Chang [1970]) which, because it had one more independent variable, could have the potential of explaining more variance between subjects. Figure 1 contains each group's two-dimensional report space.
These spaces were statistically rotated to match Libby's all-subjects' report space.
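The scree test reported in n. 2 can be approximated programmatically. The sketch below picks the elbow as the dimension with the largest second difference in stress — a crude heuristic stand-in for the authors' visual judgment, not their actual procedure:

```python
def scree_elbow(stress_by_dim):
    """Pick dimensionality at the 'elbow' of a stress-by-dimension curve:
    the dimension whose stress improvement most exceeds the next one's
    (largest second difference). stress_by_dim[0] is for one dimension."""
    drops = [stress_by_dim[k] - stress_by_dim[k + 1]
             for k in range(len(stress_by_dim) - 1)]
    curvature = [drops[k] - drops[k + 1] for k in range(len(drops) - 1)]
    return 2 + curvature.index(max(curvature))  # dimensions are 1-indexed

# Stress values from n. 2 (dimensions one through four):
print(scree_elbow([.43, .34, .29, .25]))  # accountants-proposed -> 2
print(scree_elbow([.49, .38, .31, .26]))  # advanced-proposed -> 2
print(scree_elbow([.44, .34, .29, .25]))  # advanced-existing -> 2
```

On the reported stress values, the heuristic agrees with the paper's choice of two dimensions for all three groups.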
The level of intergroup report-space relative difference was measured by the Pearson product-moment correlations for the coordinates of the reports on the two dimensions in their respective MDS spaces. The six correlations were all above .95. Thus, the MDS report-space solutions did not reject H1, which predicted no relative report-wording effect. The solutions supported rejection of H3, since there were no perceptual differences due to reader knowledge.
Correlations between the report coordinates for our three groups and Libby's all-subjects' two-dimensional report spaces were also calculated.
The coordinates were highly correlated between the two studies on each dimension: all six Pearson product-moment correlations were greater than .80, with a mean of .89. Hence, the report spaces for the two studies were very similar.
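The intergroup and interstudy comparisons reduce to a Pearson product-moment correlation computed per dimension over the ten report coordinates. A self-contained sketch (the coordinate values below are invented for illustration, not the paper's):

```python
import math

def pearson(xs, ys):
    """Pearson product-moment correlation of two coordinate lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical dimension-one coordinates for two groups' report spaces:
dim1_group_a = [0.9, 0.4, -0.3, -1.0, 0.1]
dim1_group_b = [1.0, 0.5, -0.2, -1.1, 0.0]
print(round(pearson(dim1_group_a, dim1_group_b), 3))
```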
4.6 MANOVA RESULTS FOR H1 AND H3
Separate between-subject MANOVAs were employed as a second test of H1 and H3.3 A potential source of variance was that the three experimental groups had different perceptions of the 12 attributes. To test this possibility, we compared the two-dimensional MDS attribute spaces between groups. The means of the Pearson product-moment correlations for the coordinates of the attributes were .91 for the first dimension and .28 for the second dimension. Given this level of equivalence, we discounted intergroup attribute differences as significantly driving the MANOVA results.
Each MANOVA had a two-level, between-subject variable, either report wording or reader knowledge, and a ten-level, within-subject variable, the set of ten audit reports. One source of variance, the main effect for each set of reports, was not relevant to testing H1 and H3. We expected the subjects to rate the ten reports differently if the attributes were meaningful audit report descriptors, and this effect was significant for each MANOVA (p < .01).
In the MANOVA for H1, the report-wording main effect was not significant (F = 1.0, p = .48) but the report wording by set of reports interaction was significant (F = 1.4, p < .01).4 The ANOVA results showed only one significant main effect for wording, attribute two, where
3 Complete MANOVA and ANOVA results are available from the authors.
4 In the presentation of the results, the significant interaction effects are not explained because inspection indicated that they are complex. Data about the interactions are available from the authors.
the proposed wording conveyed less responsibility for the information in the financial statements to the auditor (p = .04). The report wording by set of reports interaction was significant (p < .05) for four attributes (#1, 5, 8, 11).

[FIG. 1 — Each group's two-dimensional report space; figure content is not recoverable from the scanned copy]
In the MANOVA for H3, the knowledge main effect (F = 2.4, p = .02) and the set of reports by knowledge interaction (F = 1.6, p < .01) were both significant. The ANOVA results showed that the knowledge main effects for attributes 2, 6, and 10 were significant (p < .01, p = .02, and p < .01, respectively). The more knowledgeable subjects rated the auditor less responsible for the information in the financial statements, rated the auditor as using more discretion in interpreting the results of the audit process, and saw less need for additional information before making their investment decisions. The knowledge by set of reports interactions were significant (p < .05) for three attributes (#1, 5, 9).
4.7 SUMMARY OF TEST RESULTS FOR H1 AND H3
In this experiment, we examined for relative differences in perceived messages as a function of report wording, H1. There was convergence across the two statistical methods indicating no relative difference in report meanings within each set of reports as a function of the ASB's proposed wording changes. H1 was not rejected. There was no method convergence for H3, the knowledge hypothesis. The MDS results indicated no difference in perception due to reader knowledge, whereas the MANOVA results indicated that there was a difference. We made no conclusion about H3, which was tested further in the next experiment.
5. Absolute Difference Experiment (H2)
A powerful test of the absolute difference posited in H2 required a within-subject design with each subject being exposed to both existing and proposed wordings. This led to a 2 × 2 × 10 design with one two-level, between-subject variable, reader knowledge. The within-subject variables were report wording (two levels) and set of reports (ten levels).
5.1 TASK, PROCEDURE, SUBJECTS, AND ANALYSIS OF DATA
The 20 audit reports, 12 attributes, data set, instructions, two knowledge levels, and compensation were the same as in the previous experiment. Because the ASB's proposed changes were across-the-board, we were able to use fractionals which exposed each subject to both existing and proposed wordings, but never for the same report.5 From the original
5 The rules used to generate the 12 fractionals were: (a) to prevent demand effects, no subject received the same audit report in both existing and proposed wordings; (b) to guard further against demand effects, no subject received the same reason for departure, same wording (existing or proposed), in both qualified and disclaimer versions; (c) to balance the wordings within-subject (five existing and five proposed), the unqualified and disclaimer/unaudited reports were always of opposite wording. The fractional design was orthogonal in the context of its usage and balanced between knowledge levels.
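Rules (a)-(c) of n. 5 can be encoded as a validity check on a candidate fractional. The mapping below is a hypothetical assignment for illustration, not one of the study's 12 fractionals; rule (a) is implicit, since a mapping gives each report exactly one wording:

```python
# Reason-for-departure pairs (qualified, disclaimer) from table 2.
REASON_PAIRS = [(2, 3), (4, 5), (6, 7), (8, 9)]

def valid_fractional(wording):
    """Check one subject's assignment: wording maps report number (1-10)
    to 'existing' or 'proposed', per the rules described in n. 5."""
    # (b) the qualified and disclaimer reports sharing a reason for
    #     departure must not carry the same wording.
    for q, d in REASON_PAIRS:
        if wording[q] == wording[d]:
            return False
    # (c) the unqualified (1) and disclaimer/unaudited (10) reports are
    #     of opposite wording, yielding a five/five balance overall.
    if wording[1] == wording[10]:
        return False
    return sum(w == 'existing' for w in wording.values()) == 5

example = {1: 'existing', 2: 'proposed', 3: 'existing', 4: 'existing',
           5: 'proposed', 6: 'proposed', 7: 'existing', 8: 'existing',
           9: 'proposed', 10: 'proposed'}
print(valid_fractional(example))  # True
```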
populations we acquired 62 new subjects: 24 accountants and 38 advanced. There were only two groups in this experiment since report wording was a within-subject variable. The same multimethod approach was used, testing H2 and H3 with MDS and MANOVA, with some modification to accommodate the within-subject design. The results of these analyses and a summary of major findings are presented below.
5.2 MDS RESULTS FOR H2 AND H3
The two groups' report spaces of this experiment were fundamentally different from those in our previous experiment in that the present spaces contained 20 reports: ten with existing and ten with proposed wording. The test for an absolute difference in perception due to report wording, H2, was within-subject. The knowledge test, H3, was between-subject. A scree test for each group's solution indicated that the two-dimensional report spaces were most appropriate. The R2 values were .67 for accountants and .69 for advanced.
Each group's space was separately tested for two types of absolute wording differences: shift and dilation.6 A shift would occur when the existing and proposed sets of reports clustered about different locations in a space. A dilation would occur when the areas (sizes) of the two report wording clusters in an MDS space were different. For each MDS solution (accountants and advanced) the approach was to calculate the Euclidean distances between all pairs of reports. For each solution, this yielded 190 distances: 45 (10 × 9/2) within the existing wording set, 45 within the proposed wording set, and 100 (10 × 10) between the two sets of reports.
If both sets of reports in a space were clustered about the same location, indicating no shift, the mean of the 90 (45 + 45) within-wording distances and the mean of the 100 between-wording distances would not be significantly different. Alternatively, a shift would be revealed by the mean of the between-wording distances exceeding the mean of the within-wording distances. The results of separate t-tests, given in table 4, indicated significant shifts for both accountants (p < .05) and advanced (p < .0001) groups. Since this test of shift was based solely on interreport distances, the arbitrary centroid and orientation of each MDS solution could not have driven these results.
If both wording clusters in a report space were of the same size, indicating no dilation difference, the means of the two sets of 45 within-wording distances would not be significantly different. A significant dilation difference could reveal either wording cluster to be larger than the other. The results of separate t-tests (table 4) indicated significant (p < .0001) wording dilation effects for both accountants and advanced groups. In each group's MDS report space, the set of proposed reports
6 The reader can interpret these shift and dilation effects by examining the absolute difference experiment MANOVA results which follow in the next section.
had a significantly smaller dilation. These results could not be driven by arbitrary dilation of the MDS solutions because this experiment was a within-subject design with all 20 reports (both wordings) scaled in the same space. Thus, the two absolute difference tests confirmed H2, finding an absolute wording effect.

[TABLE 4 — Shift and dilation t-tests for the accountants and advanced groups; table content is not recoverable from the scanned copy]
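The shift and dilation tests reduce to bookkeeping over the 190 interreport distances plus a two-sample t statistic. A sketch follows; the pooled-variance t is an assumption, since the paper does not state which t-test variant was used, and the coordinates in the test data are invented:

```python
import itertools
import math

def euclid(p, q):
    """Euclidean distance between two coordinate tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def shift_and_dilation_distances(existing, proposed):
    """Split the 190 interreport distances of a 20-report space into the
    two 45-distance within-wording sets (dilation test) and the 100
    between-wording distances (shift test, against the pooled 90)."""
    within_e = [euclid(a, b) for a, b in itertools.combinations(existing, 2)]
    within_p = [euclid(a, b) for a, b in itertools.combinations(proposed, 2)]
    between = [euclid(a, b) for a in existing for b in proposed]
    return within_e, within_p, between

def t_statistic(xs, ys):
    """Pooled two-sample t statistic for comparing two distance sets."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    ssx = sum((x - mx) ** 2 for x in xs)
    ssy = sum((y - my) ** 2 for y in ys)
    sp2 = (ssx + ssy) / (nx + ny - 2)
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))
```

A shift shows up as the between-wording mean exceeding the pooled within-wording mean; a dilation as one within-wording set having systematically larger distances than the other.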
The knowledge effect was tested by between-group correlations of the rotated report space coordinates. The Pearson product-moment correlations were .34 (p > .10) for the first dimension and .09 (p > .10) for the second dimension, indicating a perceived message difference caused by reader knowledge. This result did not reject H3, finding a difference in perception due to reader knowledge.
5.3 MANOVA RESULTS FOR H2 AND H3
Here, a single MANOVA was used to test H2 and H3.7 First, we examined whether the two subject groups had similar attribute spaces to discount the possibility that intergroup attribute model differences were driving the MANOVA. A separate scree test for each group indicated that the two-dimensional solutions were most appropriate. The R2 values were .53 for accountants and .55 for advanced. Pearson product-moment correlations for the coordinates of the attributes in the two spaces were satisfactorily high (dimension one, r = .77; dimension two, r = .78), and we concluded the two groups' attribute spaces were similar enough so as not to drive materially the MANOVA results. The effects for report wording, knowledge, report wording by knowledge interaction, and knowledge by set of reports interaction were estimable and relevant to testing H2 and H3.8 The set of reports by report-wording interaction and the knowledge by set of reports by report-wording interaction were not estimable because of the use of a half-fractional.
In the MANOVA for H2, the report-wording main effect was significant (F = 3.1, p < .01). Inspection of the ANOVAs indicated that, compared to the existing reports, subjects rated the proposed reports as attributing more responsibility to management for the information in the financial statements (#2; p < .01) and indicating less comparability of the current financial statements to last year's statements (#8; p < .01).
In the MANOVA for H3, the knowledge effect was significant (F = 5.5, p < .01) as well as the knowledge by set of reports interaction (F = 1.9, p < .01). Examination of the 12 ANOVAs for knowledge main effects showed three were significant (#1, 2, 6; p < .01). The more knowledgeable subjects rated management as influencing the auditor less in the choice of which audit report to render, rated management as more responsible for the information in the financial statements, and rated the auditor as using greater discretion in interpreting the results of the audit process.
7See n. 3.
8See n. 4.
There were also five significant interactions involving knowledge (#1, 2, 7, 10, 12; p ≤ .05).
5.4 SUMMARY OF TEST RESULTS FOR H2 AND H3
In this experiment, we tested mainly for absolute differences in perceived messages as a function of report wording, H2. There was convergent evidence across the two statistical methods indicating an absolute difference in perception due to wording. H2 was not rejected. We also found method convergence for H3 and did not reject that hypothesis, finding a difference in perception due to reader knowledge.
6. Conclusion
Our main purpose in this study was to test for differences in perceived audit report messages as a function of audit report wording and reader knowledge. An exposure draft issued in 1980 by the Auditing Standards Board suggested wording changes in order to produce an absolute difference in the perceived messages of audit reports. Our empirical analysis indicated that the proposed wording changes did produce an absolute, but not relative, difference in our subjects' perceptions between the two sets of alternatively worded reports. More specifically, we found that our subjects rated the proposed reports as attributing more responsibility for the information in the financial statements to management, and less to the auditor. This was the major goal of the ASB's exposure draft.
These results occurred with wording changes which affected all types of audit reports. While outside the scope of this paper, our results do not preclude the possibility that wording changes which are not across-the-board could cause relative differences in perception within a set of reports.
This is a topic for future research.
A second purpose of our study was to determine whether subject groups having different levels of audit report knowledge would perceive the messages differently. Our subjects were drawn from two levels of training different from Libby's [1979] CPAs and bankers. Nevertheless, in the MDS analysis of the relative difference experiment, where we correlated the results of the two studies, the audit report spaces showed a high level of interstudy statistical similarity. Our evidence also suggested that, in terms of the perceived relative relationships within a set of audit reports, readers of different audit report knowledge levels did not perceive significantly different messages. However, the more detailed tests by ANOVA of the data from both experiments indicated that there was an audit report knowledge effect on perception, with the more knowledgeable subjects perceiving the auditor as less responsible for the information in the financial statements. Such evidence suggests that the AICPA [1982]
may be able to educate people about the intended message of audit reports through its educational programs.
REFERENCES
AMERICAN INSTITUTE OF CERTIFIED PUBLIC ACCOUNTANTS. Statement on Auditing Standards No. 1. New York: AICPA, 1972.
———. Statement on Auditing Standards No. 2. New York: AICPA, 1974.
———. Commission on Auditors' Responsibilities: Report, Conclusions, and Recommendations. New York: AICPA, 1978.
———. Proposed Statement on Auditing Standards: The Auditor's Standard Report. Exposure Draft. New York: AICPA, September 10, 1980.
———. A User's Guide to Understanding Audits and Auditor Reports. New York: AICPA, 1982.
BELKAOUI, A. "The Interprofessional Linguistic Communication of Accounting Concepts: An Experiment in Sociolinguistics." Journal of Accounting Research (Autumn 1980): 362-74.
BROWN, P. "A Descriptive Analysis of Select Input Bases of the Financial Accounting Standards Board." Journal of Accounting Research (Spring 1981): 232-46.
CARROLL, J., AND J. CHANG. "Analysis of Individual Differences in Multidimensional Scaling via an N-way Generalization of 'Eckart-Young' Decomposition." Psychometrika (September 1970): 283-319.
LIBBY, R. "Bankers' and Auditors' Perceptions of the Message Communicated by the Audit Report." Journal of Accounting Research (Spring 1979): 99-112.
ROCKNESS, H., AND L. NIKOLAI. "An Assessment of APB Voting Patterns." Journal of Accounting Research (Spring 1977): 154-67.
SOUPAC. SOUPAC Program Descriptions. Urbana: Computing Services Office, University of Illinois, 1973.
TAKANE, Y., F. YOUNG, AND J. DE LEEUW. "Nonmetric Individual Differences Multidimensional Scaling: An Alternating Least Squares Method with Optimal Scaling Features." Psychometrika (1977): 7-76.
YOUNG, F., AND R. LEWYCKYJ. ALSCAL-4: User's Guide. Chapel Hill, N.C.: Data Analysis and Theory Associates, 1979.