Toward an empirical understanding of computer simulation implementation success
Roger McHaney^a,*, Timothy Paul Cronan^b
^a Department of Management, College of Business Administration, Kansas State University, Manhattan, KS 66506, USA
^b Computer Information Systems and Quantitative Analysis, College of Business Administration, University of Arkansas, Fayetteville, AR 72701, USA
* Corresponding author. Tel.: +1-785-532-7479; fax: +1-785-532-7024. E-mail addresses: mchaney@ksu.edu (R. McHaney), cronan@comp.uark.edu (T.P. Cronan).
Received 8 February 1999; accepted 30 July 1999
Abstract
This study details the empirical development of a seven-factor contingency model of simulation success. The seven factors are software characteristics, operational cost characteristics, software environment characteristics, simulation software output characteristics, organizational support characteristics, initial investment cost characteristics, and task characteristics. This exploratory model is derived from salient factors hypothesized by researchers and practitioners in the simulation and IS literature, based on the premise that computer simulation can be classified as a representational DSS. Additional analysis includes use of a regression model to rank the strength of these factors in their relationship to end-user computing satisfaction. The article concludes with a discussion of how the developed model can serve as a guideline for developers of simulation software and support those seeking to use computer simulation in organizational decision making settings.
© 2000 Elsevier Science B.V. All rights reserved.
Keywords: Computer simulation; Decision support systems; End-user computing satisfaction; Information systems success
1. Introduction
Computerized IS have been used successfully to help business leaders manage the vast quantity of information available to them and to apply this information in ways that lead to competitive advantage and economic gain. In order to remain competitive, organizations can no longer afford to conduct decision making using nonscientific methods. This climate has fostered the search for better technologies, tools, and methodologies to aid in the decision making process.
Computer simulation allows decision makers to pose 'what-if' questions and learn more about the dynamics of a system. Within this context, it becomes a decision support tool [33].
While implementations of computer simulation
have been reported with varying levels of success
[16,20,26,48,60] and failure [7,15,30], underlying
factors relating to these outcomes have not been
investigated empirically.
Increased recognition of computer simulation's
value has stimulated demand and encouraged a wide
variety of software vendors to enter the market
[50,56]. In general, this situation has been beneficial
to the business decision maker, but the changing
corporate environment has resulted in a need to better
understand and improve the process of developing,
selecting, and implementing computer simulation
technology [32,37,55].
Some reported failures have led researchers to develop methodologies for the analysis of simulation software implementation [3,18,46,61]. During implementation, requirements must be paralleled to those of existing software and procedures. Like general purpose software, computer simulation tools are designed to meet the perceived needs of the eventual decision maker. However, there are many types of users, and this complicates the task. The inability of users to develop appropriate requirements has been recognized in the literature as being "...perhaps, the greatest problem in the military-analytical [simulation] community" [43].
This study examines computer simulation in its role as a decision support tool. The computer simulation literature is used to discover recurrent factors believed to influence success. These factors are organized into a contingency framework that is used to extend a decision support systems (DSS) success model developed by Guimaraes, Igbaria and Lu [24]. Empirical data are collected and used to confirm this model with factor analysis. Finally, the model is tested via regression against success measures.
Computer simulation implementation literature is in its infancy at best. Most published recommendations can be classified as speculative [21,22,25,34,44,45,47] and are unsupported empirically. While researchers have no specific computer simulation research framework upon which to build, this study supports a belief that computer simulation is a decision support tool [1,17,23,28,29,35,53].
2. The study
The objective of the first phase of this study was to examine the speculative literature to gather and organize characteristics that are believed to influence simulation implementation success. A contingency framework was adopted from the DSS literature and used as a general means of organizing data. According to Tait and Vessey [57], "Contingency theory itself has no content, it is merely a framework for organizing knowledge in a given area." This, together with its use in studies of IS success [5,12,19,49,51] and DSS research [40], makes contingency theory suitable for an exploratory investigation.
2.1. Phase 1: contingency model of implementation success
Using Alter's classification of simulation as a representational decision support system, the success contingency themes could be expected to be very similar to the general DSS success factors identified by information system researchers [24]: decision maker characteristics, task characteristics, decision support system characteristics, and implementation characteristics [36].
The implementation factors and environmental characteristics were categorized into five areas: simulation analyst characteristics, task characteristics, simulation product characteristics [4,39,44], organizational characteristics [38], and simulation software provider characteristics.
These recurrent themes were expanded through a breakdown of the simulation product category into eight sub-groupings: input, processing, statistical, output, software environment, animation capability, costs, and level of product development. This breakdown and the individual variables categorized within each grouping are shown in Fig. 1. Table 1 defines each variable.
These contingency themes very closely match the general DSS factors of Guimaraes, Igbaria, and Lu. Dubin [14] calls this approach invention by extension. Other than nomenclature, only a single real difference exists: the implementation characteristics in the integrated model of DSS success are replaced by organizational and software provider characteristics. While computer simulations are often developed in-house, the simulation language or product is generally acquired from a vendor. Fig. 2 illustrates both the general DSS success model and the proposed computer simulation implementation success model.
3. Research methodology
3.1. Validity of dependent variable
End-user computing satisfaction (EUCS) served as the primary dependent variable in this study. One hundred and nineteen respondents completed this portion of the survey. The validity of this instrument, or the extent to which it measures what it is intended to measure, is assessed in two ways [54].
3.1.1. Construct validity
Construct validity is the degree to which the measures chosen are either true constructs describing the event of interest or merely artifacts of the methodology itself [9]. Correlation analysis and confirmatory factor analysis can be used to assess construct validity. First, a factor analysis was performed to confirm that the data reflected the psychometric properties of the EUCS instrument. Doll, Xia, and Torkzadeh [13] recommend using a second-order factor structure. This recommendation was verified for use with representational decision support system applications and previously validated [41]. The second-order structure is a single factor, called End-User Computing Satisfaction. The first-order structure consists of five factors: accuracy, content, ease of use, format, and timeliness. Five-position Likert-type scales were used to score the responses.
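To make the scoring concrete, the following minimal sketch computes the five first-order factor scores and a second-order EUCS composite from Likert items. The item names, item-to-factor mapping, and responses are illustrative assumptions, not the authors' actual survey coding.

```python
# Minimal sketch of scoring the EUCS instrument's two-level structure.
import pandas as pd

# Hypothetical responses: one row per respondent, 5-point Likert items.
responses = pd.DataFrame({
    "content_1": [4, 5], "content_2": [4, 4],
    "accuracy_1": [5, 3], "accuracy_2": [4, 4],
    "format_1": [3, 5], "format_2": [4, 4],
    "ease_1": [5, 5], "ease_2": [4, 3],
    "timeliness_1": [4, 4], "timeliness_2": [3, 5],
})

factors = ["content", "accuracy", "format", "ease", "timeliness"]

# First-order scores: mean of each factor's items.
scores = pd.DataFrame({
    f: responses.filter(like=f).mean(axis=1) for f in factors
})

# Second-order score: a single EUCS composite across the five factors.
scores["EUCS"] = scores[factors].mean(axis=1)
print(scores)
```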
3.1.2. Convergent validity
Respondents were asked to fill out one of two survey forms based on their perception as to the success of the reported simulation project. All converging measure correlations were significant at the 0.0001 level (Q1 at 0.80, Q2 at 0.66, and the survey selection process at 0.43).
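The convergent-validity check amounts to correlating the EUCS total with each alternative success measure. A hedged sketch of that computation follows; all scores are synthetic placeholders rather than the study's data.

```python
# Sketch of the convergent-validity correlations: EUCS total versus two
# single-item measures. Scores below are synthetic placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
eucs_total = rng.normal(46, 7, size=119)              # composite EUCS
q1 = 0.8 * eucs_total + rng.normal(0, 4, size=119)    # single-item success
q2 = 0.6 * eucs_total + rng.normal(0, 6, size=119)    # single-item satisfaction

for name, measure in (("Q1", q1), ("Q2", q2)):
    r, p = pearsonr(eucs_total, measure)
    print(f"{name}: r = {r:.2f}, p = {p:.6f}")
```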
3.2. Measurement of the dependent variable
In this research, an abstract concept is being measured as the dependent variable: simulation implementation success.
Table 1
Variable definitions for Fig. 1
I. Simulation analyst characteristics
an1: Background of developer
an2: Knowledge of simulation methodology
an3: Simulation education of developer
II. Task characteristics
t1: Intended use of simulation
t2: Project/system complexity
t3: Level of simulation detail
t4: Use of a structured approach in model development
III. Simulation software product characteristics
A. Input features
i1: Interface to other software
i2: Input data analysis capability
i3: Portability
i4: Syntax
i5: Modeling flexibility/problem capability
i6: Modeling conciseness/programming style
i7: Structural modularity/macros
i8: Specialty application modules
i9: Attributes for entities
i10: Global variables
B. Processing features
p1: Execution speed
p2: Maximum model size
p3: Hardware platform
C. Statistical features
s1: Random-number generators
s2: Random deviate generators
s3: Standard distributions
s4: Observed distributions
s5: Independent replications
s6: Warm-up period/reset
s7: Confidence intervals
D. Output features
o1: Standard reports
o2: Customized reports
o3: Business graphics
o4: File creation
o5: Trace capabilities
o6: Summarization of multiple model runs
o7: Output data analysis
o8: Individual model output observations
o9: High resolution graphics displays
E. Simulation software environment features
e1: User interface
e2: Ease of learning
e3: On-line help
e4: On-line tutorial
e5: Interactive debugging
e6: Degree of interaction
F. Animation capability
a1: Animation ease of development
a2: Quality of picture
a3: Smoothness of movement
a4: Portability for remote viewing
a5: User-defined icons
G. Costs
c1: Hardware cost
c2: Software cost
c3: Acquisition cost
c4: Operation cost
c5: Model modification costs
c6: Interface costs
c7: Maintenance costs
c8: Training costs
c9: Computer run time costs
H. Level of simulation product development
d1: Degree of product validation and verification
d2: Acceptance by experts
d3: Number of active users
d4: Database sophistication
IV. Organizational characteristics
or1: Mentorship
or2: Teamwork
or3: Corporate goals
or4: Future frequency of use
V. Simulation software provider characteristics
sp1: Reputation
sp2: Reliability
sp3: History
sp4: Stability
sp5: General customer support
sp6: Training
sp7: Technical support
A pilot test was conducted in earlier research, specific to the area of computer simulation [41]. In this test, EUCS was found to be a valid and reliable surrogate measure for computer simulation implementation success.
Additionally, McHaney, Hightower and White [42] conducted a test–retest study, which provided additional evidence that, when applied to users of computer simulation, the EUCS instrument remains internally consistent and stable.
3.3. Independent variables
Most of the independent variables were operationalized as simple questions with tangible answers. Since no instrument for measuring items associated with computer simulation implementation success exists, questions for measuring the independent variables were constructed and validated through pretesting and other conventional methods [2].
3.4. Sampling procedure
This study examined users of discrete event computer simulation. Five hundred and three potential simulation users were randomly selected from a pool consisting of the membership of the Society for Computer Simulation and recent contributors to the Winter Simulation Conference. Informational letters describing the study were mailed out, followed by a package with questionnaire forms. The first form asked for a report on a successful simulation project. The second form asked for a report on a less-than-successful simulation project. Respondents were told that successful simulations are efforts that produce accurate and useful information within time, budget, and schedule constraints.
Reasons for excluding responses included that the respondent did not use simulation, reported only an academic involvement with simulation, or could not be classified as a simulation user (i.e., acted solely as a programmer). In order to be included in the analysis, a respondent needed to report on a real-world simulation project. Fourteen additional packets were undeliverable.
Forty of the 125 usable responses were paired, meaning the respondent reported on both a successful and a less-than-successful simulation project. This meant that 105 different individuals/companies reported on a total of 125 different simulation projects. The net response rate of usable surveys from unique sources was 21.5 percent (105 usable respondents from the 489 delivered packets).
3.5. Representativeness of returns
Several demographic measures were taken to allow us to determine whether the respondents were unnaturally concentrated or whether effects confounding the measurement of success appear to exist. Among the areas investigated were occupation, years of experience, and software package (Table 2). Other concerns address the areas of animation use, animation/statistics importance, and use of external vendors. When these potential confounds were examined, their possible influences on the dependent variable, end-user computing satisfaction, were of primary interest. However, due to concerns related to methods variance, alternative measures of simulation implementation success were also examined, including a single-line item for satisfaction, a single-line item for success [11,12], and whether a successful or less-than-successful simulation project survey was selected.
Table 3 contains a summary of the significance of potential confounds for this study. None of the general demographics, such as occupation [8,59], years of experience, or animation/statistics importance, directly correlated with success. Other measures did significantly correlate with simulation implementation success: use of animation, use of an external vendor, and simulation software type. While animation and external vendor use items were not included in the factor analysis (due to low response rates), this analysis indicates that these items may play an important role in simulation implementation success. The significant relationship between software product and simulation implementation success was not unexpected. The various items forming the simulation software product characteristics factor vary according to the software being used. It is interesting to note that simulation software specialty packages earned a significantly higher end-user computing satisfaction score than did traditional simulation languages.
4. Results
A confirmatory factor analysis procedure was used to determine if the structure shown in Fig. 3 existed in the collected data.
Table 2
Years of experience Frequency Percent
0–5 27 21.6
6–10 40 32.0
11–15 25 20.0
15–20 18 14.4
More than 20 13 10.4
Not reporting 2 1.6
Project software Frequency Percent
GPSS/H 14 11.2
AUTOMATION MASTER 3 2.4
MATLAB 2 1.6
ADA 2 1.6
SES Workbench 2 1.6
Others^a 14 11.2
Not reporting 2 1.6
Prior to conducting the confirmatory factor analysis required to test this hypothesis, variables relating to animation and vendor characteristics had to be removed. Since retaining them would have left only fifty-four usable respondents in the data set, the six animation variables and the eleven software provider variables were removed. Hence, a four-factor model was tested.
4.1. Confirmatory analysis of independent variables
A confirmatory factor analysis was run using the SAS PROC CALIS program [52]. The fit of the data to the hypothesized model was assessed using several measures. The first was the χ² goodness-of-fit measure. Analysis indicated that the data collected did not fit the hypothesized factor structure (χ² = 4084.9, P > χ² = 0.0001). The goodness-of-fit and adjusted goodness-of-fit indexes were 0.42 and 0.38, respectively. Bentler and Bonett's non-normed index was 0.36, and Bollen's non-normed index was 0.39 [6]. Thus, the collected data did not confirm the hypothesized structure.
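The paper ran this analysis in SAS PROC CALIS. As a rough open-source analogue, the sketch below fits an abbreviated version of the hypothesized four-factor measurement model with the semopy package; the model syntax, shortened item lists, and placeholder data are illustrative assumptions, not the study's actual specification.

```python
# Rough open-source analogue of the PROC CALIS run, using semopy.
import numpy as np
import pandas as pd
import semopy

# Abbreviated four-factor measurement model in semopy/lavaan-style syntax.
desc = """
analyst =~ an1 + an2 + an3
task    =~ t1 + t2 + t3
product =~ i4 + s1 + o1 + e1
org     =~ or1 + or2 + or4
"""

# Placeholder data standing in for the 125 survey responses.
rng = np.random.default_rng(5)
cols = ["an1", "an2", "an3", "t1", "t2", "t3",
        "i4", "s1", "o1", "e1", "or1", "or2", "or4"]
data = pd.DataFrame(rng.normal(size=(125, len(cols))), columns=cols)

model = semopy.Model(desc)
model.fit(data)
print(semopy.calc_stats(model).T)  # chi-square, GFI, AGFI, and more
```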
An exploratory factor analysis was run to summarize the interrelationships among the variables and determine if reasonable factors would emerge. The correlation matrix was found to be significantly different from zero: Bartlett's sphericity test indicated a χ² value above 54,000 and a significance level of 0.00. Thus, the intercorrelation matrix contains enough common variance to make factor analysis viable. Kaiser's MSA was 0.76, adequate for an exploratory study. The number of desired factors was then selected. The initial starting point of seven factors was chosen using Horn's test [27], Velicer's MAP [58], and a scree plot.
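These preliminaries can be reproduced in outline as follows. The sketch assumes the factor_analyzer package for Bartlett's test, the KMO/MSA statistic, and a varimax-rotated solution, plus a small hand-rolled Horn's parallel analysis; the data matrix is a random placeholder for the 125 x 59 survey responses.

```python
# Sketch of the exploratory-analysis preliminaries and factor extraction.
import numpy as np
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

rng = np.random.default_rng(1)
X = rng.normal(size=(125, 59))  # placeholder for the survey data

chi2, p = calculate_bartlett_sphericity(X)
_, kmo_total = calculate_kmo(X)
print(f"Bartlett chi-square = {chi2:.1f} (p = {p:.4f}), MSA = {kmo_total:.2f}")

def horns_parallel_analysis(data, n_iter=100, seed=0):
    """Retain leading factors whose eigenvalues beat random-data averages."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    real = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand = np.mean([
        np.linalg.eigvalsh(
            np.corrcoef(rng.normal(size=(n, k)), rowvar=False))[::-1]
        for _ in range(n_iter)], axis=0)
    n_keep = 0
    for real_eig, rand_eig in zip(real, rand):
        if real_eig <= rand_eig:
            break
        n_keep += 1
    return n_keep

print("Horn's test suggests", horns_parallel_analysis(X), "factors")

# Rotated factor pattern, in the spirit of Tables 4 and 5.
fa = FactorAnalyzer(n_factors=7, rotation="varimax")
fa.fit(X)
print(fa.loadings_.round(2)[:5])  # variables x factors loading matrix
```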
The initial latent factor structure was investigated; Table 4 contains the result. As shown, many of the variables did not significantly contribute to the factor structure. Thus, an iterative process was followed: the least significant variable was identified using the factor loadings together with a correlation analysis and Cronbach's alpha, removed, and the analysis re-run. All variables not contributing to the factor structure were removed one at a time in this manner. The factor analysis procedure was repeated until a final exploratory model emerged, which is shown in Table 5.
Table 6 provides information concerning the correlation analysis used in the removal of various questions during the development of the final exploratory factor analysis. Cronbach's alpha was 0.85 [10].
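The iterative removal step can be sketched as a loop that recomputes Cronbach's alpha after dropping the weakest item. The drop criterion below (corrected item-total correlation) and the stopping rule (alpha stops improving) are assumptions standing in for the paper's combined use of loadings and the correlation analysis shown in Table 6.

```python
# Sketch of iterative item pruning guided by Cronbach's alpha.
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def prune_items(items, names):
    names = list(names)
    while items.shape[1] > 2:
        alpha = cronbach_alpha(items)
        total = items.sum(axis=1)
        # Corrected item-total correlation: item vs. total of the others.
        r = [np.corrcoef(items[:, j], total - items[:, j])[0, 1]
             for j in range(items.shape[1])]
        worst = int(np.argmin(r))
        trial = np.delete(items, worst, axis=1)
        if cronbach_alpha(trial) <= alpha:
            break                          # removing more no longer helps
        print(f"dropping {names[worst]} (item-total r = {r[worst]:.2f})")
        items = trial
        del names[worst]
    return items, names

# Example with hypothetical Likert responses (125 respondents, 10 items).
rng = np.random.default_rng(4)
X = rng.integers(1, 6, size=(125, 10)).astype(float)
kept, kept_names = prune_items(X, [f"q{i + 1}" for i in range(10)])
print(f"final alpha = {cronbach_alpha(kept):.2f} over {len(kept_names)} items")
```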
4.2. Discussion of the exploratory model
The exploratory factor analysis resulted in seven interpretable and consistent factors.
Table 3
Summary of significance of potential confounding variables^a
Variable name EUCS Single-line success Single-line satisfaction Success/failure survey
Occupation 0.456 0.886 0.156 0.992
Years of experience 0.431 0.824 0.345 0.984
Product type 0.082* 0.419 0.068* 0.930
Importance of statistics/animation 0.131 0.873 0.931 0.657
Animation use 0.047** 0.064* 0.054** 0.271
External vendor use 0.888 0.158 0.068* 0.216
^a *, significant at the 0.10 level; **, significant at the 0.05 level.
Table 4
Initial exploratory model
Rotated factor pattern
Factor 1 Factor 2 Factor 3 Factor 4 Factor 5 Factor 6 Factor 7
S3 0.77 – – – – – –
S5 0.75 – – – – – –
S2 0.73 – – – – – –
S1 0.72 – – – – – –
O1 0.68 – – – – – –
O5 0.66 – – – – – –
S4 0.66 – – – – – –
O6 0.66 – – – – – –
E5 0.61 – – – – – –
S7 0.58 – – – – – –
O7 0.58 – – 0.55 – – –
S6 0.58 – – – – – –
I9 0.55 – – – – 0.47 –
D1 0.54 – – – – – –
I5 0.47 – – – – – –
I6 0.45 – – – – – –
O8 0.41 – – – – – –
P2 – – – – – – –
C8 – 0.74 – – – – –
C4 – 0.74 – – – – –
C3 – 0.74 – – – – –
C2 – 0.71 – – – – –
C7 – 0.71 – – – – –
C5 – 0.68 – – – – –
C6 – 0.67 – – – – –
C9 – 0.62 – – – – –
C1 – 0.59 – – – – –
I2 – – – – – – –
P1 – -0.41 – – – – –
E3 – – 0.71 – – – –
E1 – – 0.69 – – – –
E2 – -0.44 0.65 – – – –
E4 – – 0.64 – – – –
I4 – – 0.49 – – – –
I1 – – 0.46 – – – –
I8 – – – – – – –
E6 – – -0.67 – – – –
O9 – – – 0.73 – – –
O3 – – – 0.61 – – –
O4 – – – 0.48 0.41 – –
AN3 – – – 0.44 – – –
I3 – – – 0.41 – – –
O2 – – – – – – –
OR1 – – – – 0.73 – –
OR2 – – – – 0.65 – –
OR4 – – – – 0.61 – –
D2 – – – – 0.57 – –
D3 – – – – 0.56 – –
P3 – – – – – – –
I10 – – – – – 0.60 –
Forty-three of the 59 variables initially factor analyzed were retained. The factors were:
Factor 1: software characteristics
Factor 2: operational cost characteristics
Factor 3: software environment characteristics
Factor 4: simulation software output characteristics
Factor 5: organizational support characteristics
Factor 6: initial investment cost characteristics
Factor 7: task characteristics
Most of these are closely related to the factors or subfactors originally hypothesized in the integrated model for computer simulation implementation success. The results of the factor analysis and the seven derived factors are displayed in Table 7. Table 8 compares those derived factors to the hypothesized ones.
When these seven factors were regressed against computer simulation implementation success (Table 9), a significant relationship was revealed. The regression model follows the form:

EUCS = β1 FACT1 + β2 FACT2 + ... + βN FACTN

Table 9 contains the results of the regression analysis. The calculated F-value is 28.6, which corresponds to a probability > F of 0.0001. Therefore, the regression is significant, indicating that a relationship between the factors developed from the simulation data and EUCS is present. The R² statistic indicates that approximately 66.5 percent of the variance in EUCS can be accounted for by the factors developed in the exploratory factor analysis.
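In outline, the Table 9 regression is an ordinary least squares fit of EUCS on the seven factor scores. The sketch below uses statsmodels with synthetic data generated from Table 9's reported coefficients; it is illustrative only, not the authors' SAS run.

```python
# Sketch of the Table 9 regression: OLS of EUCS on seven factor scores.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
names = ["software", "operational_cost", "environment", "output",
         "org_support", "initial_investment", "task"]
betas = np.array([1.7, -3.7, 2.6, 3.2, 2.6, -1.7, 0.0])   # from Table 9

factor_scores = rng.normal(size=(109, 7))                 # standardized scores
eucs = 46.5 + factor_scores @ betas + rng.normal(0, 4.9, size=109)

X = sm.add_constant(factor_scores)
fit = sm.OLS(eucs, X).fit()
print(fit.summary(xname=["intercept"] + names))           # F-test, R^2, betas
```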
The findings of the exploratory study indicate that the most important features related to computer simulation implementation success are those associated with operational costs. Most respondents considered operational costs to be important to success. The respondents' assessment of success involved not only model development itself, but also the cost of running and maintaining the model. In order for a simulation implementation to be successful, it had to be within cost expectations.
The second strongest factor, simulation software output characteristics, shows the impact on decision makers of the information and knowledge gained in the simulation. Included are concerns about output analysis, statistical summaries, and output reporting. Without the ability to move the knowledge gained in the simulation into some usable form, the simulation is merely an exercise. This emphasizes the goal-oriented nature of simulation projects. The result, rather than the precise method, appears to be very important to those using the tool.
The next significant factor, organizational support characteristics, groups the team aspects of simulation use and its acceptance within the corporation. Here are loadings for mentorship, teamwork, number of active users, and related items. Simulation is more than programming. It requires the participation of systems experts, management, and decision makers.
Table 4 (Continued)
Rotated factor pattern
Factor 1 Factor 2 Factor 3 Factor 4 Factor 5 Factor 6 Factor 7
AN1 – – – – – 0.57 –
T4 – – – – – 0.44 –
AN2 – – – – – – –
T1 – – – – – – –
T2 – – – – – – 0.76
T3 – – – – – – 0.71
D4 – – – – – – 0.60
OR3 – – – – – – 0.42
Without proper team structure and management, the modeling effort could be in serious danger. This interaction was recognized by the model users.
The next factor, simulation environment characteristics, relates to the modeling process: the ease of using the modeling tool, language syntax, how much of a modeler's time is required, and so on. The proliferation of easy-to-use software in all areas has apparently affected the expectations of simulation software users.
Table 5
Final exploratory model
Rotated factor pattern
Factor 1 Factor 2 Factor 3 Factor 4 Factor 5 Factor 6 Factor 7
S3 0.79 – – – – – –
S5 0.78 – – – – – –
I9 0.70 – – – – – –
S2 0.70 – – – – – –
S4 0.67 – – – – – –
E5 0.66 – – – – – –
O5 0.65 – – – – – –
S1 0.65 – – – – – –
O6 0.62 – – – – – –
S6 0.61 – – – – – –
I10 0.56 – – – – – –
D1 0.55 – – – – – –
C6 – 0.79 – – – – –
C5 – 0.79 – – – – –
C8 – 0.79 – – – – –
C4 – 0.76 – – – – –
C7 – 0.68 – – – – –
C9 – 0.54 – – – – –
E3 – – 0.76 – – – –
E1 – – 0.72 – – – –
E2 – – 0.71 – – – –
E4 – – 0.68 – – – –
I4 – – 0.53 – – – –
I1 – – 0.49 – – – –
E6 – – -0.70 – – – –
O7 – – – 0.73 – – –
O9 – – – 0.65 – – –
O8 – – – 0.63 – – –
O3 – – – 0.62 – – –
O1 – – – 0.56 – – –
S7 – – – 0.51 – – –
D3 – – – – 0.72 – –
OR4 – – – – 0.70 – –
OR1 – – – – 0.66 – –
D2 – – – – 0.62 – –
OR2 – – – – 0.56 – –
O4 – – – – 0.54 – –
C3 – – – – – 0.84 –
C2 – – – – – 0.82 –
C1 – – – – – 0.65 –
T2 – – – – – – 0.86
T3 – – – – – – 0.78
It was reported that significantly higher success scores were given to simulators (specialty simulation packages) than to traditional languages used for simulation.
The fifth strongest relationship was with the software characteristics factor.
Table 6
Correlation analysis
Cronbach coefficient alpha^a
Deleted variable Raw variables Standardized variables
Correlation with total Alpha Correlation with total Alpha
I1 0.14 0.85 0.12 0.84
S6 0.49 0.84 0.48 0.83
O5 0.58 0.84 0.58 0.83
C1 -0.15 0.88 -0.15 0.85
C2 0.04 0.86 0.05 0.84
C3 -0.05 0.86 -0.04 0.85
C4 -0.16 0.86 -0.15 0.85
C5 -0.19 0.86 -0.18 0.85
C6 -0.19 0.86 -0.18 0.85
C7 -0.17 0.86 -0.17 0.85
C8 -0.09 0.86 -0.08 0.85
C9 -0.17 0.86 -0.17 0.85
D1 0.48 0.85 0.47 0.83
D2 0.48 0.85 0.46 0.83
D3 0.35 0.85 0.37 0.84
D4 0.15 0.85 0.16 0.84
E1 0.56 0.84 0.53 0.83
E2 0.28 0.85 0.25 0.84
E3 0.51 0.84 0.49 0.83
E4 0.42 0.85 0.41 0.84
E5 0.58 0.84 0.58 0.83
E6 -0.09 0.86 -0.07 0.85
I4 0.47 0.85 0.45 0.83
I9 0.47 0.85 0.47 0.83
I10 0.47 0.85 0.47 0.83
O1 0.54 0.84 0.53 0.83
O3 0.50 0.84 0.48 0.83
O4 0.52 0.84 0.51 0.83
O6 0.67 0.84 0.66 0.83
O7 0.65 0.84 0.64 0.83
O8 0.35 0.85 0.34 0.84
O9 0.44 0.85 0.43 0.83
OR1 0.24 0.85 0.25 0.84
OR2 0.21 0.85 0.22 0.84
OR4 0.36 0.85 0.36 0.84
S1 0.57 0.84 0.56 0.83
S2 0.48 0.85 0.48 0.83
S3 0.66 0.84 0.65 0.83
S4 0.53 0.84 0.52 0.83
S5 0.57 0.84 0.56 0.83
S7 0.51 0.84 0.52 0.83
T2 0.11 0.85 0.11 0.84
T3 0.25 0.85 0.27 0.84
Table 7
Derived factor structure^a
Loading Name Description
Factor one: software characteristics
0.79 s3 Standard distributions
0.78 s5 Independent replications
0.70 i9 Attributes for entities
0.70 s2 Random deviate generators
0.67 s4 Observed distributions
0.66 e5 Interactive debugging
0.65 o5 Trace capabilities
0.65 s1 Random-number generators
0.62 o6 Summarization of multiple model runs
0.61 s6 Warm-up period/reset
0.56 i10 Global variables
0.55 d1 Degree of product validation and verification
Factor two: operational cost characteristics
0.79 c6 Interface costs
0.79 c5 Model modification costs
0.79 c8 Training costs
0.76 c4 Operation cost
0.68 c7 Maintenance costs
0.54 c9 Computer run time costs
Factor three: software environment characteristics
0.76 e3 On-line help
0.72 e1 User interface
0.71 e2 Ease of learning
0.68 e4 On-line tutorial
0.53 i4 Syntax
0.49 i1 Interface to other software
-0.70 e6 Degree of interaction
Factor four: simulation software output characteristics
0.73 o7 Output data analysis
0.65 o9 High resolution graphics displays
0.63 o8 Individual model output observations
0.62 o3 Business graphics
0.56 o1 Standard reports
0.51 s7 Confidence intervals
Factor five: organizational support characteristics
0.72 d3 Number of active users
0.70 or4 Future frequency of use
0.66 or1 Mentorship
0.62 d2 Acceptance by experts
0.56 or2 Teamwork
0.54 o4 File creation
Factor six: initial investment costs characteristics
0.84 c3 Acquisition cost
0.82 c2 Software cost
0.65 c1 Hardware cost
Factor seven: task characteristics
0.86 t2 Project/system complexity
0.78 t3 Level of simulation detail
0.60 d4 Database sophistication
Table 8
Comparison of exploratory factor composition with hypothesized factor structure
Hypothesized factor composition Exploratory factor composition
Simulation software characteristics factors
Simulation software product characteristics statistical features Software characteristics
s1: Random-number generators s1: Random-number generators
s2: Random deviate generators s2: Random deviate generators
s3: Standard distributions s3: Standard distributions
s4: Observed distributions s4: Observed distributions
s5: Independent replications s5: Independent replications
s6: Warm-up period/reset s6: Warm-up period/reset
s7: Confidence intervals o5: Trace capabilities
o6: Summarization of multiple model runs i9: Attributes for entities
i10: Global variables e5: Interactive debugging
d1: Degree of product validation and verification
Simulation cost characteristics factors
Costs Initial investment costs characteristics
c1: Hardware cost c1: Hardware cost
c2: Software cost c2: Software cost
c3: Acquisition cost c3: Acquisition cost
c4: Operation cost Operational cost characteristics
c5: Model modification costs c4: Operation cost
c6: Interface costs c5: Model modification costs
c7: Maintenance costs c6: Interface costs
c8: Training costs c7: Maintenance costs
c9: Computer run time costs c8: Training costs
c9: Computer run time costs
Software environment characteristics factor
Simulation software environment features Software environment characteristics
e1: User interface e1: User interface
e2: Ease of learning e2: Ease of learning
e3: On-line help e3: On-line help
e4: On-line tutorial e4: On-line tutorial
e5: Interactive debugging e6: Degree of interaction
e6: Degree of interaction i1: Interface to other software
i4: Syntax
Simulation software output characteristics factor
Output features Simulation software output characteristics
o1: Standard reports o1: Standard reports
o2: Customized reports o3: Business graphics
o3: Business graphics o7: Output data analysis
o4: File creation o8: Individual model output observations
o5: Trace capabilities o9: High resolution graphics displays
o6: Summarization of multiple runs s7: Confidence intervals
o7: Output data analysis
o8: Individual model observations
o9: High resolution graphics displays
Organizational support characteristics factor
Organizational characteristics Organizational support characteristics
It can be concluded that specific modeling software yields a higher degree of perceived success. Various questions loading on this factor include the underlying statistical sophistication, debugging facilities, and other software features.
Initial investment costs is a factor that again emphasizes the cost aspect of simulation implementation success. If the software costs too much, the project may not be considered successful, even if the simulation is used to make a good decision. This factor may have been weakened by a memory effect: the initial investment may have been made some time ago, which could explain the relative strength of operational costs compared with investment costs.
The final and weakest factor (not significant) was task characteristics. Although enough common variance was present to form a task characteristics factor, the beta coefficient for this factor was not significant in the regression analysis. This indicates that though task characteristics may be related to simulation implementation success, they do not appear to contribute enough to be considered significant. Possibly, the questionnaire items were worded poorly or interpreted in several different ways. Alternatively, the construct may simply not be important to many of the respondents. Perhaps the difficulty of the task or the complexity of the decision to be made simply is not a factor relating to simulation implementation success. Fig. 4 illustrates all seven factors.
Table 8 (Continued)
Hypothesized factor composition Exploratory factor composition
or2: Teamwork or2: Teamwork
or3: Corporate goals or4: Future frequency of use
or4: Future frequency of use d2: Acceptance by experts
d3: Number of active users o4: File creation
Task characteristics factor
Task characteristics Task characteristics
t1: Intended use of simulation t2: Project/system complexity
t2: Project/system complexity t3: Level of simulation detail
t3: Level of simulation detail d4: Database sophistication
t4: Use of a structured approach
Table 9
Regression based on exploratory factor analysis
Analysis of variance
Source DF Sum of squares Mean square F-value P > F
Model 7 4746.0 678.0 28.6 0.0001
Error 101 2393.2 23.7
C total 108 7139.2
Parameter estimates
Variable DF Parameter estimate Standard error T for H0: Parameter = 0 P > |T|
INTERCEP 1 46.5 0.47 99.7 0.0001
Software 1 1.7 0.47 3.7 0.0003
Operational cost 1 -3.7 0.47 -7.9 0.0001
Software environment 1 2.6 0.47 5.6 0.0001
Software output 1 3.2 0.46 6.9 0.0001
Organizational support 1 2.6 0.46 5.7 0.0001
Initial investment 1 -1.7 0.47 -3.6 0.0005
Task 1 0.0 0.46 0.1 0.9501
5. Conclusion
5.1. Practical implications
Computer simulation is a primary decision making
aid in the areas of operations management, operations
research, industrial engineering and management
science. Because of its promise, a high degree of
commercialization of this technology is taking place.
The fact that animation has a significant relationship with simulation implementation success indicates that investment in an animation system might improve the value of simulation as a decision support tool.
The results of this study do not indicate that simulation implementation success is a random occurrence, nor that it is manifested in the same way in every situation. Rather, it relates to a variety of factors ranging from software to cost to organizational support.
5.2. Limitations of the study
The exploratory nature of this study results in some obvious limitations. First, the questions used to develop the independent variable side of the contingency model need further refinement. As with any exploration into a previously unstudied area, this was a first attempt; some of the factors that did not load may have failed as a result of questionnaire construction rather than a lack of factor importance.
An important limitation to recognize is the nature of the developed framework. Since the research approach is based on contingency theory and is exploratory in nature, no causality can be assumed.
The assumption that computer simulation is a representational model DSS is another potential limitation in terms of theoretical development. If this assumption were not made, it would become difficult to justify the use of the information systems literature. The sample size may also be a limitation.
6. Summary
This study identified a framework upon which computer simulation implementation success can be studied empirically. This framework was used as a basis for exploratory empirical research into the factors correlated with computer simulation implementation success. A mail survey was conducted to measure recurrent factors associated with success or failure. An exploratory model based on a set of hypothesized factors was derived. This analysis indicated the presence of seven factors in the data. These, although not matching the hypothesized model exactly, did relate very closely, providing the first step toward an understanding of computer simulation implementation success.
References
[1] S. Alter, A taxonomy of decision support systems, Sloan Management Review, Fall, 1977, pp. 37–56.
[2] E. Babbie, Survey Research Methods, Wadsworth, Belmont, CA, 1990.
[3] O. Balci, Guidelines for successful simulation studies, in: Proceedings of the 1990 Winter Simulation Conference, Society for Computer Simulation, San Diego, CA, 1990, pp. 25–32.
[4] J. Banks, Selecting simulation software, in: Proceedings of the 1991 Winter Simulation Conference, Society for Computer Simulation, San Diego, CA, 1991, pp. 15–20.
[5] S. Blili, L. Raymond, S. Rivard, Impact of task uncertainty, user involvement, and competence on the success of end-user computing, Information & Management 33(3), 1998, pp. 137–153.
[6] K.A. Bollen, Structural Equations with Latent Variables, Wiley, New York, 1989.
[7] J.S. Carson, Convincing users of a model's validity is challenging aspect of modeler's job, Industrial Engineering, June, 1986, pp. 74–75.
[8] D. Christy, H. Watson, The application of simulation: a survey of industry practice, Interfaces 13(5), 1983, pp. 47–52.
[9] T.D. Cook, D.T. Campbell, Quasi-Experimentation: Design and Analysis Issues in Field Settings, Houghton Mifflin, Boston, MA, 1979.
[10] L.J. Cronbach, Coefficient alpha and the internal consistency of tests, Psychometrika 16, 1951, pp. 297–334.
[11] F.D. Davis, Perceived usefulness, perceived ease of use, and user acceptance of information technology, MIS Quarterly, September, 1989, pp. 319–340.
[12] W.J. Doll, G. Torkzadeh, The measurement of end-user computing satisfaction, MIS Quarterly, June, 1988, pp. 259–274.
[13] W.J. Doll, W. Xia, G. Torkzadeh, A confirmatory factor analysis of the end-user computing satisfaction instrument, MIS Quarterly, June, 1994, pp. 453–461.
[14] R. Dubin, Theory Building, Free Press, New York, 1978.
[15] T. Duff, Avoid the pitfalls of simulation, Automation, November, 1991, pp. 32–36.
[16] R. Farina, G.A. Kochenberger, T. Obremski, The computer runs the Bolder Boulder: a simulation of a major running race, Interfaces 19(2), 1989, pp. 48–55.
[17] S. Floyd, C. Turner, K. Davis, Model-based decision support systems: an effective implementation framework, Computers in Operations Research 15(5), 1989, pp. 481–491.
[18] C.A. Fossett, D. Harrison, H. Weintrob, S.I. Gass, An assessment procedure for simulation models: a case study, Operations Research 39(5), 1991, pp. 710–723.
[19] C.R. Franz, D. Robey, Organizational context, user involvement, and the usefulness of information systems, Decision Sciences 17(2), 1986, pp. 329–356.
[20] T.J. Gogg, C. Sands, Hughes Aircraft designs automated storeroom system through simulation application, Industrial Engineering, August, 1990, pp. 49–57.
[21] J. Gouskos, Three benefits every simulation buyer should understand, Industrial Engineering, July, 1992, p. 34.
[22] J.W. Grant, S.A. Weiner, Factors to consider in choosing a graphically animated simulation system, Industrial Engineering, August, 1986, pp. 37–40, 65–68.
[23] P. Gray, I. Borovits, The contrasting roles of Monte Carlo simulation and gaming in decision support systems, Simulation 47(6), 1986, pp. 233–239.
[24] T. Guimaraes, M. Igbaria, M. Lu, The determinants of DSS success: an integrated model, Decision Sciences 23(2), 1992, pp. 409–430.
[25] S.W. Haider, J. Banks, Simulation software products for analyzing manufacturing systems, Industrial Engineering, July, 1986, pp. 98–103.
[26] J. Higdon, Planning a material handling simulation, Industrial Engineering, November, 1988, pp. 55–59.
[27] J.L. Horn, A rationale and test for the number of factors in factor analysis, Psychometrika 30(2), 1965, pp. 179–185.
[28] W.C. House, Business Simulation for Decision Makers, PBI-Petrocelli, New York, 1977.
[29] C.H. Jones, At last real computer power for decision makers, Harvard Business Review, September–October, 1970, pp. 75–89.
[30] L. Keller, C. Harrell, J. Leavy, The three best reasons why simulation fails, Industrial Engineering, April, 1991, pp. 27–31.
[31] F.N. Kerlinger, Foundations of Behavioral Research, Harcourt, Brace and Jovanovich, Fort Worth, TX, 1986.
[32] A.M. Law, S.W. Haider, Selecting simulation software for manufacturing applications: practical guidelines and software survey, Industrial Engineering, May, 1989, pp. 33–46.
[33] A.M. Law, W.D. Kelton, Simulation Modeling and Analysis, 2nd ed., McGraw-Hill, New York, 1993.
[34] A.M. Law, M.G. McComas, How to select simulation software for manufacturing applications, Industrial Engineering, July, 1992, pp. 29–35.
[35] L. Lin, J. Cochran, J. Sarkis, A metamodel-based decision support system for shop floor production control, Computers in Industry 18, 1992, pp. 155–168.
[36] H.C. Lucas, Empirical evidence for a descriptive model of implementation, MIS Quarterly 2(2), 1978, pp. 27–41.
[37] R. Lynch, Implementing packaged application software: hidden costs and new challenges, Systems, Objectives, Solutions 4, 1984, pp. 227–234.
[38] K. Mabrouk, Mentorship: a stepping stone to simulation success, Industrial Engineering, February, 1994, pp. 41–43.
[39] G.T. Mackulak, J.K. Cochran, P.A. Savory, Ascertaining important features for industrial simulation environments, Simulation 63(4), 1994, pp. 211–221.
[40] R.I. Mann, H.J. Watson, A contingency model for user involvement in DSS development, MIS Quarterly, March, 1984, pp. 27–37.
[41] R. McHaney, T.P. Cronan, Computer simulation success: on the use of the end-user computing satisfaction instrument, Decision Sciences 29(2), 1998, pp. 525–534.
[42] R. McHaney, R. Hightower, D. White, EUCS test–retest reliability in representational model decision support systems, Information & Management 36, 1999, pp. 109–119.
[43] R.J. Might, Principles for the design and selection of combat
[44] H. Min, Selection of software: the analytic hierarchy process, International Journal of Physical Distribution & Logistics Management 22(1), 1992, pp. 42–52.
[45] J. Mott, K. Tumay, Developing a strategy for justifying simulation, Industrial Engineering, July, 1992, pp. 38–42.
[46] K. Musselman, Conducting a successful simulation project, in: Proceedings of the 1992 Winter Simulation Conference, Society for Computer Simulation, San Diego, CA, 1992, pp. 115–121.
[47] B.U. Nwoke, D.R. Nelson, An overview of simulation in manufacturing, Industrial Engineering, July, 1993, pp. 43–57.
[48] S. Randhawa, A. Mechling, R. Joerger, A simulation-based resource planning system for Oregon motor vehicles division, Interfaces 19(6), 1989, pp. 40–51.
[49] S. Rivard, S.L. Huff, User developed applications: evaluation of success from the DP perspective, MIS Quarterly 8(1), 1984, pp. 39–50.
[50] J. Rodrigues (Ed.), Directory of Simulation Software, vol. 4, The Society for Computer Simulation, San Diego, CA, 1993.
[51] L.G. Sanders, J.F. Courtney, A field study of organizational factors influencing DSS success, MIS Quarterly 9(1), 1985, pp. 77–93.
[52] SAS Institute, SAS User's Guide, vol. 1, ACECLUS-FREQ, version 6, 4th ed., SAS Institute, Inc., Cary, NC, 1994.
[53] R.L. Schultz, System simulation: the use of simulation for decision making, Behavioral Science 19, 1974, pp. 344–350.
[54] D.W. Straub, Validating instruments in MIS research, MIS Quarterly, June, 1989, pp. 147–166.
[55] P. Sussman, Evaluating decision support software, Datamation, 15 October 1984, pp. 171–172.
[56] J. Swain, Flexible tools for modeling, OR/MS Today, December, 1993, pp. 62–78.
[57] P. Tait, I. Vessey, The effect of user involvement on system success: a contingency approach, MIS Quarterly 12(1), 1988, pp. 91–108.
[58] W.F. Velicer, Determining the number of components from the matrix of partial correlations, Psychometrika 41(3), 1976, pp. 321–327.
[59] H.J. Watson, D. Christy, The evolving use of simulation, Simulation and Games, September, 1982, pp. 351–363.
[60] A. Wilt, D. Goddin, Health care case study: simulating staffing needs and work flow in an outpatient diagnostic center, Industrial Engineering, May, 1989, pp. 22–26.
[61] B.D. Withers, A.A.B. Pritsker, D.H. Withers, A structured definition of the modeling process, in: Proceedings of the 1993 Winter Simulation Conference, Society for Computer Simulation, San Diego, CA, 1993, pp. 1109–1117.
Roger McHaney: For eight years prior to his return to academia, Roger McHaney was employed by the Jervis B. Webb Company. While there, he simulated numerous materials-handling systems for customers, including General Motors, Goodyear, Ford, IBM, Chrysler, Kodak, Caterpillar, the Los Angeles Times, and the Boston Globe. His current research interests include automated guided vehicle system simulation, innovative uses for simulation languages, simulation's use in DSS, and simulation success. After completing a Ph.D. in Computer Information Systems and Quantitative Analysis at the University of Arkansas College of Business, Dr. McHaney became an Assistant Professor at Kansas State University. He is the author of the 1991 Academic Press book, Computer Simulation: A Practical Perspective, and has published in Decision Sciences, The International Journal of Production Research, Decision Support Systems, Simulation, and various other journals.
Timothy Paul Cronan is Professor of Computer Information and Quantitative Analysis at the University of Arkansas, Fayetteville. Dr. Cronan received the D.B.A. from Louisiana Tech University, and is an active member of the Decision Sciences Institute and The Association for Computing Machinery. He has served as regional vice president and on the board of directors of the Decision Sciences Institute and as president of the Southwest Region of the Institute. In addition, he served as associate editor for MIS Quarterly. His research interests include local area networks, downsizing, expert systems, performance analysis and effectiveness, and end-user computing. Publications have appeared in Decision Sciences, MIS Quarterly, OMEGA, The International Journal of Management Science, The Journal of Management Information Systems, Communications of the ACM,