Research

System usage behavior as a proxy for user satisfaction:

an empirical investigation

Charles E. Downing

Operations and Strategic Management Department, Wallace E. Carroll School of Management, Boston College, Chestnut Hill, MA 02167, USA
Tel.: +1-617-552-0435; fax: +1-617-552-0433; e-mail: downinch@bc.edu

Received 3 March 1998; accepted 12 October 1998

Abstract

Organizations are increasingly recognizing that user satisfaction with information systems is one of the most important determinants of the success of those systems. However, current satisfaction measures involve an intrusion into the users' worlds, and are frequently deemed to be too cumbersome to be justified financially and practically. This paper describes a methodology designed to solve this contemporary problem. Based on theory which suggests that behavioral observations can be used to measure satisfaction, system usage statistics from an information system were captured around the clock for 6 months to determine users' satisfaction with the system. A traditional satisfaction evaluation instrument, a validated survey, was applied in parallel, to verify that the analysis of the behavioral data yielded similar results. The final results were analyzed statistically to demonstrate that behavioral analysis is a viable alternative to the survey in satisfaction measurement. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: System usage; User satisfaction; Measurement of user satisfaction; Survey; Behavioral data; Empirical study

1. Introduction

As information technology has become a dominant presence in the business community, both academic and practitioner literature have increasingly recognized the importance of users' satisfaction in the success of information system (IS) applications [3, 10, 14]. One author states that user satisfaction is often considered the most important factor in reviewing the quality of an information system [19].

Given this recognition, several techniques for measuring users' IS satisfaction have subsequently emerged [1, 4, 13]. Methodologies ranging from the use of Likert Scales to complex manipulations of the Semantic Differential have demonstrated impressive validity and reliability. Yet, business executives interviewed for this study readily admit to not applying these measures for anything but a one-shot academic study. Their retrospective assessment of why this is the case centers on one prevailing reason: their organizations find the process too cumbersome to be justified financially and practically.

Careful review of the literature reveals that while deviations in exact methodology and in the number and type of dimensions in the satisfaction equation are many, all the approaches share one underlying similarity: some sort of intrusion into the user's world [1, 4, 8, 9, 12, 13, 15, 18]. In all cases reviewed for this research, the users were either requested to complete a questionnaire, interviewed, or directly observed (with the notable exception of [17]). In the current business climate of downsizing and belt-tightening, it is easy to understand the executives who might not want their employees routinely subjected to such activity.

2. Conceptual framework and research hypothesis

This research focuses on creating a methodology to solve the management paradox of simultaneously needing and desiring information system satisfaction data and being unable or unwilling to constantly survey users to get it. Due to current electronic systems capabilities, capturing system usage data requires little effort and cost. Thus, if such usage data could serve as a proxy for user satisfaction, management would have an ongoing measure of satisfaction which was also practical to obtain. Both theory [7] and a recent path analysis [2] suggest that such a link between computer system satisfaction and computer system usage in fact exists.

Fishbein and Ajzen [7] reason that human intuition has always linked behavior and attitude, and carefully enumerate the evolution of scientific studies surrounding that linkage. They cite Nemeth's study [16], in which the number of seconds a person spent talking to another person was taken as a measure of liking, and Wicker's research [21], which was able to identify studies in which ``at least one attitudinal measure and one overt behavioral measure toward the same object [were] obtained for each subject'' ([21], p. 48). They conclude by stating ``In the preceding discussion we have suggested that behavioral observations can be used to measure the person's attitude'' ([7], p. 357). Concerning the procedures to follow to accomplish such measurement, they explain that investigators will need to empirically test the behavior–attitude relationship in an exploratory fashion, continuing until hypothesized relationships are demonstrated and duplicated.

Similarly, Baroudi et al. [2] provide empirical evidence that system usage and user information satisfaction are linked. They carefully specify that ``user information satisfaction is an attitude toward the information system while system usage is a behavior'' ([2], p. 234), and conclude that ``. . . the study provides evidence that the user's satisfaction with the system will lead to greater system usage.'' While they debate the direction of this linkage (Satisfaction → Usage vs. Usage → Satisfaction), of importance to this study is that a linkage exists.

Thus, since information systems have the ability to easily capture usage behavior, and behavior can be a predictor of attitude, and user information system satisfaction is an attitude, it follows that capturing usage behavior can assist management in determining user satisfaction. The research hypothesis is stated as follows:

Research hypothesis

H0: Analyzing system usage data will prove to be a valid alternative to a survey in measuring user satisfaction with an information system.

Ha: The analysis of usage data will exhibit no relationship to a survey when measuring user satisfaction with an information system, and therefore will not prove to be a valid alternative.

3. Methodology

The goal of the usage-behavior analysis described below was the same as that of the survey: to determine the level of user satisfaction with the primary system. The research model is shown in Fig. 1.

The system studied, named Savings Express, is a 12-line telephone interactive voice response system (IVRS) responsible for providing 401(k) retirement plan information to 10,252 internal employees. As is the case with other IVRSs in this field, customers can use their touch-tone telephones to access personal account or general plan information, request forms and plan brochures, and make various personal account changes (transfer account balances, initiate withdrawals and loans, change contribution amounts, etc.). Additionally, the IVRS allows customers to model unlimited `what if' scenarios of potential loans and projected plan account balances. As a system responsible for all of the input, processing, storage, and output needs of the company's Savings and Profit Sharing Plan, and differing from a `normal' end-user system only in its interface (telephone input and spoken response for output vs. the traditional keyboard and screen), this IVRS provided an excellent field information system for empirical examination of the research hypothesis. A graphical depiction of Savings Express appears in Fig. 2. Note that the components in Fig. 2 which have double underlines beneath them are enhancements to the system which were added midway through this study.

3.1. Measuring user satisfaction

3.1.1. First measure of user satisfaction – traditional survey

The traditional means of determining user satisfaction is through the use of a survey. The goal of this study is not to validate or test a new instrument, but simply to apply the one which is most widely accepted. The literature reveals many successful vehicles for measuring user satisfaction [1, 13], but in the realm of end-user computing the work of Doll and Torkzadeh [4] remains the standard. Their instrument was painstakingly developed and tested for both reliability and validity. Methodological and conceptual issues about their instrument have been raised [6], but test–retest studies have further demonstrated the reliability and stability of the instrument [11, 20]. As such, the Doll and Torkzadeh measure of end-user satisfaction was adopted for this study.

This instrument measures end-user satisfaction across five components – content, accuracy, format, ease of use, and timeliness – using 12 questions with Likert-type scales. The instrument developed for this study followed these specifications, with the exception of the number of questions. Due to practical constraints associated with the company-sponsored survey, six questions were used to address the five components of satisfaction, as opposed to the Doll and Torkzadeh guideline of 12 questions. However, issues of reliability and validity arising from this difference in the number of questions have been addressed [5]. Fig. 3 shows the overall instrument structure, as well as the recommended instrument questions compared to the actual statements used. A copy of the instrument appears in Appendix A.

Finally, for ease of presentation, the statements were given codes, and these codes appear in the rightmost column of Fig. 3.

While researchers have had impressive success in validating this instrument, practical concerns about its obtrusiveness and cost remain.

3.1.2. Second measure of user satisfaction – usage behavior

The methodology for this study involved surveying users as described above, with the additional element of recording precise details of their behavior. To provide the usage data for this comparison, a meta-monitoring system under the IVRS was collecting detailed caller usage data, notably which touch tones were pressed and when. This collection took place 7 days a week, 24 hours a day, during a 6-month period. If the meta-monitoring system analysis was to function similarly to the survey, it needed to address the same parameters which were used in the survey (the codes in Fig. 3). To address these parameters, an automated rule-set had to be constructed for the meta-monitoring system to follow for each parameter. The iterative process described by Fishbein and Ajzen [7] was employed to create these `rules' for the meta-monitoring system to follow to determine values for each parameter.

This process proceeded as follows: a group of experts was questioned, in Delphi fashion, concerning what usage behavior might indicate satisfaction for each parameter. Seven high-level managers, all familiar with information systems in general and with the system studied in this research in particular, were questioned. Based on their answers, acceptable ranges of behavioral equivalents to survey responses (`1' to `5') were created. The goal was to have the time and space dimensions of callers' usage behavior (`time' – when, how often, and how long a caller called and remained in a certain section of Savings Express, and `space' – which sections and options that caller used) analyzed to create equivalent responses to the quality parameter statements.

Take as an example the CONT1 quality parameter (the response to the statement ``The information helps me plan my finances.''). When examining the usage data, should the rules dictate that an average of one call per month by a caller to the General Plan Information Menu means that that caller gives a response of ``1 – strongly agree''? Does it need to be two or more calls? Does at least one call per quarter need to have been made by the caller to the Personal Fund Transfer Information section before that caller can be categorized as responding ``1 – strongly agree'' to the CONT1 parameter? The establishment of these rules, which the meta-monitoring system could use to determine the six necessary parameters, is incredibly subjective at best. To stabilize and quantify the process, its most subjective aspect, the brainstorming as to what the rules might be, was used only to establish ranges within which the rules could fit. In the example just mentioned, acceptable ranges for the CONT1 parameter might be as follows (note that these ranges were allowed to overlap during this phase, to allow flexibility during the construction of the final rules):

Response                   Range
Strongly agree – 1         Average total monthly calls to General Plan Information Menu and Personal Fund Transfer Information > 2
Somewhat agree – 2         Average total monthly calls (to sections listed above) > 1 and < 3
Neutral – 3                Average total monthly calls > 0.5 and < 2
Somewhat disagree – 4      Average total monthly calls > 0 and < 1
Strongly disagree – 5      Average total monthly calls < 0.5

In other words, for a caller to be considered to have responded `2 – Somewhat agree,' his/her average total monthly calls to the General Plan Information Menu and Personal Fund Transfer Information section would have to at least be larger than 1, and could not be 3 or larger. Such a range implies that an average less than or equal to 1 has to be considered a lower response than a `2,' and an average greater than or equal to 3 can only be viewed as a response of `1.'
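To make the overlap concrete, the following is a minimal Python sketch (the production meta-monitoring code was written in Visual Basic; all identifiers here are invented for illustration) that returns every survey-equivalent response whose range admits a given caller's average total monthly calls:

# Overlapping candidate ranges for the CONT1 parameter, as brainstormed
# by the expert panel (see the table above). Keys are the survey-
# equivalent responses ('1' = strongly agree ... '5' = strongly disagree).
CONT1_CANDIDATE_RANGES = {
    1: lambda avg: avg > 2,
    2: lambda avg: 1 < avg < 3,
    3: lambda avg: 0.5 < avg < 2,
    4: lambda avg: 0 < avg < 1,
    5: lambda avg: avg < 0.5,
}

def candidate_responses(avg_monthly_calls):
    """Return every response whose range admits this average.

    Because the ranges deliberately overlap, an average can map to more
    than one candidate, leaving room for the final rule-setting phase
    to pick a single cutoff inside each range.
    """
    return [resp for resp, fits in CONT1_CANDIDATE_RANGES.items()
            if fits(avg_monthly_calls)]

For example, candidate_responses(1.5) returns [2, 3], reflecting the deliberate overlap between the `somewhat agree' and `neutral' ranges.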

After the somewhat lengthy and involved process of establishing the ranges, the final rules for each parameter were set. This rule-setting process took place in a trial and error manner, with the trial and error bounds being the ranges explained above. The ranges were applied to the activity of 500 randomly selected callers. The population for the random selection was anyone who had called the system and entered a Social Security number from 1 April 1993 until 30 June 1993 (the time-frame equivalent of the first survey), and the random selection was performed similarly to the survey random selection. After the 500 Social Security numbers had been selected, the guiding force in the trial and error process was statistical comparison of the meta-monitoring results to the data from the first survey. Hypothesis testing was used, with the mean of a given parameter from the meta-monitoring system being tested against the mean of the similar parameter from the survey. The null hypothesis was that the means of the two parameters were equal, and the alternative hypothesis that they were not, at α = 0.10. If the test showed that the means were not equal, the meta-monitoring code was changed to the extent allowed by the ranges (in an attempt to move the means closer together), and a new test was run. The large amount of subjectivity in the process was therefore lessened by the width of the ranges, which came from the consensus of the expert panel. The meta-monitoring code was changed and tests re-run until an acceptable match was achieved, or until no further testing would be useful (the meta-monitoring code had reached the edge of one of its ranges and the hypothesis test still showed the means not equal). After this process of establishing rules using comparison with the first survey distribution, the rule-set was tested against the second survey distribution for validation. The meta-monitoring code used was written in the programming language Visual Basic.
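The tuning loop just described can be sketched in a few lines. This is illustrative Python only (the paper's code was Visual Basic, and all names here are invented); it reduces a rule to a single `strongly agree' cutoff that slides within its expert-set range until a two-tailed z-test at α = 0.10 no longer rejects equality of the survey and derived means, or the range is exhausted:

import math

def z_statistic(a, b):
    """Large-sample z for the difference of two independent sample means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

def tune_cutoff(usage_averages, survey_scores, lo, hi, step=0.1, z_crit=1.65):
    """Slide a cutoff within the expert-set range [lo, hi] until the
    derived responses and the survey responses have statistically
    indistinguishable means (|z| <= z_crit at alpha = 0.10).

    Highly simplified: a real rule maps averages onto all five responses,
    not just '1' vs. '2'.
    """
    cutoff = lo
    while cutoff <= hi:
        derived = [1 if avg >= cutoff else 2 for avg in usage_averages]
        if abs(z_statistic(survey_scores, derived)) <= z_crit:
            return cutoff          # acceptable match achieved
        cutoff += step             # move within the expert-set range
    return None                    # range exhausted; means still differ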

3.1.3. Data collection

Data collection took place from 1 April 1993 to 30 September 1993. Two distributions of 500 surveys were mailed to employees who had called the system, the first in late May and the second in late August. Recipients were asked to rate their agreement with the statements listed in Fig. 3 on a 1–5 scale (1 being `Strongly agree'). As mentioned, the meta-monitoring system was continuously collecting data throughout the survey distribution process. A graphical depiction of the data collection timeline appears in Fig. 4.

4. Results

4.1. The survey

As mentioned, survey recipients were asked to rate their agreement or disagreement with the statements in Fig. 3. A copy of the instrument appears in Appendix A. Response rates were 52.6% for the first survey and 56% for the second. Data for the two surveys appear in Table 1.

4.2. Usage behavior

The final rule sets which resulted from this process appear in Table 2. A sample of the raw meta-monitoring analysis output (with the Social Security numbers removed) which was a result of these rules appears in Appendix B. The results of the z-tests of these parameter results versus the survey parameter results for the second distribution appear in Table 3. Note that two-tailed tests were conducted, with the goal being to determine if the means were equal, with the hypotheses being:

H0: μ1 − μ2 = 0
Ha: μ1 − μ2 ≠ 0

The rejection region is z < −z_{α/2} or z > z_{α/2}, and with α = 0.10 and the large degrees of freedom the sample sizes afford, z_{α/2} ≈ 1.65. Note that variables with `_S' after them refer to the survey response mean of the parameter listed in front, and variables with `_M' after them refer to the meta-monitoring equivalent response means to these parameters. For example, ACC_S is the mean of the collection of survey responses to the statement ``The information is accurate'' and ACC_M is the mean of the collection of meta-monitoring equivalent responses to the same statement.
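For reference, the z-statistics reported in Table 3 follow the standard large-sample test for the difference of two independent means; the formula below is a reconstruction from the hypotheses above (the original text does not display it explicitly):

z = (x̄_S − x̄_M) / sqrt(s_S²/n_S + s_M²/n_M)

where x̄, s², and n are the sample mean, variance, and size of the survey (`_S') and meta-monitoring (`_M') response collections for a given parameter.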

Fig. 4. Data collection timeline.

Table 1
Survey summary data

Parameter   Total of individual ratings (each 1–5)   Number of valid responses (1–5)   Number of blanks   Number of 0s   Average rating (1–5)

First survey summary data
CONT1       466    252    8     6     1.85
FORM        376    250    10    6     1.50
ACC         336    240    8     18    1.40
EASY        334    256    7     3     1.30
CONT2       575    235    14    17    2.45
TIME        935    223    13    30    4.19

Second survey summary data
CONT1       490    270    6     4     1.81
FORM        445    270    6     4     1.65
ACC         382    257    10    13    1.49
EASY        375    271    6     3     1.38
CONT2       656    260    10    10    2.52
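As a consistency check on the table's layout, the rightmost column is simply the total of the individual ratings divided by the number of valid responses; for the first survey's CONT1 row, for instance, 466/252 ≈ 1.85.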


Table 2
Final rule sets which translate usage data into satisfaction parameters

Final rules for CONT1 parameter (the response to the statement ``The information helps me plan my finances.'')

Meta-monitoring equivalent response   Rule
Non-respondent                        Sum = 0
1 – Strongly agree                    Average ≥ 3.5
2 – Somewhat agree                    Average ≥ 2
3 – Neutral                           Average ≥ 1
4 – Somewhat disagree                 Average ≥ 0.5
5 – Strongly disagree                 Average < 0.5
0 – Don't know                        –

Calculations: Sum = (calls to Personal Account Information Menu for the quarter) + 3*(calls to General Plan Information Menu for the quarter) + 2*(calls to Personal Contribution section for the quarter) + 2*(calls to Personal Transfer section) + 2*(calls to the Personal Loan Information section) + 2*(calls to the Loan Modeling section). Average = Sum/(number of months in which calls were made).

Final rules for FORM parameter (the response to the statement ``The information helps me understand the Savings and Profit Sharing Plan and the features and options available to me.'')

Meta-monitoring equivalent response   Rule
Non-respondent                        Suminf = 0 or Sumpers = 0
1 – Strongly agree                    Ratio ≥ 0.5
2 – Somewhat agree                    Ratio ≥ 0.3
3 – Neutral                           Ratio ≥ 0.2
4 – Somewhat disagree                 Ratio ≥ 0.15
5 – Strongly disagree                 Ratio ≥ 0.1
0 – Don't know                        Ratio < 0.1

Calculations: Suminf = (total calls to General Plan Information Menu for the quarter) + (total calls to Savings Express Explanation section for the quarter). Sumpers = total calls to the Personal Account Information Menu for the quarter. Ratio = Suminf/Sumpers.

Final rules for CONT2 parameter (the response to the statement ``I would like to receive more information.'')

Meta-monitoring equivalent response   Rule
Non-respondent                        Sumpin ≥ 3
1 – Strongly agree                    Ratio = 0 and (Staravg = 0 or Staravg > 120)
2 – Somewhat agree                    Ratio ≤ 0.35 and (Staravg = 0 or Staravg > 90)
3 – Neutral                           Ratio ≤ 0.5 and (Staravg = 0 or Staravg > 60)
4 – Somewhat disagree                 Ratio ≤ 0.85 and Staravg > 30
5 – Strongly disagree                 Ratio ≤ 1.5 and Staravg > 10
0 – Don't know                        Ratio > 1.5

Calculations: Sumpers = total calls to the Personal Account Information Menu for the quarter. Sumpin = total calls to the PIN Change section for the quarter. Ratio = Sumpin/Sumpers. Sumstar = total number of times the `star' key was pressed. Sumtostar = total seconds elapsed before pressing the star key, summed for each occurrence of pressing the star key. Staravg = Sumtostar/Sumstar.

Final rules for ACC parameter (the response to the statement ``The information is accurate.'')

Meta-monitoring equivalent response   Rule
1 – Strongly agree                    Avg 1 = 1 or (Avg 1 = 2 and Avg 2 = 1)
2 – Somewhat agree                    Avg 1 = 2 or (Avg 1 = 3 and Avg 2 = 1)
3 – Neutral                           Avg 1 = 3 or (Avg 1 = 4 and Avg 2 = 1)
4 – Somewhat disagree                 Avg 1 = 4 or (Avg 1 = 5 and Avg 2 = 1)
5 – Strongly disagree                 Avg 1 = 5
0 – Don't know                        Avg 1 > 5

Calculations: Sum 1 = (maximum of total calls to any of the following modules for month #1: Account Balance section, Personal Contribution section, Personal Transfer section, Personal Withdrawal section, and Personal Loan Information section) + (the same maximum for month #2) + (the same maximum for month #3). Sum 2 = (second largest number of total calls to any of the same modules for month #1) + (the same for month #2) + (the same for month #3). Avg 1 = integer value of [Sum 1/(number of months in which calls were made)]. Avg 2 = integer value of [Sum 2/(number of months in which calls were made)].

Final rules for EASY parameter (the response to the statement ``I can quickly and easily obtain the information I need.'')

Meta-monitoring equivalent response   Rule
Non-respondent                        Sumpers ≤ 4 and Sumtostar ≠ 0 and Sumend ≠ 0
1 – Strongly agree                    Sumtostar = 0 or Sumstar = 0 or Staravg ≥ 360
2 – Somewhat agree                    Staravg ≥ 240
3 – Neutral                           Staravg ≥ 120
4 – Somewhat disagree                 Staravg ≥ 60
5 – Strongly disagree                 Staravg < 60
0 – Don't know                        –

Calculations: Sumpers = total calls to the Personal Account Information Menu for the quarter. Sumend = total calls to the End Call section for the quarter. Sumstar = total number of times the `star' key was pressed. Sumtostar = total seconds elapsed before pressing the star key, summed for each occurrence of pressing the star key. Staravg = Sumtostar/Sumstar.

Final rules for TIME parameter (the response to the statement ``The information is out-of-date.'')

Meta-monitoring equivalent response   Rule
Non-respondent                        PersAvg ≤ 0.5
1 – Strongly agree                    DaySum ≥ 4
2 – Somewhat agree                    DaySum ≥ 3
3 – Neutral                           DaySum ≥ 2
4 – Somewhat disagree                 DaySum ≥ 1
5 – Strongly disagree                 DaySum = 0
0 – Don't know                        –

Calculations: Sumpers = total calls to the Personal Account Information Menu for the quarter. PersAvg = Sumpers/(number of months in which calls were made).
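To illustrate how the final rules of Table 2 operate, below is a minimal Python sketch of the CONT1 rule (the production code was Visual Basic; the section names and the ≥ reading of the thresholds follow the reconstruction above, and all identifiers are illustrative):

def cont1_response(calls, months_active):
    """Derive the CONT1 survey-equivalent response ('1'-'5', or None for
    a non-respondent) from one caller's quarterly call counts.

    `calls` maps section names to quarterly totals; the weights follow
    the CONT1 Calculations row of Table 2.
    """
    weighted_sum = (calls.get("personal_account_menu", 0)
                    + 3 * calls.get("general_plan_menu", 0)
                    + 2 * calls.get("personal_contribution", 0)
                    + 2 * calls.get("personal_transfer", 0)
                    + 2 * calls.get("personal_loan_info", 0)
                    + 2 * calls.get("loan_modeling", 0))
    if weighted_sum == 0:
        return None                       # non-respondent: Sum = 0
    average = weighted_sum / months_active
    for response, cutoff in [(1, 3.5), (2, 2.0), (3, 1.0), (4, 0.5)]:
        if average >= cutoff:             # thresholds checked top-down
            return response
    return 5                              # Average < 0.5: strongly disagree

For instance, a caller who made two calls per month to the General Plan Information Menu over a three-month quarter has Sum = 18 and Average = 6, yielding a derived response of `1'.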


Table 3
Statistical justification of meta-monitoring parameter rules – comparison of resulting meta-monitoring parameter means (`_M') with survey parameter means (`_S')

Parameters             Null hypothesis      Alternative hypothesis   z-statistic   Result
CONT1_S vs. CONT1_M    CONT1_S = CONT1_M    CONT1_S ≠ CONT1_M         0.97         Fail to reject the hypothesis that CONT1_S = CONT1_M
FORM_S vs. FORM_M      FORM_S = FORM_M      FORM_S ≠ FORM_M           0.85         Fail to reject the hypothesis that FORM_S = FORM_M
CONT2_S vs. CONT2_M    CONT2_S = CONT2_M    CONT2_S ≠ CONT2_M        −1.69         Reject the hypothesis that CONT2_S = CONT2_M
ACC_S vs. ACC_M        ACC_S = ACC_M        ACC_S ≠ ACC_M             1.68         Reject the hypothesis that ACC_S = ACC_M
EASY_S vs. EASY_M      EASY_S = EASY_M      EASY_S ≠ EASY_M           1.41         Fail to reject the hypothesis that EASY_S = EASY_M
TIME_S vs. TIME_M      TIME_S = TIME_M      TIME_S ≠ TIME_M          −0.86         Fail to reject the hypothesis that TIME_S = TIME_M


It is important to note that when the result is ``fail to reject the hypothesis that the two means are equal,'' statistically this does not mean that it is `accepted' that the means are equal.

Technically, the meta-monitoring parameters CONT2_M and ACC_M were shown to be not equal to their survey counterparts (even after the meta-monitoring rules for these parameters had been taken to the extreme side of their ranges): CONT2_S = CONT2_M was rejected because −1.69 < −1.65, and ACC_S = ACC_M was rejected because 1.68 > 1.65. As these hypotheses were `just barely' rejected, histograms of the percentage of responses were plotted to better examine the distributions of the responses, and these histograms appear in Fig. 5.

These distributions appear to be quite similar, and the null hypotheses were just on the fringe of rejection. Therefore, z-tests were run on the means of the above percentage distributions, and the results were z-statistics of 0.65 for CONT2 and −1.53 for ACC, both of which result in a conclusion of ``fail to reject that the means are equal.'' It is therefore the opinion of this researcher that, on these grounds, the results from the meta-monitoring CONT2 and ACC parameters can still be considered useful.

A summary of the parameter means for the survey and the parameter means for the usage data collected and analyzed by the meta-monitoring system appears in Table 4. Numbers from both the first survey distribution comparison (`rule derivation') and the second survey distribution comparison (`rule validation') are included. When examining these means, it is important to note that the comparisons of interest to the study are between the meta-monitoring and survey parameter means; the non-movement of the means after the enhancements attracts attention, but the purpose of the study was to judge the meta-monitoring system's ability to mirror the survey.

Fig. 5. Response percentage histograms.

Table 4
Survey and meta-monitoring satisfaction parameter mean comparison (possible mean values range from ``1 – strongly agree'' (extremely satisfied) to ``5 – strongly disagree'' (extremely dissatisfied))

            Before enhancements                    After enhancements
Parameter   Survey mean   Meta-monitoring mean    Survey mean   Meta-monitoring mean
Cont1       1.85          1.78                    1.82          1.89
Form        1.50          1.44                    1.65          1.58
Cont2       1.47          1.67                    1.51          1.47
Acc         1.43          1.33                    1.43          1.36
Easy Time   1.31          1.21                    1.38          1.27

5. Discussion and conclusions

This research has shown that system usage behavior, which is easily tracked and recorded, can be analyzed to produce a measure of user satisfaction. The rules established by the experts used for this study allowed a meta-monitoring information system to analyze system usage data and arrive at a measure of user satisfaction which was similar to that found using a validated survey. The success of this approach in a single organizational test has exciting potential ramifications. If further testing of the methodology were to validate these results, business organizations could have an ongoing measure of user satisfaction at minimal relative expense. This continuous measure could be achieved with start-up energy and costs only slightly higher than those associated with current one-time measures of user satisfaction, and with nearly zero additional expenses thereafter. Thus, managers could easily and confidently track user satisfaction with information systems in their organization, and appropriate action could be taken if satisfaction dropped off suddenly or gradually. Such tracking continues to take place with the system described in this study.

While the potential uses of such a system could be consequential, there are limitations in the research, and further testing and verification are needed. Specifically, the following activities are desirable:

1. install and study additional meta-monitoring systems, to solidify conclusions based on inductive reasoning.
2. install and study additional meta-monitoring systems, to observe the system's performance when user satisfaction has a significant increase or decrease.
3. install and study additional meta-monitoring systems and control systems simultaneously, to achieve enhanced control of the results.
4. install and study different types of meta-monitoring systems, in particular systems which must be used (in contrast to the voluntary nature of the usage of this system).
5. study the meta-monitoring rule creation process in greater depth and with greater formality, with more experts being involved.

Appendix A

Survey instrument

HOW DO YOU LIKE SAVINGS EXPRESS?

Please answer the following questions based on your experience with the Savings and Profit Sharing Plan information you have received from Savings Express regarding your account balance, amounts available for loan or withdrawal, etc. since Savings Express started April 1st, 1993. Assume that questions are referring to the automated part of Savings Express (recorded voice) unless otherwise indicated.

(Scale: 1 – Strongly agree . . . 5 – Strongly disagree; 0 – Don't know)

The information helps me plan my finances.                                        1 2 3 4 5 0
The information helps me understand the Savings and Profit Sharing Plan
and the features and options available to me.                                     1 2 3 4 5 0
Savings Express representatives are courteous and helpful.                        1 2 3 4 5 0
I feel comfortable that the information which Savings Express conveys to me
will be kept confidential.                                                        1 2 3 4 5 0
The information is consistent.                                                    1 2 3 4 5 0
The information is accurate.                                                      1 2 3 4 5 0
I can quickly and easily obtain the information I need.                           1 2 3 4 5 0
I would like to receive more information.                                         1 2 3 4 5 0
The information is out-of-date.                                                   1 2 3 4 5 0
Overall, I am satisfied with the way the information is communicated to me.       1 2 3 4 5 0

THANK YOU

Please return your survey in the pre-addressed, postage-paid envelope by June 15, 1993.

PLEASE GIVE US YOUR PERSONAL PROFILE

What is your age?   1 (25 or under)   2 (26–35)   3 (36–45)   4 (46–55)   5 (56 or older)
What is your sex?   1 (Female)   2 (Male)
Have you ever used an automated voice response system before?   1 (Yes)   2 (No)

Appendix B

Sample of actual meta-monitoring system analysis

Output contains derived responses for each randomly selected Social Security number to the six quality parameters. Note that the Social Security numbers have been removed in the interest of confidentiality (Table 5).

Table 5
Meta-monitoring derived answers to the six quality parameters, from the 1 April 1993 to 30 June 1993 dump

CONT1   FORM   CONT2   ACC   EASY   TIME
2       –      –       1     –      4

References

[1] J.E. Bailey, S.W. Pearson, Development of a tool for measuring and analyzing computer user satisfaction, Management Science 29(5), 1983, pp. 530–545.

[2] J.J. Baroudi, M.H. Olson, B. Ives, An empirical study of the impact of user involvement on system usage and information satisfaction, Communications of the ACM 29(3), 1986, pp. 232–238.

[3] G. Cole, User acceptance key to success of voice processing, Computing Canada 18(13), 1992, pp. 39–40.

[4] W.J. Doll, G. Torkzadeh, The measurement of end-user computing satisfaction, MIS Quarterly 12(2), 1988, pp. 259–274.

[5] C.E. Downing, Rhetoric or reality? The professed satisfaction of older customers with information technology, Journal of End User Computing 9(1), 1997, pp. 15–27.

[6] J. Etezadi-Amoli, A. Farhoomand, On end-user computing satisfaction, MIS Quarterly 15(1), 1991, pp. 1–4.

[7] M. Fishbein, I. Ajzen, Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research, Addison-Wesley, Boston, MA, 1975.

[8] M. Fleischer, J. Morrell, The use of office automation by managers: A survey, Information Management Review 4(1), 1988, pp. 1–13.

[9] L. Foster, D. Flynn, Management information technology: Its effects on organizational form and function, MIS Quarterly 8(4), 1984, pp. 229–235.

[10] C. Gallagher, Perceptions of the value of a management information system, Academy of Management Journal 17, 1974, pp. 46–55.

[11] A. Hendrickson, K. Glorfeld, T. Cronan, On the repeated test-retest reliability of the end-user computing satisfaction instrument: A comment, Decision Sciences 25(4), 1994, pp. 655–667.

[12] S. Hiltz, K. Johnson, User satisfaction with computer-mediated communication systems, Management Science 36(6), 1990, pp. 739–764.

[13] B. Ives, M. Olson, J. Baroudi, The measurement of user information satisfaction, Communications of the ACM 26(10), 1983, pp. 785–794.

[14] J. Maglitta, Anxious allies: 1995 CEO/CFO survey, Computerworld Special Report 12, 1995, pp. 5–9.

[15] Z. Millman, J. Hartwick, The impact of automated office systems on middle managers and their work, MIS Quarterly 11(4), 1987, pp. 479–490.

[16] C. Nemeth, Effects of free versus constrained behavior on attraction between people, Journal of Personality and Social Psychology 15, 1970, pp. 302–311.

[17] S. Sampson, Ramifications of monitoring service quality through passively solicited customer feedback, Decision Sciences 27(4), 1996, pp. 601–622.

[18] M. Sheldrick, Technology for the elderly, Electronic News 38(1912), 1992, p. 22.

[19] P. Tom, Managing Information as a Corporate Resource, HarperCollins, New York, 1991.

[20] G. Torkzadeh, W. Doll, Test-retest reliability of the end-user computing satisfaction instrument, Decision Sciences 22(1), 1991, pp. 26–38.

[21] A. Wicker, Attitudes vs. actions: The relationship of verbal and overt behavioral responses to attitude objects, Journal of Social Issues 25, 1969, pp. 41–78.
