

Combining Qualitative and Quantitative Methods in Information Systems Research: A Case Study¹

By: Bonnie Kaplan
Department of Quantitative Analysis and Information Systems
University of Cincinnati
Cincinnati, OH 45221-0130

Dennis Duchon
Division of Management and Marketing
College of Business
University of Texas at San Antonio
San Antonio, TX 78285-0634

Abstract

This article reports how quantitative and qualitative methods were combined in a longitudinal multidisciplinary study of interrelationships between perceptions of work and a computer information system. The article describes the problems and contributions stemming from different research perspectives and methodological approaches. It illustrates four methodological points: (1) the value of combining qualitative and quantitative methods; (2) the need for context-specific measures of job characteristics rather than exclusive reliance on standard context-independent instruments; (3) the importance of process measures when evaluating information systems; and (4) the need to explore the necessary relationships between a computer system and the perceptions of its users, rather than unidirectional assessment of computer system impacts on users or of users' characteristics on computer system implementation.

Despite the normative nature of these points, the most important conclusion is the desirability of a variety of approaches to studying information systems. No one approach to information systems research can provide the richness that information systems, as a discipline, needs for further advancement.

Keywords: Methodology, research methods, research perspectives, qualitative methods, interpretivist perspective, computer system impacts, computer system evaluation, organizational impacts, work, medical and health care applications

ACM Categories: K.4.3, H.0, J.3

¹ This paper was presented at the Ninth Annual International Conference on Information Systems, Minneapolis, MN, November 30-December 3, 1988.

Introduction

Information systems had its origins in a variety of reference disciplines with distinct theoretical research perspectives on the important issues to study and the methods to study them (Bariff and Ginzberg, 1982; Dickson, et al., 1982; Mendelson, et al., 1987). This article describes a study that combined some of these distinct perspectives and methods. The article discusses how limitations of one research perspective can be addressed by also using an alternative. This emphasis reflects the lesser familiarity information systems researchers might have with the perspective receiving more in-depth discussion and promotion. A key point of this article is the importance that both perspectives had for this study; no implication of preference in perspectives is intended.

The discussion of perspectives provides background for understanding the process and methods of this research. Following the discussion, the article describes how this study evolved. The emphasis is on methods rather than on research findings.

The positivist perspective and quantitative methods

Despite the differences in reference disciplines and the debate over a paradigm for information systems, American information systems research generally is characterized by a methodology of formulating hypotheses that are tested through controlled experiment or statistical analysis. The assumption underlying this methodological approach is that research designs should be based on the positivist model of controlling (or at least measuring) variables and testing pre-specified hypotheses (Kauber, 1986), although alternative methods might be acceptable until research has reached this more advanced and "scientific" stage. Despite some recent recognition given different research perspectives and methods (e.g., Ives and Olson, 1984; Klein, 1986; Kling, 1980; Lyytinen, 1987; Markus and Robey, 1988; Weick, 1984), even those who argue for introducing doctoral students to such alternative approaches as field study and simulation methods nevertheless advocate research based primarily on the positivist tradition (e.g., Bariff and Ginzberg, 1982; Dickson, et al., 1982).

Exclusive reliance on statistical or experimental testing of hypotheses has been soundly criticized in the social sciences, where some of its major proponents have called its effects "disastrous" (Cook and Campbell, 1979, p. 92). There are two grounds on which the approach can be faulted. First, the assumption that only through statistical or experimental hypothesis testing will science progress has come under attack, particularly by psychologists, who perhaps have the dubious distinction of having practiced it the longest. Meehl (1978), for example, argues that science does not, and cannot, proceed by incremental gains achieved through statistical significance testing of hypotheses. Sociologists, too, have contributed to this debate, notably with Glaser and Strauss' (1967) influential argument for theory building through inductive qualitative research rather than through continual hypothesis testing.

A second fault with the approach is the reliance on experimental or statistical control as the defining feature of scientific research. This reliance stems from the admirable goal of controlling experimenter bias by striving for objective measures of phenomena. Achieving this goal has been assumed to require the use of quantifiable data and statistical analysis (Downey and Ireland, 1983; Kauber, 1986) and also removing the effects of context in order to produce generalizable, reproducible results.

However, because the study of social systems involves so many uncontrolled — and unidentified — variables, methods for studying closed systems do not apply as well in natural settings as in controlled ones (Cook and Campbell, 1979; Manicas and Secord, 1983; Maxwell, et al., 1986). Moreover, the simplification and abstraction needed for good experimental design can remove enough features from the subject of study that only obvious results are possible. As illustrated in the next section, the stripping of context buys "objectivity" and testability at the cost of a deeper understanding of what actually is occurring.²

² We would like to credit an anonymous reviewer with this point.

The interpretive perspective and qualitative methods

The need for context-dependent research has been remarked upon by researchers in a variety of disciplines that, like information systems, necessarily incorporate field research methods. Even such strong advocates of quantitative and experimental approaches in behavioral research as Cook and Campbell (1979) state, "Field experimentation should always include qualitative research to describe and illuminate the context and conditions under which research is conducted" (p. 93).

Immersion in context is a hallmark of qualitative research methods and the interpretive perspective on the conduct of research. Interpretive researchers attempt to understand the way others construe, conceptualize, and understand events, concepts, and categories, in part because these are assumed to influence individuals' behavior.

The researchers examine the social reality and intersubjective meanings held by subjects (Bredo and Feinberg, 1982a) by eliciting and observing what is significant and important to the subjects in situations where the behavior occurs ordinarily. Consequently, qualitative methods are characterized by (1) the detailed observation of, and involvement of the researcher in, the natural setting in which the study occurs, and (2) the attempt to avoid prior commitment to theoretical constructs or to hypotheses formulated before gathering any data (Yin, 1984).

Qualitative strategies emphasize an interpretive approach that uses data to both pose and resolve research questions. Researchers develop categories and meanings from the data through an iterative process that starts by developing an initial understanding of the perspectives of those being studied. That understanding is then tested and modified through cycles of additional data collection and analysis until a coherent interpretation is reached (Bredo and Feinberg, 1982a; Van Maanen, 1983b). Thus, although qualitative methods provide less explanation of variance in statistical terms than quantitative methods, they can yield data from which process theories and richer explanations of how and why processes and outcomes occur can be developed (Markus and Robey, 1988).

Research traditions in information systems

The growing recognition of the value of qualitative methods in social, behavioral, organizational, and evaluation research is manifest in studies and research methodology texts (Argyris, 1985; Bredo and Feinberg, 1982b; Lincoln and Guba, 1985; Miles and Huberman, 1984; Mintzberg, 1973; Patton, 1978; Van Maanen, 1983c). Van Maanen (1983a), for example, has long advanced and practiced these approaches in organizational research. However, despite the strong ties of information systems with organizational and behavioral research, the use of qualitative research, though practiced and advocated in information systems, has not been as visible in this field as in others. Instead, recently there has been greater reliance on laboratory studies and surveys (Goldstein, et al., 1986).

The dominant approach to information technology studies has been based on a positivistic experimental ideal of research. Using this approach, researchers examine the effects of one or more variables on another. These analyses tend to treat the research objects in one of two ways. Either they portray information technology as the determining factor and users as passive, or they view users or organizations as acting in rational consort to achieve particular outcomes through the use of information technology. In either case, the nature of the information technology and users is considered static in that they are assumed to have an essential character that is treated as unchanging over the course of the study (Bakos, 1987; Lyytinen, 1987).

Markus and Robey (1988) characterize such approaches as "variance theory formulations of logical structure and an imperative conception of causal agency." In variance theories, some elements are identified as antecedents, and these are conceived as necessary and sufficient conditions for the elements identified as outcomes to occur. According to Markus and Robey, much of the thinking about the consequences of information technology in organizations assumes that either technology ("the technological imperative") or human beings ("the organizational imperative") are the antecedents, or agents, of change rather than that change emerges from complex indeterminate interactions between them ("the emergent perspective").

Most studies of computer systems are based on methods that measure quantitative outcomes. These outcomes can be grouped into technical, economic, and effectiveness and performance measures. Such studies treat organizational features, user features, technological features, and information needs as static, independent, and objective rather than as dynamic, interacting constructs, i.e., as concepts with attributes and meanings that may change over time and that may be defined differently according to how individual participants view and experience the relationships between them.

Because such studies are restricted to readily measured static constructs, they neglect aspects of the cultural environment and of social interaction and negotiation that could affect not only the outcomes (Lyytinen, 1987), but also the constructs under study. Indeed, most evaluations of computer information systems exhibit these characteristics. Published accounts of computing traditionally focus on selected technical and economic characteristics of the computer system that are assessed according to what Kling and Scacchi (1982) call the "discrete-entity model." Economic, physical, or information processing features are explicitly chosen for study under the assumption that the computer system can be broken down into relatively independent elements of equipment, people, organizational processes, and the like. These elements can then be evaluated independently and additively. Social or political issues often are ignored.


That these assumptions underlie much of the research on information technology is evident in the implementation and impacts literature. The effects of some intervention are studied with respect to implementation success or impacts on, for example, organizational structure, user attitudes, or job satisfaction (Bakos, 1987; Danziger, 1985; Ives and Olson, 1984; Kling, 1980; Markus and Robey, 1988). In these studies, information systems or computers are treated as having "impacts" (Danziger, 1985) rather than as socially constructed concepts with meanings that are affected by the "impacts" and that change from person to person or over time. Few such impact studies are longitudinal — a design promoted in information systems research to track changes over time by collecting data as events occur rather than retrospectively (Franz and Robey, 1984; Vitalari, 1985).

Often such studies do not proceed from an interactionist framework — one that focuses on the interaction between characteristics related to people or subunits affected by the computer system and characteristics related to the system itself (Markus and Robey, 1988). For example, they tend not to explore interrelationships between job-related issues and effectiveness of information technology. Jobs are often considered fixed, even though there are empirical and theoretical reasons to expect that computers change the amount and nature of work performed by system users (Brooks, et al., 1977; Fok, et al., 1987; Kemp and Clegg, 1987; Kling, 1980; Millman and Hartwick, 1987; Zuboff, 1982; 1988). Moreover, job-related issues are interpreted differently by different individuals within an organization, and these differences can affect what constitutes a technology's "effectiveness" (Kling, 1980; Lyytinen, 1987). Goodhue (1986), for example, advises assessing the "fit" of an information system with a task. However, one person performing the task may have a rather different view of it than another person performing ostensibly the same task, and thus the "fit" will differ for different users.

Consequently, different individuals may have different responses to the same system. Interactionist theories would account for this response. Theories that assume that the individual, the job or task, the organization, or the technology is fixed and that one of these determines outcomes would not (Markus and Robey, 1988).

Some argue that each type of research method has its appropriate uses (Markus and Robey, 1988; Rockart, 1984; Weick, 1984); different research perspectives focus on different research questions and analytical assumptions (Kling, 1980; Kling and Scacchi, 1982; Lyytinen, 1987; Markus, 1983; Markus and Robey, 1988). However, there is disagreement concerning the value and use of suggested alternative theoretical perspectives and practical approaches, such as critical social theory (Klein, 1986), structuration theory (Barley, 1986), case study (Benbasat, et al., 1987), and socio-technical design (Fok, et al., 1987; Mumford and Henshall, 1979). On the one hand, Mumford (1985) advocates a qualitative approach. She calls for studies of a "total" situation through action-centered, interdisciplinary, participatory research in which research questions and hypotheses evolve as new developments are introduced. Lyytinen (1987) makes a similar appeal for case studies and action research on the grounds that "this research strategy seems to be the only means of obtaining sufficiently rich data" and because the validity of such methods "is better than that of empirical studies." On the other hand, even some proponents of case study base their position on a research design that fits the quantitative or quasi-experimental approach rather than the qualitative one (Benbasat, et al., 1987; Campbell, 1984; Yin, 1984). Moreover, there has been strong sentiment that information systems researchers need to move beyond case study to more experimental laboratory or field tests (Benbasat, 1984).

Combining Methods

Although not the dominant paradigm, qualitative methods and interpretive perspectives have been used in a variety of ways in information systems research (Barley, 1986; Fok, et al., 1987; Franz and Robey, 1984; Goldstein, et al., 1986; Hirschheim, et al., 1987; Markus, 1983; Mumford and Henshall, 1979; Mumford, et al., 1985). Interpreting information technology in terms of social action and meanings is becoming more popular as evidence grows that information systems development and use is a social as well as technical process that includes problems related to social, organizational, and conceptual aspects of the system (Boland, 1978; Hirschheim, et al., 1987; Kling and Scacchi, 1982; Lyytinen, 1987; Markus, 1983). However, many information systems researchers who recognize the value of qualitative methods often portray these methods either as stand-alone or as a means of exploratory research preliminary to the "real" research of generating hypotheses to be tested using experimental or statistical techniques (Benbasat, 1984). Even papers in which qualitative and quantitative methods are combined rarely report the study's methodological rationale or details (Benbasat, et al., 1987). One result is the failure to discuss how qualitative methods can be combined productively with quantitative ones.

There has been a move in other fields toward combining qualitative and quantitative methods to provide a richer, contextual basis for interpreting and validating results (Cook and Reichardt, 1979; Light and Pillemer, 1982; Maxwell, 1986; Meyers, 1981; Van Maanen, et al., 1982; 1983a). These methods need not be viewed as polar opposites (Van Maanen, 1983b). It is possible to integrate quantitative and qualitative methods (Maxwell, et al., 1986). Combining these methods introduces both testability and context into the research. Collecting different kinds of data by different methods from different sources provides a wider range of coverage that may result in a fuller picture of the unit under study than would have been achieved otherwise (Bonoma, 1985). Moreover, using multiple methods increases the robustness of results because findings can be strengthened through triangulation — the cross-validation achieved when different kinds and sources of data converge and are found congruent (Benbasat, et al., 1987; Bonoma, 1985; Jick, 1983; Yin, 1984), or when an explanation is developed to account for all the data when they diverge (Trend, 1979).

This article describes how qualitative and quantitative methods were combined during the first phase of an ongoing multi-method longitudinal study. Detailed research results are reported elsewhere (Kaplan, 1986; 1987; Kaplan and Duchon, 1987a; 1987b; 1987c) and summarized here. This article has a methodological focus. It describes the development and process of the research and omits all but sketches of the data and analysis necessary to understand how the research evolved. Particular attention is given to how both quantitative and qualitative methods were used productively in a collaboration among investigators from different research perspectives.

Research Setting

Organizational and information context

Computers have been used more in clinical laboratories than in many other areas of medical practice (Paplanus, 1985). The clinical laboratory represents a microcosm of automation and information needs from throughout a medical center (Lincoln and Korpman, 1980). The laboratory is responsible for performing tests ordered by physicians to diagnose illness or track the course of therapy and disease. Laboratories meet this responsibility by receiving and processing physicians' test orders, i.e., collecting specimens (such as blood) from patients, performing the designated tests (such as blood sugar measurements or assessments of bacterial sensitivity to antibiotics), and reporting the test results for use by the physician and for inclusion in the patient's medical record. Laboratory technologists, who are specially trained, perform the laboratory tests. They may also report the results and discuss them with those treating the patient.

The principal function of computers in clinical laboratories involves data management. Computers can relieve the clerical burden of data acquisition and transcription while adding new data entry and computer-related tasks. In addition, they improve the legibility, organization, and accuracy of laboratory results reports; increase productivity and efficiency; reduce transcription error; and change laboratory organization and turnaround time for test results (Brooks, et al., 1977; Flagle, 1974; Lewis, 1979; Nicol and Smith, 1986). Thus, such computer information systems affect both the process of work and the service product of a clinical laboratory.

The research site

Research was conducted within the Department of Pathology and Laboratory Medicine at a 650-bed midwestern metropolitan university medical center. A widely used and well-respected commercial laboratory computer information system was installed in April 1985 for use by all nine laboratories within the Division of Laboratory Medicine. These nine laboratories were responsible for laboratory work to support the care of patients admitted to the hospital and those treated in clinics and in the emergency unit. The laboratories also performed specialty tests for other institutions.

This research site was selected because a new computer information system was replacing a manual operation in a context representative of the entire institution's information systems needs.

Another advantage was that multiple organizational units would use the same computer system. Consequently, a comparison between units would be possible under conditions where system variables would be reasonably constant.

Thus, the study could include both macro and micro units of analysis: individuals, workers and managers, individual laboratories as a whole, the department in which the laboratories were situated, and the organization as a whole. Mixing levels of analysis made it possible to explore the interplay among units at each level and across levels (Markus and Robey, 1988).

The site became available because of the first author's prior contact with the director of laboratory medicine. The director arranged for entry into the laboratories and meetings and gave his support throughout the study's duration. He was available to the research team as needed.

Research team

The project was undertaken by four faculty members in the College of Business Administration at the University of Cincinnati. The researcher from the information systems (IS) area, Bonnie Kaplan, conceived the project, conducted the fieldwork, and provided knowledge of the research setting. She envisioned the purpose of the study as researching what happens when a computer information system is installed in a new setting.

The other three researchers were from the organizational behavior area. Two of them left the study at an early stage; Dennis Duchon remained. Their primary interest was in testing pre-existing theory in a new setting using questionnaires as the means of gathering quantitative data for statistical analysis. They viewed qualitative methods as a means for deriving quantitative measures, rather than as rich sources of research data useful for grounding theory and interpretation. Consequently, they approached the study differently from Kaplan in three ways: (1) they decided to research the impact of the computer information system on work in the laboratories; (2) they began the study intending to test theory through statistical analysis of quantitative survey data; and (3) they did not consider interviewing and observation as a means of data collection.

Methods

Research question

Each member of the original research team formulated different research questions. The three organizational behavior members viewed the new computer system as an opportunity to test existing theory concerning job characteristics and job satisfaction in a new setting. Although each of their research questions differed, these three wished to investigate how job characteristics varied in the clinical laboratories. Consequently, the study focused on laboratory technologists. The research questions for these three researchers were to investigate (1) the effects of the computer information system on job characteristics, (2) how departmental technology affected computer acceptance, (3) how leader-subordinate relationships affected computer acceptance, and (4) how job characteristics varied among laboratories and changed over time.

Working in the interpretive tradition, Kaplan expected to shuttle among questions, hypotheses, and data throughout the study. Because of the other team members' primary interest in the laboratory work, she also adopted this focus. Her research question was to identify and account for both similarities and differences among laboratory technologists and among laboratories in their responses to the computer information system.

Research design

Each research question reflected differences between a quantitative hypothesis-testing approach (where the effects of an intervention on dependent variables are statistically assessed) and a qualitative approach (where categories and theories are developed inductively from the data, generalizations are built from the ground up, and various interpretive schemes are tried and hypotheses are created and reformulated during the course of the study) (Glaser and Strauss, 1967; Van Maanen, 1983b). These differences resulted in a longitudinal multi-method case study that incorporated each team member's interests and skills. Because this initial case design included both qualitative and quantitative methods, both positivist and interpretive perspectives were incorporated in order to best link method to research question.³

³ Case study, an investigation using multiple sources of evidence to study a contemporary phenomenon within its real-life context (Bonoma, 1985; Yin, 1984), has been advanced for information systems research in order to understand the nature and complexity of the processes taking place (Benbasat, et al., 1987). Although they are often distinguished from quantitative or quasi-experimental research (Bonoma, 1985; George and McKeown, 1985), case studies may or may not exhibit the defining conditions of either quantitative or qualitative research (Campbell, 1984; Yin, 1984). There are a variety of recognized approaches to case study (Benbasat, et al., 1987; Bonoma, 1985).


The initial design was developed after considerable discussion and, as is common in qualitative research, was left open for modification and extension as necessary during the course of the study (Glaser and Strauss, 1967). Because research access to the site was not secured until shortly before the new system was installed, no pre-installation survey measures were taken. Consequently, as is often true in case studies, post-installation measures could not be compared against a pre-installation baseline (Cook and Campbell, 1979, p. 96). The first step in the design was to interview laboratory directors and selected hospital administrators prior to system installation. The second step was to observe in each laboratory after installation. The remaining steps were to administer questionnaires at several periods after the computer system was installed.

The first wave of questionnaire data gathering occurred when a new routine had been established after the computer system was installed. The second wave was planned for approximately one year later, when the initial changes caused by the computer system became part of normal procedure. Future waves would be at intervals depending on initial results.

Initially, regular participant observation at laboratory management meetings and staff meetings was not included in the design. This component was added when the meetings were instituted.

Data collection

Qualitative methods included open-ended interviewing, observation, participant observation, and analysis of responses to open-ended items on a survey questionnaire. Quantitative methods were employed to collect and analyze data from survey questionnaires. All participants were assured of confidentiality.

Although both qualitative and quantitative approaches were used, it quickly became apparent that they were viewed differently by research team members. Each team member conducted interviews and observations, but only the qualitatively-oriented member kept systematic field notes to be used for data analysis. Other team members viewed interviews and observations as providing "background" rather than "data." Consequently, Kaplan's field notes from each of these activities were used for analysis.

Interviews and Observations

The director of the laboratories, the chairman of the department, and the administrator of the hospital were interviewed early in the study.

Teams of two or three researchers also interviewed the individual laboratory directors and some chief supervisory personnel during the week prior to computer system installation. Kaplan was present at all but two of these interviews. The purpose of the interviews was threefold: (1) to determine what interviewees expected the potential effects of the computer system to be on patient care, laboratory operations, and hospital operations; (2) to inquire about possible measures and focus of the study; and (3) to generate questionnaire items for a survey of laboratory technologists.

During the month prior to administering the survey, individual researchers were present in the laboratories to observe and talk with laboratory staff while they worked. These observations were intended to influence questionnaire item development.

Starting three months after the computer information system was installed, Kaplan was urged by one of the laboratory directors to attend weekly meetings where directors and head supervisors discussed laboratory management problems. These meetings were instituted as a result of system installation, and they became a regular feature of laboratory management even after system problems ceased to be discussed. Kaplan attended these meetings regularly throughout the study as an observer and occasional participant, and was a participant observer at other departmental meetings.

Survey Questionnaire

A survey instrument was developed for laboratory technologists, the primary direct users of the computer information system. It was composed of three parts. The first part consisted of measures adapted from standard instruments that addressed job characteristics (Hackman and Oldham, 1976), role conflict and ambiguity (House and Rizzo, 1972), departmental technology (Withey, et al., 1983), and leader-member relationships (Dansereau, et al., 1975).

The second part of the questionnaire used Likert-scale measures of expectations, concerns, and perceived changes that may be related to the use of the computer system. These measures were developed by analyzing the interviews and observations to derive categories for questions that focused on the primary expectations expressed by interviewees, attendees at meetings, and the laboratory technologists who were observed. Additional questions concerning expectations were adapted from Kjerulff, et al. (1982).

The survey instrument concluded with four open-ended questions that assessed changes caused by the computer system and elicited suggestions for improved system use. These questions were also derived from the observations and interviews. They were intended to serve two purposes. The first was to ensure that important issues were addressed even if they had not been included in scaled-response questions. The second was to elicit information about impacts for which measures were difficult to develop.

All questionnaire items were pretested on a sample of laboratory personnel selected by head supervisors. After revision, the questionnaire was administered to all 248 members of the laboratory staff seven months after computer system installation. The staff knew about the study because of the prior laboratory observations and announcements at meetings. Each survey was accompanied by an explanatory letter. A research team member also explained the study during weekly staff meetings when the questionnaires were distributed. In some laboratories, this meeting was devoted to answering the questionnaire. In others, staff members were allowed to complete the survey during work hours provided that it not interfere with their job functions.

A modified version of the questionnaire was distributed to all laboratory technologists for the second wave of data collection beginning nearly a year after the first. No further surveys were conducted.

Sample

Just before the system was installed, 11 interviews were conducted with 20 interviewees representing all the laboratories. These interviewees included laboratory directors at all levels, head supervisors, and an administrator of one unit of the hospital.

All 248 members of the laboratory staff were surveyed starting seven months after system implementation. Data from all 119 completed questionnaires (a 48% response rate) were analyzed. Only seven of the respondents had been interviewed.

Most respondents were technologists who had college degrees, worked first shift, and had not worked previously in a laboratory with a computer information system. As is typical of laboratory technologists, almost all were women.

Analysis and Results

Data are presented from interviews immediately prior to installation and from the first wave of survey questionnaires seven months later. The quantitative data were analyzed using a standard statistical software package. Interview notes and responses to open-ended questions on the questionnaire were analyzed by the constant comparative method (Glaser and Strauss, 1967). Using this method, categories reflecting computer system issues important to laboratory directors and technologists were derived systematically.

Open-ended questions

Kaplan first analyzed open-ended questions on the questionnaire. Three themes predominated in the answers: (1) changes in technologists' work load, (2) improvements in results reporting, and (3) the need for physicians and nurses to use computer system terminals rather than telephones for results inquiry. Technologists expressed a general sense that their clerical duties and paperwork had increased and productivity had suffered. However, they credited the computer system with making laboratory test results available more quickly. They said that results reports were also more complete, more accurate, easier to read, and provided a picture of "the whole patient." Even though phone calls interrupted laboratory work, they felt that doctors and nurses expected to get test results by calling the laboratories rather than by using the computer system. In addition, respondents sensed they were being blamed by others in the medical center for problems caused by the computer system.

When responses were grouped by laboratory, marked differences were evident among some of the laboratories. Laboratories differed in their assessment of changes in their workload, number of telephone calls, improvement in reporting, and attitudes expressed toward the computer system.

Scaled-response questions

Responses to the questionnaire items pertaining to job characteristics and to the scaled-response items assessing computer expectations and concerns were analyzed next by two researchers who intentionally remained unaware of the findings on open-ended questions. Having already analyzed the responses to open-ended questions, Kaplan assisted them in an initial Q-sort of computer system questionnaire items and later interpretation of a factor analysis of these items. A fourth researcher had already left the study team. Another of the original team members left the study during data analysis, leaving Duchon and Kaplan to interpret the results of Duchon's statistical analysis.

A factor analysis of job items provided evidence of construct validity for the measures. Four factors were extracted: skill variety, task identity, autonomy, and feedback. These factors comprise a four-factor model of the core job dimensions (Birnbaum, et al., 1986; Ferris and Gilmore, 1985), and are also used to assess job complexity (Stone, 1974). Overall, no job characteristic differences due to environmental or individual factors were found. Respondents reported the same levels of job characteristics regardless of age, gender, job experience, etc.

Five computer system variables were extracted: external communications, service outcomes, personal intentions, personal hassles, and increased blame. Reliability for these five factors ranged between .53 and .87. Data on these variables indicated that respondents generally were positive about the computer system. They reported that the system had improved relations and aided communication between staff in the laboratories and the rest of the medical center (external communications), and that results reporting and service was better (service outcomes). Respondents were very positive about their own continued use of the computer system (personal intentions). They did not think that irritants of their jobs, such as the number of phone calls and the amount of work (personal hassles), had increased, or that laboratory staff was blamed more and cooperated with less by physicians and nurses (increased blame). There were no statistically significant correlations between job characteristic measures and computer system measures.
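To make this scale-construction step concrete, the following is a minimal sketch of how such an analysis might be run today. It is illustrative only: the original study used a standard statistical package of the era, and the DataFrame layout, item column names, and item-to-scale groupings below are hypothetical stand-ins for the actual questionnaire items.

import pandas as pd
from sklearn.decomposition import FactorAnalysis

def cronbach_alpha(items: pd.DataFrame) -> float:
    # Internal-consistency reliability:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score).
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Hypothetical mapping of Likert items to the five computer system variables.
scales = {
    "external_communications": ["cs01", "cs02", "cs03"],
    "service_outcomes": ["cs04", "cs05", "cs06"],
    "personal_intentions": ["cs07", "cs08"],
    "personal_hassles": ["cs09", "cs10", "cs11"],
    "increased_blame": ["cs12", "cs13"],
}

def analyze(responses: pd.DataFrame) -> pd.DataFrame:
    # Exploratory factor analysis over all computer system items
    # (five factors, mirroring the five variables extracted in the study).
    item_cols = [c for cols in scales.values() for c in cols]
    fa = FactorAnalysis(n_components=5).fit(responses[item_cols])
    loadings = pd.DataFrame(fa.components_.T, index=item_cols)
    print(loadings.round(2))
    # Reliability per scale (the study reported alphas between .53 and .87),
    # then mean scale scores per respondent for later analysis.
    for name, cols in scales.items():
        print(f"{name}: alpha = {cronbach_alpha(responses[cols]):.2f}")
    return pd.DataFrame({n: responses[c].mean(axis=1) for n, c in scales.items()})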

At this point, nothing reportable had been found in the quantitative data. This was surprising because the computer system variables were similar to themes identified in responses to the open-ended questions, where some laboratories differed markedly in their responses. Consequently, Kaplan continued to seek explanations for the differences among laboratories that were evident in the qualitative data.

Interviews

Next, Kaplan again analyzed the interviews with laboratory directors, this time to determine the expectations of the directors prior to system implementation. She thought that perhaps different expectations among directors could have contributed to different responses within the laboratories. Different expectations were not found. The analysis indicated that prior to implementation, directors generally agreed that there would be more work for laboratory technologists, but that technologists' jobs would not be changed by the computer system.

This assessment was made with full knowledge that technologists would have to enter test results and some test orders, and that their paperwork and billing duties were expected to decrease. Interviewees also expected that test results would be available more quickly. They thought that reports would have fewer transcription errors, would be more legible, and would provide useful cumulative information and patient profiles. They expected these improvements to reduce the number of unnecessary laboratory tests and stat (i.e., emergency) and duplicate test orders, and anticipated that the number of telephone inquiries concerning results would decrease.

Developing an Interpretative Theoretical Model

The five computer system variables that were developed from the questionnaire items reflected themes very similar to those found in the qualitative data from interviews and responses to open-ended questionnaires: changes in workload and changes in results reporting. This congruence of themes from independent sources of data strengthened confidence in their validity.

The analysis of qualitative data from the questionnaires suggested important differences among laboratories, even in the absence of statistical verification, because the differences were so striking. Knowledge obtained from observations in the laboratories and at meetings, as well as from comments made to her by individual laboratory technologists and directors, strengthened Kaplan's conviction that these differences were important. It remained to determine the nature of the differences and to explain the lack of reportable statistical results.

An impression gained from the qualitative data suggested an interpretation. Kaplan had noticed that laboratory technologists seemed to emphasize either the increased work or the improved results reporting in their answers to the open-ended questions. The repetition of these themes in all the data sources reinforced their importance. This repetition suggested that there were two groups of respondents corresponding to these two themes. The finding that directors did not expect technologists' jobs to change raised the question of what constituted a technologist's job. It seemed that directors and technologists might have different conceptions of the technologist's job and that these differences were reflected in their assessments of the computer system. Individual laboratory technologists might also differ in their views of their jobs, just as they differed in what they emphasized about the computer system in their comments on the questionnaires.

These insights led to a model depicting two groups of laboratory technologists. According to this model, one group saw their jobs in terms of producing results reports, the other in terms of the laboratory bench work necessary to produce those results reports. As shown in Figure 1, the group who saw its jobs in terms of bench work was oriented toward the process of producing laboratory results, whereas the group who viewed its work in terms of reporting results was oriented toward the outcomes of laboratory work; the members of this group saw themselves as providing a service.

The ways in which the computer affected the production work in the laboratories were assessed in those questionnaire items comprising personal hassles and increased blame, e.g., effects on paperwork and telephone calls. Thus, the group who saw its jobs in terms of the bench work (i.e., the process of producing laboratory test results) would have responded to the computer system according to how it had affected what was assessed in these items. The product-oriented group of respondents who saw its jobs in terms of the service provided (i.e., the product, rather than the process, of laboratory work) would have assessed the computer system in terms reflected in its responses to external communications and service outcomes items, e.g., improved results reporting.

Orientation   View of Job        Responses to Computer System   Computer System Variables

Process       Bench Work         Increased Work Load            Hassles, Blame
Product       Results, Service   Improved Results Reporting     Communications, Service

Orientation = (Communications + Service) - (Hassles + Blame)

Figure 1. A Model Depicting Different Orientations to Work and the Computer in Clinical Laboratories



This interpretation indicated a reason why job characteristics measures did not depict differences among laboratories and why there was no correlation between job characteristics and computer system variables. The kinds of differences in job orientation depicted in the model would not have been measured by job characteristic measures.

After this model was proposed, two new variables were created to measure whether technologists' responses differed according to the computer system's impact on process versus product aspects of their jobs. As shown in Figure 1, one variable combined scores on external communications and service outcomes, and the other combined scores on personal hassles and increased blame. The variable personal intentions was omitted because there was no theoretical basis for including it; personal intentions did not assess the interaction between specific aspects of the computer system and the job. Moreover, this variable was not a good discriminator between laboratories or individuals because individual respondents' scores were all high.

There was a significant negative correlation between these two new variables, thus indicating that respondents tended to have high scores on one variable and low scores on the other, i.e., they were either product- or process-oriented.

An orientation score for each respondent was then computed by subtracting the sum of that person's scores on personal hassles and increased blame from the sum of his or her scores on external communications and service outcomes.

When the orientation score was regressed on laboratories, statistically significant differences in orientation were found across laboratories. Thus, some laboratories, like some technologists, were process-oriented while others were product-oriented. Moreover, respondents from the laboratories rating the strongest process orientation expressed the most hostility on the open-ended questions from the survey, whereas respondents from laboratories with the strongest product orientation expressed strong satisfaction with the computer system.
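As a sketch of how the orientation score and the across-laboratory comparison could be computed, the following continues the hypothetical column names of the previous sketch; the original study regressed orientation on laboratory membership with its own statistical package, and the one-way test below is an illustrative stand-in, with the 'laboratory' column an assumed identifier of each respondent's unit.

import pandas as pd
from scipy import stats

def orientation_analysis(df: pd.DataFrame) -> pd.Series:
    # Figure 1: Orientation = (Communications + Service) - (Hassles + Blame).
    product_side = df["external_communications"] + df["service_outcomes"]
    process_side = df["personal_hassles"] + df["increased_blame"]
    orientation = product_side - process_side

    # The two composites were negatively correlated: respondents tended to
    # score high on one side and low on the other.
    r, p = stats.pearsonr(product_side, process_side)
    print(f"product vs. process composites: r = {r:.2f}, p = {p:.4f}")

    # One-way comparison of mean orientation across laboratories.
    groups = [g.values for _, g in orientation.groupby(df["laboratory"])]
    f_stat, p_val = stats.f_oneway(*groups)
    print(f"orientation across laboratories: F = {f_stat:.2f}, p = {p_val:.4f}")
    return orientation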

Thus, our results suggested that the interpretation was correct. Discussion with the laboratory director about the intermediate and final results and about the research papers produced from the study supported the interpretation.

Discussion

This study illustrates the difficulties as well as the strengths that occurred when research perspectives from different disciplines were combined. Although both authors agree that this collaboration enriched the study as well as their own understanding of research, it was rife with frustrations. Neither author initially realized the extent to which differing values, assumptions, and vocabularies would interfere with the project. Continued interaction was necessary to recognize that there were differences even in understanding the same words. Persistent effort was needed to identify and resolve these differences. A key incident in the study stemmed from these differences. Because the incident is illustrative of the difficulties as well as the enrichment caused by the different perspectives of research team members, we recount it here.

After the initial statistical analysis of data from the scaled-response questions, Duchon thought that there were no statistical results worth reporting. Kaplan thought he meant that there were no statistically significant differences among laboratories in their reaction to the computer system. However, she did not agree with these results because they did not fit with her observations and analysis. Duchon remained convinced that there were no results worth pursuing. Consequently, Kaplan began to analyze the remaining data from the interviews. This analysis strengthened her convictions that the qualitative data did indicate patterns worth investigating and that Duchon had to be convinced of this. That determination led to the development of the interpretive theoretical model.

The turning point in the study, and in the authors' collaboration, happened when Duchon reanalyzed the quantitative data as suggested by Kaplan. This new analysis supported her interpretation. When the initial analysis was repeated, there were statistically significant differences among laboratories for the computer system variables. We were not able to determine the reason for the apparent lack of reportable differences during the first statistical analysis. Because of continued difficulties in communication, misunderstandings, in retrospect, were not surprising.

Some of the difficulties we experienced have been described so that others who participate on similar research teams may shorten the learning period and reduce the communication problems. It should be clear from this account that, as is common in research that includes qualitative approaches, the process is messy. Impressions, interpretations, propositions, and hypotheses were developed over the course of the study, a process that hardly fits the positivist ideal of objective collection of neutral or purely descriptive "facts" (Van Maanen, 1983b). This messiness should not dismay those who experience it.

Despite the methodological and communications problems, the collaboration was productive and friendly. Our tenacity in holding to our initial independent analyses of the different data — though a problem at the time — and the increased respect each of us developed for the other's approach, in fact, were positive aspects of our research. As has happened in other projects (Trend, 1979), a strong determination to accommodate each approach and to reconcile apparently conflicting data resulted in an interpretation that synthesized the evidence. We cannot state too strongly that the advantages of our collaboration outweighed the difficulties.

Conclusions

This article describes how qualitative and quantitative approaches were combined in a case study of a new information system. It illustrates four methodological points: (1) the value of combining qualitative and quantitative methods; (2) the need for context-specific measures of job characteristics; (3) the importance of process measures when evaluating information systems; and (4) the need to explore the necessary relationships between a computer system and the perceptions of its users.

Combining qualitative and quantitative methods proved especially valuable. First, the apparent inconsistency of results between the initial quantitative analysis and the qualitative data required exploration. The initial statistical analysis revealed only that certain well-recognized job characteristic measures were not capturing the differences in the sample and that the correlations between job characteristics and computer system variables were not significant. However, systematically different responses to the computer system were present in the qualitative data. Further analysis to resolve the discrepancy led to the use of new measures developed from the quantitative questionnaire data that captured job-related responses to the computer system and that supported conclusions drawn from the qualitative data.

Thus, triangulation of data from different sources can alert researchers to potential analytical errors and omissions. Mixing methods can also lead to new insights and modes of analysis that are unlikely to occur if one method is used alone. In the absence of qualitative data, the study would have concluded that there were no reportable statistically significant findings. However, on the strength of the qualitative data, a theoretical interpretative model was developed to drive further statistical analysis. The inclusion of qualitative data resulted in the generation of grounded theory.

The results also suggest the value of investigating specifically how a computer system affects users' jobs. In this study, as in others, no correlation was found between standard job characteristic measures and variables measuring responses to the computer system. Nevertheless, these responses were related to differences in job orientation. Studies that independently assess characteristics and attitudes ignore the specific impacts of a computer system on work.

Standard job characteristic measures do not necessarily assess job-specific factors or capture the different job orientations held by workers with "the same job." Instead of sole reliance on these measures, instruments need to include context-specific measures of how a computer system supports work, as conceptualized by the worker. In this study, measures assessing these aspects were derived from knowledge of the setting gained by field experience, again suggesting the value of combining qualitative and quantitative methods.

The study further suggests the need to move beyond outcome evaluations to investigating how a computer system affects processes. By extending research to examine the specific effects of individual computer information systems in particular contexts, our general understanding of what affects the acceptance and use of computer information systems can be improved.

Lastly, there is a growing amount of literature that assesses the impacts of computer information systems. This study was intended, by part of the research team, as an assessment of the impacts of a computer information system on work. However, impact studies consider only one side of the important interaction between computer information system users and the computer information system: the effects of the system on users, the organization, or society. Research designs could just as well consider the impacts of users, the organization, or society on the computer information system. Sorting out the direction of causality is a difficult, if not meaningless, task that is made all the more difficult by the multitude of confounding factors common to any field setting. It also may be impossible to do because the design does not account for changes due to the interrelationships between the "dependent" and "independent" variables.

For example, in this study, the job orientation of the users could have mediated their response to the computer information system. Alternatively, the system itself could have caused the differences in job orientation. Rather than formulate the analysis in either of these ways, it seemed more fruitful to take an interactionist approach and consider the interrelationships between users' job orientations and their responses to the system. "Impacts" implies unidirectional causality, whereas, as has been found in other studies, investigations of interactions can be more productive and meaningful.

Despite the normative nature of these methodological points, the most important conclusion is the need for a variety of approaches to the study of information systems. For the same reasons that combining methods can be valuable in a particular study, a variety of approaches and perspectives can be valuable in the discipline as a whole. No one method can provide the richness that information systems, as a discipline, needs for further advancement.

Acknowledgements

Joseph A. Maxwell of the Harvard Graduate School of Education provided invaluable theoretical assistance and editorial advice. We are also grateful to the anonymous reviewers whose comments resulted in further clarifying and strengthening the paper.

References

Argyris, C. Action Science: Concepts, Methods, and Skiiis for Research and Intervention, Jossey-Bass, San Francisco, CA, 1985, Bakos, J,Y, "Dependent Variables for the Study

of Firm and Industry-Level impacts of Infor-

mation Technology," Proceedings of the Eighth International Conference on Informa- tion Systems, Pittsburgh, PA, December 6-9, 1987, pp, 10-23.

Bariff, M,L, and Ginzberg, M.J, "MIS and the Be- havioral Sciences: Research Patterns and Pre- scriptions," Dafa Base (14:1), Fall 1982, pp.

19-26,

Barley, S,R. "Technology as an Occasion for Structuring: Evidence from Observations of CT Scanners and the Social Order of Radiology Departments," Administrative Science Quar- teriy (31), March 1986, pp, 78-108,

Benbasat, i. "An Analysis of Research Method- ologies," in The Information Systems Re- search Challenge, F.W, McFarlan (ed,). Har- vard Business School Press, Boston, MA, 1984, pp. 47-85,

Benbasat, I,, Goldstein, D,K, and i\/lead, M, "The Case Research Strategy in Studies of infor- mation Systems," MIS Quarterly (11:3), Sep- tember 1987, pp, 369-386.

Birnbaum, P.H,, Farh, J.L, and Wong, G,Y,Y,

"The Job Characteristics Model in Hong Kong," Journal of Appiied Psychoiogy (71:4), November 1986, pp, 598-605,

Boland, R.J. "The Process and Product of System Design," Management Science (24:9), May 1978, pp. 887-898.

Bonoma, T.V. "Case Research in Marketing: Opportunities, Problems, and a Process," Journal of Marketing Research (22:2), May 1985, pp. 199-208.

Bredo, E. and Feinberg, W. "Part Two: The Interpretive Approach to Social and Educational Research," in Knowledge and Values in Social and Educational Research, E. Bredo and W. Feinberg (eds.), Temple University Press, Philadelphia, PA, 1982a, pp. 115-128.

Bredo, E. and Feinberg, W. (eds.). Knowledge and Values in Social and Educational Research, Temple University Press, Philadelphia, PA, 1982b.

Brooks, R.C., Casey, I.J. and Blackmon, P.W. Jr. Evaluation of the Air Force Clinical Laboratory Automation Systems (AFCLAS) at Wright-Patterson USAF Medical Center, Vol. I: Summary, HDSN-77-4 (NTIS no. AD-A043 664); Vol. II: Analysis, HDSN-77-5 (NTIS no. AD-A043 665), Analytic Services, Arlington, VA, 1977.

Campbell, D.T. "Foreword," in Case Study Research: Design and Methods, R.K. Yin (ed.), Sage Publications, Beverly Hills, CA, 1984, pp. 7-9.


Cook, T.D. and Campbell, D.T. Quasi-Experimentation: Design and Analysis Issues for Field Settings, Houghton Mifflin, Boston, MA, 1979.

Cook, T.D. and Reichardt, C.S. (eds.). Qualitative and Quantitative Methods in Evaluation Research, Sage Publications, Beverly Hills, CA, 1979.

Dansereau, F. Jr., Graen, G. and Haga, W.J. "A Vertical-Dyad Linkage Approach to Leadership within Formal Organizations: A Longitudinal Investigation of the Role Making Process," Organizational Behavior and Human Performance (13:1), February 1975, pp. 46-78.

Danziger, J.N. "Social Science and the Social Impacts of Computer Technology," Social Science Quarterly (66:1), March 1985, pp. 3-21.

Dickson, G.W., Benbasat, I. and King, W.R. "The MIS Area: Problems, Challenges, and Opportunities," Data Base (14:1), Fall 1982, pp. 7-12.

Downey, H.K. and Ireland, R.D. "Quantitative versus Qualitative: The Case of Environmental Assessment in Organizational Studies," in Qualitative Methodology, J. Van Maanen (ed.), Sage Publications, Beverly Hills, CA, 1983, pp. 179-190.

Ferris, G.R. and Gilmore, D.C. "A Methodological Note on Job Complexity Indexes," Journal of Applied Psychology (70:1), February 1985, pp. 225-227.

Flagle, C.D. "Operations Research with Hospital Computer Systems," in Hospital Computer Systems, M.F. Collen (ed.), John Wiley and Sons, New York, NY, 1974, pp. 418-430.

Fok, L.Y., Kumar, K. and Wood-Harper, T. "Methodologies for Socio-Technical-Systems (STS) Development: A Comparison," Proceedings of the Eighth International Conference on Information Systems, Pittsburgh, PA, December 6-9, 1987, pp. 319-334.

Franz, C.R. and Robey, D. "An Investigation of User-Led System Design: Rational and Political Perspectives," Communications of the ACM (27:12), December 1984, pp. 1202-1209.

George, A.L. and McKeown, T.J. "Case Studies and Theories of Organizational Decision Making," in Advances in Information Processing in Organizations (2), JAI Press, Greenwich, CT, 1985, pp. 21-58.

Glaser, B.G. and Strauss, A.L. The Discovery of Grounded Theory: Strategies for Qualitative Research, Aldine, New York, NY, 1967.

Goldstein, D., Markus, M.L., Rosen, M. and Swanson, E.B. "Use of Qualitative Methods in MIS Research," Proceedings of the Seventh International Conference on Information Systems, San Diego, CA, December 15-17, 1986, pp. 338-339.

Goodhue, D. "IS Attitudes: Towards Theoretical Definition and Measurement Clarity," Proceedings of the Seventh International Conference on Information Systems, San Diego, CA, December 15-17, 1986, pp. 181-194.

Hackman, J.R. and Oldham, G.R. "Motivation Through the Design of Work: Test of a Theory," Organizational Behavior and Human Performance (16:2), August 1976, pp. 250-279.

Hirschheim, R., Klein, H. and Newman, M. "A Social Action Perspective of Information Systems Development," Proceedings of the Eighth International Conference on Information Systems, Pittsburgh, PA, December 6-9, 1987, pp. 45-57.

House, R.J. and Rizzo, J.R. "Toward the Measurement of Organizational Practice: Scale Development and Validation," Journal of Applied Psychology (56:6), October 1972, pp. 388-396.

Ives, B. and Olson, M.H. "User Involvement and MIS Success: A Review of Research," Management Science (30:5), May 1984, pp. 586-603.

Jarvenpaa, S.L., Dickson, G.W. and DeSanctis, G. "Methodological Issues in Experimental IS Research: Experiences and Recommendations," MIS Quarterly (9:2), June 1985, pp. 141-156.

Jick, T.D. "Mixing Qualitative and Quantitative Methods: Triangulation in Action," in Qualitative Methodology, J. Van Maanen (ed.), Sage Publications, Beverly Hills, CA, 1983, pp. 135-148.

Kaplan, B. "impact of a Clinical Laboratory Com- puter Systems: Users' Perceptions," in Medinfo 86: Fitth Worid Congress on Medi- cal Informatics, R. Salamon, B. Blum and M.J.

Jorgensen (eds.), North-Holland, Amsterdam, 1986, pp. 1057-1061.

Kaplan, B. "Initial Impact of a Clinical Labora- tory Computer System: Themes Common to Expectations and Actualities," Journai of Medi- cal Systems (11:2/3), June 1987, pp, 137- 147,

Kaplan, B. and Duchon, D. "Job-Related Responses to a Clinical Laboratory Computer Information System Seven Months Post Implementation," in Social, Ergonomic and Stress Aspects of Work with Computers, G. Salvendy, S.L. Sauter and J.J. Hurrell, Jr. (eds.), Elsevier, Amsterdam, 1987a, pp. 17-24.

Kaplan, B. and Duchon, D. "Employee Acceptance of a Computer Information System: The Role of Work Orientation," Working Paper IS-1988-004A, Department of Quantitative Analysis and Information Systems, University of Cincinnati, Cincinnati, OH, April 1987b.

Kaplan, B. and Duchon, D. "A Qualitative and Quantitative Investigation of a Computer System's Impact on Work in Clinical Laboratories," Working Paper IS-1987-001, Department of Quantitative Analysis and Information Systems, University of Cincinnati, Cincinnati, OH, December 1987c.

Kauber, P. "What's Wrong With a Science of MIS?" Proceedings of the 1986 Decision Science Institute, Honolulu, HI, November 23-25, 1986, pp. 572-574.

Kemp, N.J. and Clegg, C.W. "Information Technology and Job Design: A Case Study on Computerized Numerically Controlled Machine Tool Workers," Behavior and Information Technology (6:2), 1987, pp. 109-124.

Kjerulff, K.H., Counte, M.A., Salloway, J.C. and Campbell, B.C. "Predicting Employee Adaptation to the Implementation of a Medical Information System," Proceedings of the Sixth Annual Symposium on Computer Applications in Medical Care, IEEE Computer Society Press, Silver Spring, MD, 1982, pp. 392-397.

Klein, H. "The Critical Social Theory Perspective on Information Systems Development," Proceedings of the 1986 Decision Science Institute, Honolulu, HI, November 23-25, 1986, pp. 575-577.

Kling, R. "Social Analyses of Computing: Theo- retical Perspectives in Recent Empirical Re- search," Computing Surveys (12:1), March 1980, pp. 61-110.

Kling, R. and Scacchi, W. "The Web of Computing: Computer Technology as Social Organization," in Advances in Computers (21), M.C. Yovits (ed.), Academic Press, New York, NY, 1982, pp. 2-90.

Lewis, J.W. "Clinical Laboratory Information Systems," Proceedings of the IEEE (67:9), September 1979, pp. 1229-1300.

Light, R.J. and Pillemer, D.B. "Numbers and Narrative: Combining Their Strengths in Research Reviews," Harvard Educational Review (52:1), February 1982, pp. 1-26.

Lincoln, T.L. and Korpman, R.A. "Computers, Health Care, and Medical Information Science," Science (210:4467), October 17, 1980, pp. 257-263.

Lincoln, Y.S. and Guba, E.G. Naturalistic Inquiry, Sage Publications, Beverly Hills, CA, 1985.

Lyytinen, K. "Different Perspectives on Information Systems: Problems and Solutions," ACM Computing Surveys (19:1), March 1987, pp. 5-46.

Manicas, P.T. and Secord, P.F. "Implications for Psychology of the New Philosophy of Science," American Psychologist (38:4), April 1983, pp. 399-413.

Markus, M.L. "Power, Politics, and MIS Implementation," Communications of the ACM (26:6), June 1983, pp. 430-444.

Markus, M.L. and Robey, D. "Information Technology and Organizational Change: Causal Structure in Theory and Research," Management Science (34:5), May 1988, pp. 583-598.

Maxwell, J.A., Bashook, P.G. and Sandlow, L.J. "Combining Ethnographic and Experimental Methods in Educational Research: A Case Study," in Educational Evaluation: Ethnography in Theory, Practice, and Politics, D.M. Fetterman and M.A. Pitman (eds.), Sage Publications, Beverly Hills, CA, 1986, pp. 121-143.

Meehl, P.E. "Theoretical Risks and Tabular Asterisks: Sir Karl, Sir Ronald, and the Slow Progress of Soft Psychology," Journal of Consulting and Clinical Psychology (46:4), August 1978, pp. 806-834.

Mendelson, H., Ariav, G., Moore, J. and DeSanctis, G. "Competing Reference Disciplines for MIS Research," Proceedings of the Eighth International Conference on Information Systems, Pittsburgh, PA, December 6-9, 1987, pp. 455-458.

Meyers, W.R. The Evaluation Enterprise, Jossey-Bass, San Francisco, CA, 1981.

Miles, M.B. and Huberman, A.M. Qualitative Data Analysis: A Sourcebook of New Methods, Sage Publications, Beverly Hills, CA, 1984.

Millman, Z. and Hartwick, J. "The Impact of Automated Office Systems on Middle Managers and Their Work," MIS Quarterly (11:4), December 1987, pp. 479-490.

Mintzberg, H. The Nature of Managerial Work, Harper and Row, New York, NY, 1973.

Mumford, E. "From Bank Teller to Office Worker: The Pursuit of Systems Designed for People in Practice and Research," Proceedings of the Sixth International Conference on Information Systems, Indianapolis, IN, December 16-18, 1985, pp. 249-258.

Mumford, E. and Henshall, D. A Participative Approach to Computer Systems Design: A Case Study of the Introduction of a New Computer System, John Wiley and Sons, New York, NY, 1979.

Mumford, E., Hirschheim, R., Fitzgerald, G. and Wood-Harper, T. Research Methods in Information Systems, North-Holland, Amsterdam, 1985.

Nicol, R. and Smith, P. "A Survey of the State of the Art of Clinical Biochemistry Laboratory Computerization," International Journal of Bio-Medical Computing (18:2), March 1986, pp. 135-144.

Paplanus, S.H. "Clinical Pathology," in Computer Applications in Clinical Practice: An Overview, D. Levinson (ed.), Macmillan, New York, NY, 1985, pp. 118-122.

Patton, M.Q. Utilization-Focused Evaluation, Sage Publications, Beverly Hills, CA, 1978.

Rockart, J.F. "Conclusion to Part I," in The Information Systems Research Challenge, F.W. McFarlan (ed.), Harvard Business School Press, Boston, MA, 1984, pp. 97-104.

Stone, E.F. "The Moderating Effect of Work-Related Values on the Core Job-Scope Satisfaction Relationship," unpublished doctoral dissertation, University of California, Irvine, CA, 1974.

Trend, M.G. "On the Reconciliation of Qualitative and Quantitative Analyses: A Case Study," in Qualitative and Quantitative Methods in Evaluation Research, T.D. Cook and C.S. Reichardt (eds.), Sage Publications, Beverly Hills, CA, 1979, pp. 68-86.

Van Maanen, J. "Reclaiming Qualitative Meth- ods for Qrganizational Research," in Qualita- tive Methodology, J, Van Maanen (ed,). Sage Publications, Beverly Hills, GA, 1983a, pp, 9- 18.

Van Maanen, J. "Epilog: Qualitative Methods Re- claimed," in Qualitative Methodoiogy, J, Van Maanen (ed,). Sage Publications, Beverly Hills, GA, 1983b, pp. 247-268.

Van Maanen, J. (ed.). Qualitative Methodology, Sage Publications, Beverly Hills, CA, 1983c.

Van Maanen, J., Dabbs, J.M. Jr. and Faulkner, R.R. Varieties of Qualitative Research, Sage Publications, Beverly Hills, CA, 1982.

Vitalari, N.P. "The Need for Longitudinal Designs in the Study of Computing Environments," in Research Methods in Information Systems, E. Mumford, R. Hirschheim, G. Fitzgerald and T. Wood-Harper (eds.), North-Holland, Amsterdam, 1985, pp. 243-265.

Weick, K.E. "Theoretical Assumptions and Research Methodology Selection," and ensuing discussion, in The Information Systems Research Challenge, F.W. McFarlan (ed.), Harvard Business School Press, Boston, MA, 1984, pp. 111-133.

Withey, M., Daft, R.L. and Cooper, W.H. "Measures of Perrow's Work Unit Technology: An Empirical Assessment and a New Scale," Academy of Management Journal (26:1), March 1983, pp. 45-63.

Yin, R.K. Case Study Research: Design and Methods, Sage Publications, Beverly Hills, CA, 1984.

Zuboff, S. "New Worlds of Gomputer-Mediated Work," Harvard Business Review (60:5), Sep- tember-Qctober 1982, pp. 142-152,

Zuboff, S. In the Age of the Smart Machine: The Future of Work and Power, Basic Books, New York, NY, 1988.

About the Authors

Bonnie Kaplan is an assistant professor of information systems at the University of Cincinnati's College of Business Administration. She is also an adjunct assistant professor of clinical pathology and laboratory medicine. Dr. Kaplan received her Ph.D. in history from the University of Chicago. She has had extensive practical experience in information systems development at major academic medical centers. Dr. Kaplan's research interests include behavioral and policy issues in information systems, implementation of technological innovations, information systems in medicine and health care, and the history and sociology of computing. Her publications have appeared in Journal of Medical Systems, International Journal of Technology Assessment in Health Care, Journal of Health and Human Resources Administration, Journal of Clinical Engineering, and volumes on human-computer interaction and on medical computing.

Dennis Duchon is an assistant professor of organizational behavior in the College of Business at the University of Texas at San Antonio. Before joining the faculty there, he was an assistant professor of organizational behavior at the University of Cincinnati. He received a Ph.D. in organizational behavior from the University of Houston. He has worked as a manager both in the United States and abroad. Dr. Duchon's research interests include behavioral decision making and the implementation of new technologies. He has published in the Journal of Applied Psychology, IEEE Transactions on Engineering Management, and Decision Sciences.

