Alternative measures of service quality: a review
Riadh Ladhari
Department of Business Administration, University of Moncton, Moncton, Canada
Abstract
Purpose – The purpose of this paper is to identify and discuss the key conceptual and empirical issues that should be considered in the development of alternative industry-specific measurement scales of service quality (other than SERVQUAL).
Design/methodology/approach – A total of 30 studies are selected from two well-known databases: Science Direct and ABI Inform. These studies are subjected to a comprehensive, in-depth content analysis and a theoretical discussion of the key conceptual and empirical issues to be considered in the development of service-quality measurement instruments.
Findings – The study identifies deficiencies in some of the alternative service-quality measures; however, the identified deficiencies do not invalidate the essential usefulness of the scales. The study makes constructive suggestions for the development of future scales.
Originality/value – This is the first work to describe and contrast a large number of service-quality measurement models other than the well-known SERVQUAL instrument. The findings are of value to academics and practitioners alike.
Keywords: Customer services quality, Psychometric tests, SERVQUAL
Paper type: General review
1. Introduction
A great deal of service-quality research in recent decades has been devoted to the development of measures of service quality. In particular, the SERVQUAL instrument (Parasuraman et al., 1988) has been widely applied and valued by academics and practicing managers (Buttle, 1996). However, several studies have identified potential difficulties with the use of SERVQUAL (Carman, 1990; Cronin and Taylor, 1992; Asubonteng et al., 1996; Buttle, 1996; Van Dyke et al., 1997; Llosa et al., 1998). These difficulties have related to the use of so-called “difference scores”, the ambiguity of the definition of “consumer expectations”, the stability of the SERVQUAL scale over time, and the dimensionality of the instrument. As a result of these criticisms, questions have been raised regarding the use of SERVQUAL as a generic measure of service quality and whether industry-specific measures of service quality should instead be developed.
Over the past 15 years or so, at least 30 industry-specific scales of service quality have been published in the literature on service quality – including (among others) scales suggested by Saleh and Ryan (1991), Vandamme and Leunis (1993), Jabnoun and Khalifa (2005), Akbaba (2006), and Caro and Garcia (2007). However, no study has attempted to review and integrate this plethora of research on service-quality measurement. The present study addresses this gap in the literature. Its purpose is to explore some of the pertinent conceptual and empirical issues involved in the development of industry-specific measures of service quality.
www.emeraldinsight.com/0960-4529.htm
Alternative measures of service quality 65
Managing Service Quality, Vol. 18 No. 1, 2008, pp. 65-86. © Emerald Group Publishing Limited, 0960-4529. DOI 10.1108/09604520810842849
2. The SERVQUAL scale
When the SERVQUAL scale was developed by Parasuraman et al. (1985, 1988), their aim was to provide a generic instrument for measuring service quality across a broad range of service categories. Relying on information from 12 focus groups of consumers, Parasuraman et al. (1985) reported that consumers evaluated service quality by comparing expectations (of the service to be received) with perceptions (of the service actually received) on ten dimensions: tangibles, reliability, responsiveness, communication, credibility, security, competence, understanding/knowing customers, courtesy, and access. In later work, Parasuraman et al. (1988) reduced the original ten dimensions to five:
(1) tangibles (the appearance of physical facilities, equipment, and personnel);
(2) reliability (the ability to perform the promised service dependably and accurately);
(3) responsiveness (the willingness to help customers and provide prompt service);
(4) empathy (the provision of individual care and attention to customers); and
(5) assurance (the knowledge and courtesy of employees and their ability to inspire trust and confidence).
Each dimension is measured by four to five items (making a total of 22 items across the five dimensions). Each of these 22 items is measured in two ways:
(1) the expectations of customers concerning a service; and
(2) the perceived levels of service actually provided.
In making these measurements, respondents are asked to indicate their degree of agreement with certain statements on a seven-point Likert-type scale (1 = “strongly disagree” to 7 = “strongly agree”). For each item, a so-called “gap score” (G) is then calculated as the difference between the raw “perception-of-performance” score (P) and the raw “expectations” score (E). The greater the “gap score” (calculated as G = P − E), the higher the score for perceived service quality.
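As an illustration, the P − E computation can be sketched in a few lines of Python. The response matrices below are hypothetical, and only three items are shown rather than the full 22:

```python
import numpy as np

# Hypothetical Likert responses (1 = strongly disagree ... 7 = strongly agree).
# Rows are respondents; columns are scale items (3 shown instead of the full 22).
expectations = np.array([[7, 6, 7],
                         [5, 6, 4]])    # E: expected level of service
perceptions = np.array([[5, 6, 7],
                        [6, 6, 3]])     # P: perceived level of service

gap = perceptions - expectations        # G = P - E, item by item
overall = gap.mean(axis=1)              # mean gap score per respondent

# A negative gap means perceived service fell short of expectations.
print(gap)
print(overall)
```

Higher (less negative) gap scores correspond to higher perceived service quality, mirroring the G = P − E logic described above.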
SERVQUAL has been used to measure service quality in various service industries; these have included: the health sector (Carman, 1990; Headley and Miller, 1993; Lam, 1997; Kilbourne et al., 2004); banking (Lam, 2002; Zhou et al., 2002); fast food (Lee and Ulgado, 1997); telecommunications (Van der Wal et al., 2002); retail chains (Parasuraman et al., 1994); information systems (Jiang et al., 2000); and library services (Cook and Thompson, 2001). SERVQUAL has also been applied in various countries; these have included: the United States (Babakus and Boller, 1992; Pitt et al., 1995; Jiang et al., 2000; Kilbourne et al., 2004); China (Lam, 2002; Zhou et al., 2002); Australia (Baldwin and Sohal, 2003); Cyprus (Arasli et al., 2005); Hong Kong (Kettinger et al., 1995); South Africa (Van der Wal et al., 2002); The Netherlands (Kettinger et al., 1995); and the UK (Pitt et al., 1995; Kilbourne et al., 2004).
Despite the widespread use of the SERVQUAL model to measure service quality, several theoretical and empirical criticisms of the scale have been raised. These can be summarised as follows:
. The concept and operationalisation of the “gap score” have been questioned. For example, it has been suggested that the notion of “subtraction” contained in the SERVQUAL model has no equivalent in theories of psychological functioning (Ekinci and Riley, 1998). The use of a “gap score” is said to be a poor choice as a measure of a psychological construct (Van Dyke et al., 1999) because there is little evidence that customers actually assess service quality in terms of perception-minus-expectations scores (Peter et al., 1993; Buttle, 1996; Ekinci and Riley, 1998). It has been contended that service quality is more accurately assessed by measuring only perceptions of quality (Cronin and Taylor, 1992). Moreover, the validity of the operationalisation of the “gap score” has been questioned because such scores are unlikely to be distinct from their component scores (Brown et al., 1993).
. The concept of “expectations” has been criticised for being loosely defined and open to multiple interpretations (Teas, 1993, 1994). According to this critique, expectations have been variously defined as “desires”, “wants”, “what a service provider should offer”, “the level of service the customer hopes to receive”,
“adequate service”, “normative expectations”, and “ideal standards”. As a result, it is contended that the operationalisation of SERVQUAL is itself open to multiple interpretations.
. The validity of the items and dimensions of the SERVQUAL instrument has been questioned. It has been suggested that the factor-loading pattern in a number of studies (Carman, 1990; Parasuraman et al., 1991; Babakus and Boller, 1992; Headley and Miller, 1993; Engelland et al., 2000) indicates a weakness in convergent validity, because several of the SERVQUAL items had their highest loadings on dimensions different from those reported by Parasuraman et al. (1988).
. A number of researchers have suggested that different dimensional structures are more appropriate for expectations, perceptions, and gap scores. Suggestions have included: one dimension (Cronin and Taylor, 1992; Lam, 1997); two dimensions (Babakus and Boller, 1992; Gounaris, 2005); three dimensions (Chi Cui et al., 2003; Arasli et al., 2005; Najjar and Bishu, 2006); four dimensions (Kilbourne et al., 2004); six dimensions (Carman, 1990; Headley and Miller, 1993); seven dimensions (Walbridge and Delene, 1993); and nine dimensions (Carman, 1990). Moreover, other studies have reported a poor fit when the five-factor model was tested with confirmatory factor analysis (CFA) (Chi Cui et al., 2003; Badri et al., 2005).
. It has been contended that perception-only scores (as in the “SERVPERF” instrument) outperform gap scores in predicting overall evaluations of service (Cronin and Taylor, 1992; McAlexander et al., 1994).
3. Industry-specific measures of service quality
In view of the problems outlined above, the applicability of a generic scale for measuring service quality in all settings has been questioned (Babakus and Boller, 1992; Van Dyke et al., 1997; Jabnoun and Khalifa, 2005; Akbaba, 2006; Caro and Garcia, 2007). Moreover, it has been argued that a simple adaptation of the SERVQUAL items is insufficient to measure service quality across a diversity of service industries (Carman, 1990; Babakus and Boller, 1992; Brown et al., 1993; Van Dyke et al., 1997). For example, Carman (1990) contended that certain dimensions required expansion by the addition of 13 items to the SERVQUAL instrument in order to capture service quality adequately across different services. It has also been contended that service quality is a simple unidimensional construct in some contexts, but a complex multidimensional construct in others (Babakus and Boller, 1992). For these reasons, it has been suggested that industry-specific measures of service quality might be more appropriate than a single generic scale (Babakus and Boller, 1992; Van Dyke et al., 1997; Caro and Garcia, 2007). Dabholkar et al. (1996, p. 14) summarized this view in the following terms:
... it appears that a [single] measure of service quality across industries is not feasible.
Therefore, future research on service quality should involve the development of industry-specific measures of service quality.
As a consequence of these arguments, much of the emphasis in recent research has moved from attempts to adapt SERVQUAL to the development of alternative industry-specific measures. Table I summarizes 30 industry-specific measures of service quality taken from two databases: Science Direct and ABI Inform. The features of these measures are discussed below.
3.1 Service industries and countries
It is apparent from Table I that alternative scales have been developed to measure service quality in a variety of service industries. These have included (among others):
restaurants (Stevens et al., 1995); retail banks (Aldlaigan and Buttle, 2002;
Sureshchandar et al., 2002); career centers (Engelland et al., 2000); internet retail (Janda et al., 2002); hotels (Ekinci and Riley, 1998; Akbaba, 2006; Wilkins et al., 2007); hospitals (Sower et al., 2001); and higher education (Markovic, 2006).
Moreover, the scales have been developed in various countries. These have included Turkey (Akbaba, 2006); Australia (Wilkins et al., 2007); Canada (Saleh and Ryan, 1991); Croatia (Markovic, 2006); India (Sureshchandar et al., 2002); the USA (Dabholkar et al., 1996); Korea (Kang and James, 2004); Hong Kong (Lam and Zhang, 1999); Belgium (Vandamme and Leunis, 1993);
Table I. Review of service-quality scales. For each study, the entries below give: service industry (country); sample; questionnaire administration; data-analysis procedure; scale; dimensions (number of items); and reliability.

Knutson et al. (1990). Lodging industry (USA). 201 adults; telephone interviews; confirmatory factor analysis. 26 items; expectations-only scores; seven-point Likert scale from strongly agree (7) to strongly disagree (1). 5 dimensions: reliability (4 items), assurance (5), responsiveness (3), tangibles (6), empathy (8). Reliability ranged from 0.63 to 0.80.

Saleh and Ryan (1991). Hospitality industry (Canada). 200 hotel guests and 17 management staff; self-administered; exploratory factor analysis. 32 items for hotel guests and 33 items for management staff; perception-minus-expectations scores; five-point Likert scale from highly satisfied (1) to highly dissatisfied (5). 4 dimensions for hotel guests: tangibles and reliability (10), responsiveness (8), assurance (8), empathy (6); 5 dimensions for management staff: tangibles (7), reliability (3), responsiveness (8), assurance (8), empathy (7). Reliability ranged from 0.74 to 0.93 for hotel guests and from 0.63 to 0.80 for management staff.

Bouman and van der Wiele (1992). Care service industry (The Netherlands). 226 customers of care service firms; self-administered; exploratory factor analysis. 40 items; perception-minus-expectations scores; seven-point Likert scale from very unimportant (1) to very important (7) for expectations and from definitely not appropriate (1) to definitely appropriate (7) for perceptions. 3 factors: customer kindness (19), tangibles (13), faith (8). Reliability ranged from 0.76 to 0.92.

Vandamme and Leunis (1993). Health care sector (Belgium). 70 patients; self-administered; exploratory factor analysis. 17 items; perception-minus-expectations scores; seven-point Likert scale from strongly disagree (1) to strongly agree (7). 6 dimensions: tangibles (4), medical responsiveness (3), assurance I (3), assurance II (3), nursing staff (2), personal beliefs and values (2). Reliability ranged from 0.58 to 0.82.

Stevens et al. (1995). Restaurant industry (USA). 200 respondents for fine-dining, 198 for casual-dining, and 198 for quick-service restaurants; telephone interviews; confirmatory factor analysis. 29 items; expectations-only scores; seven-point Likert scale from strongly agree (7) to strongly disagree (1). 5 dimensions: tangibles (10), reliability (5), responsiveness (3), assurance (6), empathy (5). Reliability ranged from 0.89 to 0.92.

Tomes and Ng (1995). NHS or NHS trust hospital services (England). 132 patients admitted to a large general hospital in the east of England; self-administered; exploratory factor analysis. 49 items; perception-minus-expectations scores (factor analysis based on expectations-only scores). 7 dimensions: empathy (10), relationship of mutual respect (9), dignity (9), understanding of illness (5), religious needs (1), food (6), physical environment (9). Reliability ranged from 0.64 to 0.92.

Dabholkar et al. (1996). Retail service quality (USA). 227 shoppers for the first study and 149 for the cross-validation study; self-administered; confirmatory factor analysis. 28 items; perception-only scores; five-point Likert scale from strongly disagree (1) to strongly agree (5). 5 dimensions: physical aspects (6), reliability (5), personal interaction (9), problem solving (3), policy (5). Reliability ranged from 0.85 to 0.92.

Lam and Zhang (1999). Travel agents (Hong Kong). 209 users of travel agents; self-administered; exploratory factor analysis. 23 items; perception-minus-expectations scores; seven-point Likert scale from strongly agree (7) to strongly disagree (1). 5 dimensions: responsiveness and assurance (6), reliability (5), empathy (4), resources and corporate image (5), tangibility (3). Reliability ranged from 0.67 to 0.88.

Mentzer et al. (1999). Logistics service quality (USA). 5,531 defense logistics agency users; self-administered; confirmatory factor analysis. 25 items; perception-only scores; five-point Likert scale from strongly disagree (1) to strongly agree (5). 9 dimensions: information quality (2), ordering procedures (2), ordering release quantities (3), timeliness (3), order accuracy (3), order quality (3), order condition (3), order discrepancy handling (3), personnel contact quality (3). Reliability ranged from 0.73 to 0.89.

Shemwell and Yavas (1999). Hospital service quality (USA). 218 respondents residing in different neighborhoods of an SMSA; self-administered; confirmatory factor analysis. 14 items; perception-only scores; seven-point scale from poor (1) to outstanding (7). 3 dimensions: search attributes (5), credence attributes (4), experience attributes (5). Reliability ranged from 0.75 to 0.83.

Engelland et al. (2000). Career service centers on college campuses (USA). 262 undergraduate college students for the exploratory study and 237 for the validation study; self-administered; exploratory and confirmatory factor analysis. 17 items; perception-minus-expectations scores; seven-point Likert scale from strongly disagree (1) to strongly agree (7). 5 dimensions: tangibles (4), reliability (4), responsiveness (3), assurance (3), empathy (3). Reliability ranged from 0.76 to 0.89.

Frochot and Hughes (2000). Service quality provided in historic houses (England and Scotland). 790 visitors for the final survey; self-administered; exploratory factor analysis. 24 items; perception-only scores; five-point Likert scale from strongly agree (5) to strongly disagree (1). 5 dimensions: responsiveness (8), tangibles (7), communications (4), consumables (3), empathy (2). Reliability ranged from 0.70 to 0.83.

Cook and Thompson (2001). Library service (USA). 4,407 participants; web-based administration; exploratory factor analysis. 34 items; perception-only scores; nine-point scale from low (1) to high (9), plus an unnumbered graphic rating scale. 4 dimensions: service (11), library as place (9), access to collections (7), reliability (7). Reliability ranged from 0.80 to 0.94.

Sower et al. (2001). Hospital service quality (USA). 663 recently discharged patients; exploratory factor analysis. 75 items; perception-only scores; seven-point Likert scale from strongly agree (7) to strongly disagree (1). 8 dimensions: respect and caring (26), effectiveness and continuity (15), appropriateness (15), information (7), efficiency (5), effectiveness-meals (5), first impression (1), staff diversity (1). Reliability ranged from 0.87 to 0.98.

Vaughan and Shiu (2001). Voluntary sector (Scotland). 72 disabled service users and parent/carer group members; self-administered; exploratory factor analysis and correlation matrix analysis. 27 items; perception scores and expectations scores. 10 dimensions: access (3), responsiveness (4), communication (4), humaneness (4), security (2), enabling/empowerment (2), competence (3), reliability (3), equity (1), tangibles (1). Reliability not reported.

Aldlaigan and Buttle (2002). Banking (UK). 975 bank customers; mail survey; exploratory factor analysis. 21 items; perception-only scores; seven-point Likert scale from strongly disagree (1) to strongly agree (7). 4 dimensions: service system quality (11), behavioural service quality (5), machine service quality (2), service transactional accuracy (3). Reliability ranged from 0.80 to 0.93 (total sample).

Janda et al. (2002). Internet retail service quality (USA). 446 respondents who had made at least one internet purchase within the previous six months; administered by interviewers; confirmatory factor analysis. 22 items; perception-only scores; seven-point Likert scale from strongly disagree (1) to strongly agree (7). 5 dimensions: performance (6), access (4), security (4), sensation (4), information (4). Reliability ranged from 0.61 to 0.83.

Sureshchandar et al. (2002). Banking (India). 277 bank customers; self-administered; confirmatory factor analysis. 41 items; perception-only scores; seven-point Likert scale from very poor (1) to very good (7). 5 dimensions: core service or service product (5), human element of service delivery (17), systemization of service delivery (6), tangibles of service (6), social responsibility (7). Reliability ranged from 0.82 to 0.96.

Getty and Getty (2003). Lodging industry (USA). 229 frequent-traveler business owners; mail survey; exploratory factor analysis. 26 items; perception-only scores; four-point scale from low (1) to high (4). 5 dimensions: tangibility (8), reliability (4), responsiveness (5), confidence (5), communication (4). High reliability reported, but no detailed information given.

Khan (2003). Ecotourism. 324 ecotourists who had taken an ecotrip in the past 18 months; mail survey; exploratory factor analysis. 29 items; expectations-only scores; seven-point Likert scale from strongly disagree (1) to strongly agree (7). 6 dimensions: ecotangibles (3), assurance (5), reliability (5), responsiveness (4), empathy (4), tangibles (8). Reliability ranged from 0.86 to 0.98.

Wolfinbarger and Gilly (2003). Online e-tail quality (USA). 1,013 internet users; web-based administration using an online panel; exploratory and confirmatory factor analysis. 14 items; perception-minus-expectations scores; seven-point Likert scale from strongly disagree (1) to strongly agree (7). 4 dimensions: web site design (5), fulfillment/reliability (3), security/privacy (3), customer service (3). Reliability ranged from 0.79 to 0.88.

Yoon and Suh (2004). Consulting service (Korea). 86 respondents from IT consulting sites; self-administered; exploratory factor analysis. 36 items; perception-only scores; seven-point Likert scale from strongly disagree (1) to strongly agree (7). 6 dimensions: assurance (4), responsiveness (3), reliability (12), empathy (4), process (9), education (4). Reliability ranged from 0.87 to 0.95.

Gounaris (2005). Business-to-business services (Greece). 515 senior managers; mail survey; confirmatory factor analysis (CFA). 22 items; perception-only scores; seven-point Likert scale from entirely disagree (1) to entirely agree (7). 4 dimensions: potential quality (6), hard process quality (5), soft process quality (6), output (5). Reliability ranged from 0.79 to 0.88.

Jabnoun and Khalifa (2005). Banking (United Arab Emirates). 115 customers of Islamic banks and 115 customers of conventional banks; self-administered; exploratory factor analysis. 29 items; perception-only scores. 4 dimensions: personal skills (12), reliability (5), image (6), value (6). Reliability ranged from 0.85 to 0.94.

Karatepe et al. (2005). Bank service (Cyprus). 1,220 customers; self-administered; exploratory and confirmatory factor analysis. 20 items; perception-only scores; five-point Likert scale from strongly agree (5) to strongly disagree (1). 4 dimensions: service environment (4), interaction quality (7), empathy (5), reliability (4). Reliability ranged from 0.81 to 0.92.

Parasuraman et al. (2005). Electronic service quality (internet users; country not identified). 549 subjects for the development stage and 858 customers for the validation stage; web-based administration; exploratory and confirmatory factor analysis. 22 items; perception-only scores; five-point Likert scale from strongly disagree (1) to strongly agree (5). 4 dimensions: efficiency (8), system availability (4), fulfillment (7), privacy (3). Reliability ranged from 0.83 to 0.94.

Akbaba (2006). Business hotel industry (Turkey). 234 hotel guests; self-administered; exploratory factor analysis. 25 items; perception-minus-expectations scores. 5 dimensions: tangibles (6), adequacy in service supply (7), understanding and caring (5), assurance (4), convenience (3). Reliability ranged from 0.71 to 0.86.

Markovic (2006). Higher education service (Croatia). 444 graduate students; self-administered; exploratory factor analysis. 26 items; expectations-only scores; five-point Likert scale from strongly believe that the statement is wrong (1) to strongly believe that the statement is not wrong (5). 7 dimensions: reliability (6), students in scientific work (4), empathy (4), assurance (3), e-learning (3), responsiveness (3), tangibles (3). Reliability ranged from 0.53 to 0.78.

Caro and Garcia (2007). Urgent transport service (Spain). 375 subjects; self-administered; exploratory and confirmatory factor analysis. 36 items; perception-only scores; five-point Likert scale from strongly disagree (1) to strongly agree (5). 4 dimensions: personal interaction (3 sub-dimensions, 14 items), design (2 sub-dimensions, 7 items), physical environment (2 sub-dimensions, 7 items), outcome (2 sub-dimensions, 8 items). Reliability ranged from 0.74 to 0.96.

Wilkins et al. (2007). Hospitality service (Australia). 664 hotel guests; self-administered; exploratory and confirmatory factor analysis. 30 items; perception-only scores. 3 dimensions: physical product (3 sub-dimensions, 13 items), service experience (3 sub-dimensions, 13 items), quality food and beverage (4 items). Reliability ranged from 0.72 to 0.90.
and Spain (Caro and Garcia, 2007).
3.2 Dimensional structure
All of the scales in Table I are multi-dimensional. However, the number of dimensions varies, from a minimum of two (Ekinci and Riley, 1998) to a maximum of ten (Vaughan and Shiu, 2001). It is apparent that the number of dimensions varied according to the service context and the country. For example, the factor structure for the lodging industry in Australia (Wilkins et al., 2007) was somewhat different from that in North America (Knutson et al., 1990; Saleh and Ryan, 1991; Getty and Getty, 2003). Moreover, the factor structure varied within a given country. For example, the factor structure for the lodging industry in North America varied from five dimensions (Knutson et al., 1990; Getty and Getty, 2003) to four (Saleh and Ryan, 1991).
Despite this variation, it is apparent that the five dimensions of SERVQUAL were, for the most part, retained in the scales examined in this review. For example, the dimension of “tangibles” (the appearance of physical facilities, equipment, and personnel) was retained in most of the scales (for example, Knutson et al., 1990; Saleh and Ryan, 1991; Bouman and van der Wiele, 1992; Dabholkar et al., 1996; Lam and Zhang, 1999; Engelland et al., 2000; Frochot and Hughes, 2000; Sureshchandar et al., 2002; Getty and Getty, 2003; Khan, 2003; Akbaba, 2006; Markovic, 2006). Similarly, the “empathy” dimension (the provision of individual care and attention to customers) was retained in numerous studies (for example, Knutson et al., 1990; Tomes and Ng, 1995; Lam and Zhang, 1999; Engelland et al., 2000; Khan, 2003; Yoon and Suh, 2004; Karatepe et al., 2005; Markovic, 2006). Similar observations apply to the other SERVQUAL dimensions. However, new dimensions were added to account for industry-specific characteristics. For example, Janda et al. (2002) added “security” as a specific dimension of service quality required in the internet retail industry.
3.3 Gap scores versus perception scores
Three measurement methods were found in the scales reviewed in Table I:
. performance-only scores (for example, Dabholkar et al., 1996; Ekinci and Riley, 1998; Frochot and Hughes, 2000; Janda et al., 2002; Getty and Getty, 2003; Caro and Garcia, 2007; Wilkins et al., 2007);
. expectations-only scores (for example, Knutson et al., 1990; Khan, 2003;
Markovic, 2006); and
. perception-minus-expectations scores (for example, Engelland et al., 2000;
Wolfinbarger and Gilly, 2003).
It is apparent that, despite the practical difficulties in obtaining information on customer expectations, many studies continue to use a “gap model”. It would seem that such models facilitate the identification of strengths and weaknesses in specific quality attributes.
3.4 Technical dimension versus functional dimension
According to the two-dimensional model of Grönroos (1984), service quality consists of a functional (process) dimension and a technical (outcome) dimension. Most of the scales reviewed here concentrated on the functional dimension (for example, Frochot and Hughes, 2000; Getty and Getty, 2003; Yoon and Suh, 2004; Markovic, 2006). Only a limited number of studies incorporated the technical (outcome) dimension (for example, Vaughan and Shiu, 2001; Aldlaigan and Buttle, 2002; Gounaris, 2005;
Caro and Garcia, 2007).
3.5 Number of items
The number of items in the present review varied from 14 (Shemwell and Yavas, 1999) to 75 (Sower et al., 2001) according to the industry context. For example, Sureshchandar et al. (2002) used 41 items for the banking industry, Vaughan and Shiu (2001) used 27 items in the voluntary service sector, Yoon and Suh (2004) used 36 items in the consulting service industry, Bouman and van der Wiele (1992) used 40 items in the care industry, Markovic (2006) used 26 items in the higher education industry, and Akbaba (2006) used 25 items in the business hotel industry.
To determine the number of items, most researchers generated an initial pool of scale statements from a review of literature. This initial pool was then refined through:
. focus groups (for example, Mentzer et al., 1999; Sower et al., 2001; Vaughan and Shiu, 2001; Aldlaigan and Buttle, 2002; Khan, 2003; Wilkins et al., 2007); and/or
. individual interviews with providers or users (for example, Aldlaigan and Buttle, 2002; Janda et al., 2002; Getty and Getty, 2003; Karatepe et al., 2005; Caro and Garcia, 2007).
It is also worthy of note that, in some cases, SERVQUAL was utilised as a starting-point for the development of the item pool (for example, Dabholkar et al., 1996; Frochot and Hughes, 2000; Sureshchandar et al., 2002) or as the fundamental structure for new instruments (for example, Engelland et al., 2000; Khan, 2003; Markovic, 2006).
3.6 Sample sizes
Sample sizes in the studies reviewed in Table I varied from 70 (Vandamme and Leunis, 1993) to 5,531 (Mentzer et al., 1999) service users. Only three studies had sample sizes of more than 1,000: 1,013 internet users (Wolfinbarger and Gilly, 2003), 1,220 customers (Karatepe et al., 2005), and 5,531 defence logistics agency users (Mentzer et al., 1999).
Three studies had sample sizes of fewer than 100 respondents/users and 14 studies had sample sizes of fewer than 250 respondents/users. Several studies did not provide details of their samples.
3.7 Analysis method
A total of 16 studies used only exploratory factor analysis (EFA) to assess their dimensional structure and items. Eight studies used confirmatory factor analysis (CFA), and several combined EFA with CFA (for example, Wolfinbarger and Gilly, 2003; Karatepe et al., 2005; Parasuraman et al., 2005; Caro and Garcia, 2007; Wilkins et al., 2007).
Item-to-total correlation analysis (that is, the correlation between the score on an item and the sum of the scores of all other items constituting a single factor) was the most commonly used methodology for deciding which items to retain and which to discard. In several studies, all items were discarded that scored less than ±0.40 on the item-to-total correlation (for example, Wolfinbarger and Gilly, 2003) or ±0.30 on the item-to-total correlation (for example, Janda et al., 2002; Aldlaigan and Buttle, 2002).
Other studies used loading scores as a basis for item exclusion. For example, some studies excluded items with factor loadings less than ±0.40 (for example, Engelland et al., 2000; Sower et al., 2001; Janda et al., 2002; Jabnoun and Khalifa, 2005; Caro and Garcia, 2007), others excluded items with factor loadings less than ±0.45 (for example, Markovic, 2006), and others excluded items with factor loadings less than ±0.50 (for example, Lam and Zhang, 1999; Wolfinbarger and Gilly, 2003; Karatepe et al., 2005). In some studies, items with cross-loadings greater than ±0.40 were discarded (for example, Janda et al., 2002).
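As a rough sketch of the item-to-total retention rule, the corrected item-to-total correlation can be computed as follows; the response data and the 0.40 cut-off below are invented for illustration:

```python
import numpy as np

def corrected_item_total(scores):
    """Correlation of each item with the sum of the *other* items in its factor."""
    out = []
    for j in range(scores.shape[1]):
        rest = np.delete(scores, j, axis=1).sum(axis=1)  # total of remaining items
        out.append(np.corrcoef(scores[:, j], rest)[0, 1])
    return np.array(out)

# Invented responses: 6 respondents x 4 items assigned to one factor.
# The fourth item deliberately runs against the others.
scores = np.array([
    [5, 4, 5, 2],
    [7, 6, 7, 3],
    [3, 3, 2, 6],
    [6, 6, 6, 1],
    [2, 1, 2, 7],
    [4, 4, 5, 4],
], dtype=float)

r = corrected_item_total(scores)
keep = r >= 0.40      # retention rule analogous to the cut-offs cited above
print(r.round(2))
print(keep)           # the misfitting fourth item is flagged for removal
```

In the published scales the cut-off value and whether the absolute correlation is used vary from study to study, as noted above.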
3.8 Reliability and validity
Cronbach’s alpha was the most commonly used measure of scale reliability (that is, the internal homogeneity of a set of items composing a scale). Most scales in the present review exhibited good reliability (that is, Cronbach’s alphas greater than 0.60). For example, Frochot and Hughes (2000) used five dimensions with reliability coefficients ranging from 0.70 to 0.83, Akbaba (2006) used five dimensions with reliability coefficients ranging from 0.71 to 0.86, and Khan (2003) used six dimensions with reliability coefficients ranging from 0.86 to 0.98.
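For readers unfamiliar with the coefficient, Cronbach's alpha for k items is k/(k − 1) × (1 − Σ item variances / variance of the summed scale). The sketch below uses invented response data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Invented responses: 5 respondents x 4 items of a single dimension.
scores = np.array([
    [5, 5, 4, 5],
    [7, 6, 7, 7],
    [3, 2, 3, 3],
    [6, 6, 5, 6],
    [2, 3, 2, 1],
], dtype=float)

print(round(cronbach_alpha(scores), 2))  # well above the 0.60 benchmark
```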
To assess convergent validity (that is, the extent to which a set of items that is assumed to represent a construct does in fact converge on the same construct), most studies calculated the average variance extracted (AVE) by each dimension (with an AVE of greater than 0.5 being said to support convergent validity). Examples in the present review included Gounaris (2005) and Caro and Garcia (2007). Some researchers considered the fact that all the items loaded highly on the factor to which they were assigned as further evidence of convergent validity (for example, Dabholkar et al., 1996; Caro and Garcia, 2007).
To establish discriminant validity (that is, the extent to which measures of theoretically unrelated constructs do not correlate with one another), several researchers used CFA and compared the AVE for each factor with the variance shared by the remaining factors (for example, Wolfinbarger and Gilly, 2003; Gounaris, 2005;
Caro and Garcia, 2007). Two dimensions were confirmed as being distinct from each other if the AVE estimates were greater than the shared-variance estimate. In other studies, discriminant validity was demonstrated simply by showing that the scale did not correlate strongly with other measures from which it was supposed to differ (for example, Sureshchandar et al., 2002).
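These AVE-based checks (the comparison of each factor's AVE with shared variance is commonly associated with the Fornell–Larcker criterion) reduce to simple arithmetic on standardized loadings. The loadings and inter-factor correlation below are hypothetical:

```python
def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical standardized CFA loadings for two factors
ave1 = ave([0.80, 0.70, 0.75])
ave2 = ave([0.90, 0.85])

# Hypothetical inter-factor correlation; its square is the shared variance
shared_variance = 0.60 ** 2

convergent = ave1 > 0.5 and ave2 > 0.5            # AVE > 0.5 rule of thumb
discriminant = ave1 > shared_variance and ave2 > shared_variance
```

In this invented example both factors extract more than half of their items' variance (convergent validity) and each AVE exceeds the 0.36 shared variance (discriminant validity).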
To demonstrate predictive validity (that is, the extent to which the scores on one construct are empirically related to the scores on other, conceptually related constructs), some researchers correlated their service-quality dimensions with overall quality (for example, Sureshchandar et al., 2002; Wolfinbarger and Gilly, 2003; Gounaris, 2005; Jabnoun and Khalifa, 2005; Parasuraman et al., 2005). Others correlated their service-quality dimensions with related constructs, such as satisfaction, as proof of validity. In addition, the methodological assessments of the new alternative instruments were not clearly presented in several studies.
4. Discussion and suggestions for future research
This review has documented a variety of industry-specific measurement scales proposed in the service-quality literature since the publication of the SERVQUAL model in 1988. It is apparent that there is ongoing debate about several aspects of such scales. These include:
. the dimensionality of service quality;
. the hierarchical structure of service quality;
. the relationship of culture to perceptions of service quality;
. comparisons between alternative scales and SERVQUAL;
. validity of service-quality scales; and
. the statistical analysis used.
These aspects are discussed below, together with suggestions for future avenues of research.
4.1 Dimensionality of service quality
All of the 30 studies reviewed here posited service quality as a multidimensional construct. However, the number and nature of the dimensions varied, depending on the service context; indeed, they varied even within the same service industry. It is apparent that the criteria used to evaluate service quality differ among customer groups and circumstances. For example, a businessperson staying in a given hotel has different service criteria from those of a tourist (Eccles and Durrand, 1997).
Scholars should therefore describe the empirical context in which a particular scale was developed and the contexts in which it can be applied. In several cases reviewed in the present study, the authors did not explicitly identify the empirical context in which the scale was developed. Future studies should replicate these measures in different contexts to ascertain whether the number and nature of dimensions are applicable in other settings.
4.2 Hierarchical structure of service quality
Several authors have suggested that service quality is a hierarchical construct consisting of various sub-dimensions (Dabholkar et al., 1996; Brady and Cronin, 2001; Gounaris, 2005; Caro and Garcia, 2007; Wilkins et al., 2007). However, despite this suggestion, few efforts have been made to provide empirical evidence for such a structure. Future research could extend scholarly understanding of service quality by undertaking empirical studies of hierarchical multidimensional conceptions of service quality in different settings.
4.3 Culture and service quality
Several researchers have suggested that there is a need to develop culturally specific measures of service quality (Winsted, 1997; Zhou et al., 2002; Raajpoot, 2004; Karatepe et al., 2005). As with other marketing constructs and measures, it has been contended that constructs of service quality that are developed in one culture might not be applicable in another (Kettinger et al., 1995; Karatepe et al., 2005). According to this view, the meanings, number, and relative importance of service-quality dimensions depend on the cultural and value orientations of customers – particularly with respect to cultural traditions of “power distance” and “individualism/collectivism” (Winsted, 1997; Espinoza, 1999; Mattila, 1999; Furrer et al., 2000; Karatepe et al., 2005; Glaveli et al., 2006). Further research in this area is desirable.
4.4 Comparisons between alternative scales and SERVQUAL
Although SERVQUAL has been criticised on theoretical grounds, only one scale in the present review (“INDSERV”) has been empirically shown to outperform SERVQUAL.
It is apparent that rigorous empirical studies are needed to substantiate whether alternative scales really are superior to SERVQUAL. In particular, further studies are required to validate and refine the alternative scales. It should also be noted that the small sample sizes used in several of the studies proposing alternative scales were insufficient to permit a comprehensive psychometric assessment of the proposed scales. There is also a need to compare the new scales with SERVQUAL with regard to their ability to predict constructs known to be related to service quality – such as overall service quality, satisfaction, word of mouth, and loyalty.
Despite the widespread criticism of SERVQUAL, it is the contention of the present study that the scale continues to be the most useful model for measuring service quality. In addition, the methodological approach used by Parasuraman et al. (1985, 1988, 1991) in developing and refining SERVQUAL was more rigorous than those used by the authors of the alternative scales.
Finally, it is interesting to note that there are many similarities between the dimensions used in SERVQUAL and those developed in alternative scales. This suggests that some service-quality dimensions are generic whereas others are specific to particular industries and contexts.
4.5 Validity of service quality scales
Although the measures of service quality reviewed in this study claimed to have exhibited good reliability, it is important to note that higher alpha values can be indicative of deficiencies (rather than reliability) in a scale (Churchill, 1979; Smith, 1999). As Smith (1999) has noted, high alpha values can reflect poor design of the measurement instrument, poor scale content, or problems of data attenuation. It is thus critical to establish the validity (the extent to which an instrument measures what it is intended to measure) of any proposed measurement system.
Without such validation, there is no way of knowing whether a scale is likely to be valid or invalid.
It is also of interest that the new industry-specific instruments were not replicated.
As a result, their psychometric properties can be questioned.
Finally, although some studies did validate their proposed measurement scales, there remained concerns about generalizability. A generalization from a single study, no matter how large the sample, is always problematic. Future research is certainly needed to refine these scales.
4.6 Statistical analyses
From a methodological perspective, most researchers in the present review used EFA with varimax (orthogonal) rotation to reduce the items used in their constructs.
However, numerous academic researchers have criticized the use of EFA, which is a data-driven method, for this purpose. Indeed, Kwok and Sharp (1998) described the use of EFA as nothing more than a “fishing expedition”.
EFA has a number of significant shortcomings. First, common factor analysis with varimax rotation assumes uncorrelated factors or traits; its application to data exhibiting correlated factors can produce:
. incorrect conclusions regarding the number of factors; and
. distorted factor loadings (Segars and Grover, 1993).
Second, because the solution obtained is only one of an infinite number of potential solutions, the estimates obtained for factor loadings are not unique (Segars and Grover, 1993). Finally, given that items are assigned to the factors on which they load most strongly, it is possible for items to load on more than one factor; hence, the distinctiveness of the factors can be affected and the researcher might lack any sound evidence or theoretical explanation on which to base an interpretation (Ahire et al., 1996; Sureshchandar et al., 2002).
Given these limitations and the potential advantages of using CFA, a combination of EFA and CFA is desirable. These two approaches to data analysis can provide complementary perspectives of data.
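To make the rotation step concrete, the following Python sketch (with invented loadings) grid-searches the orthogonal rotation angle that maximizes the varimax criterion, i.e. the variance of squared loadings within each factor. This is an illustrative re-implementation, not the procedure of any particular study reviewed here:

```python
import math

def varimax_criterion(loadings):
    """Varimax criterion: sum over factors of the variance of squared loadings.

    Maximizing this pushes each item toward loading highly on one factor and
    near zero on the others ("simple structure")."""
    p = len(loadings)                      # number of items
    total = 0.0
    for j in range(len(loadings[0])):      # for each factor (column)
        sq = [row[j] ** 2 for row in loadings]
        mean = sum(sq) / p
        total += sum((s - mean) ** 2 for s in sq) / p
    return total

def rotate2(loadings, theta):
    """Orthogonally rotate a two-factor loading matrix by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[a * c - b * s, a * s + b * c] for a, b in loadings]

# Invented unrotated two-factor solution for six items
L = [[0.70, 0.50], [0.60, 0.40], [0.65, 0.45],
     [0.50, -0.60], [0.40, -0.50], [0.45, -0.55]]

# Grid-search the rotation angle (0 .. ~90 degrees) maximizing the criterion
best_value, best_angle = max(
    (varimax_criterion(rotate2(L, t / 100.0)), t / 100.0)
    for t in range(158)
)
```

Rotating these invented loadings by roughly 45 degrees separates the six items cleanly onto two factors; note that the rotation stays orthogonal throughout, which is precisely the uncorrelated-factors assumption criticized above.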
5. Managerial implications
This review should assist service managers to identify the dimensions of service quality that are appropriate to their particular service industries. Service managers can use these scales for qualitative and/or quantitative purposes.
In qualitative terms, knowledge of the components of service quality can assist service managers to identify the strengths and weaknesses of their own firms and to make comparisons with other firms in the same service industry. Managers can use these scales to assess how well the firm performs on the dimensions identified in appropriate industry-specific scales.
In addition, because the quality dimensions identified in the literature might not be exhaustive, managers should also conduct interviews with customers to ascertain what they perceive to be the key determinants in their evaluations of service quality. In conducting such focus groups and interviews, managers should be aware that expectations can vary across consumer segments; qualitative data should therefore be collected among different consumer segments. Moreover, the information received from consumers can be complemented with information obtained in discussion with their employees – especially service-contact employees who have frequent direct interactions with consumers.
On a quantitative basis, service managers can use industry-specific scales to measure:
. customer expectations and perceptions of performance with respect to various dimensions and attributes and thus identify strengths and weaknesses; and
. the importance weighting of each service-quality dimension and attribute.
In undertaking these quantitative assessments, service managers should be aware that it is inappropriate to measure expectations and perceptions simultaneously after the service is experienced; rather, customers should respond to the items on expectations before the service is experienced and to the items on perceptions after the service is experienced. The quantitative analysis should be used to correct weaknesses and to capitalize on strengths. However, managers should recognize that satisfying consumers is not necessarily sufficient to retain them; to ensure loyalty, customers should be delighted.
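A minimal sketch of such a gap analysis, using hypothetical dimension scores and importance weights (the dimension names, values, and the simple weighting scheme are all invented for illustration):

```python
# Hypothetical dimension scores on a 1-7 scale: expectations collected
# before the service encounter, perceptions collected after it.
expectations = {"tangibles": 5.8, "reliability": 6.4, "responsiveness": 6.1}
perceptions = {"tangibles": 6.0, "reliability": 5.1, "responsiveness": 5.9}
weights = {"tangibles": 0.2, "reliability": 0.5, "responsiveness": 0.3}

# Negative gaps flag dimensions where performance falls short of expectations
gaps = {d: perceptions[d] - expectations[d] for d in expectations}
weaknesses = sorted((d for d in gaps if gaps[d] < 0), key=lambda d: gaps[d])

# Importance-weighted overall score across dimensions
weighted_quality = sum(weights[d] * gaps[d] for d in gaps)
```

In this invented example, "reliability" shows the largest shortfall and would be the first weakness to correct, while "tangibles" is a strength to capitalize on.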
Finally, the most obvious implication of the present study for managers is to recognize that each service context is unique. Service providers should be careful in applying alternative scales to contexts that have few elements in common with the empirical contexts used in their development. In particular, economic and cultural factors should be taken into consideration when applying these scales to different contexts.
6. Conclusion
The measurement of service quality has received significant attention from scholars and practitioners in recent years. SERVQUAL (Parasuraman et al., 1985, 1988), which was designed to be a generic instrument applicable across a broad spectrum of services, has been extensively used, replicated, and criticised. The most important criticism of SERVQUAL has been doubt about its applicability in various specific industries. As a result, numerous studies in different service sectors have sought to develop industry-specific service-quality scales. This review, which has documented and described 30 such industry-specific scales, provides helpful direction to researchers and practitioners in developing and utilising new industry-specific instruments.
References
Asubonteng, P., McCleary, K.J. and Swan, J.E. (1996), “SERVQUAL revisited: a critical review of service quality”, Journal of Services Marketing, Vol. 10 No. 6, pp. 62-81.
Babakus, E. and Boller, G.W. (1992), “An empirical assessment of the SERVQUAL scale”,Journal of Business Research, Vol. 24 No. 3, pp. 253-68.
Badri, M.A., Abdulla, M. and Al-Madani, A. (2005), “Information technology center service quality: Assessment and application of SERVQUAL”,International Journal of Quality &
Reliability Management, Vol. 22 Nos 8/9, pp. 819-48.
Baldwin, A. and Sohal, A. (2003), “Service quality factors and outcomes in dental care”, Managing Service Quality, Vol. 13 No. 1, pp. 207-16.
Bouman, M. and van der Wiele, T. (1992), “Measuring service quality in the car service industry: building and testing an instrument”, International Journal of Service Industry Management, Vol. 3 No. 4, pp. 4-16.
Brady, M. and Cronin, J. (2001), “Some new thoughts on conceptualizing perceived service quality: a hierarchical approach”,Journal of Marketing, Vol. 65 No. 3, pp. 34-49.
Brown, T.J., Churchill, G.A. and Peter, J.P. (1993), “Research note: improving the measurement of service quality”,Journal of Retailing, Vol. 69 No. 1, pp. 127-39.
Buttle, F. (1996), “SERVQUAL: review, critique, research agenda”, European Journal of Marketing, Vol. 30 No. 1, pp. 8-32.
Carman, J.M. (1990), “Consumer perceptions of service quality: an assessment of the SERVQUAL dimensions”,Journal of Retailing, Vol. 66 No. 1, pp. 33-55.
Caro, L.M. and Garcia, J.A.M. (2007), “Measuring perceived service quality in urgent transport service”,Journal of Retailing and Consumer Services, Vol. 14 No. 1, pp. 60-72.
Churchill, G.A. Jr (1979), “A paradigm for developing better measures of marketing constructs”, Journal of Marketing Research, Vol. 16 No. 1, pp. 64-73.
Chi Cui, C., Lewis, B.R. and Park, W. (2003), “Service quality measurement in the banking sector in South Korea”, International Journal of Bank Marketing, Vol. 21 Nos 4/5, pp. 191-201.
Cook, C. and Thompson, B. (2001), “Psychometric properties of scores from the web-based LibQUAL+ study of perceptions of library service quality”, Library Trends, Vol. 49 No. 4, pp. 585-604.
Cronin, J.J. and Taylor, S.A. (1992), “Measuring service quality: a reexamination and extension”, Journal of Marketing, Vol. 56, July, pp. 55-68.
Dabholkar, P., Thorpe, D.I. and Rentz, J.O. (1996), “A measure of service quality for retail stores:
scale development and validation”,Journal of the Academy of Marketing Science, Vol. 24 No. 1, pp. 3-16.
Eccles, G. and Durrand, P. (1997), “Improving service quality: lessons and practice from the hotel sector”,Managing Service Quality, Vol. 7 No. 5, pp. 224-6.
Ekinci, Y. and Riley, M. (1998), “A critique of the issues and theoretical assumptions in service quality measurement in the lodging industry: time to move the goal-posts”, Hospitality Management, Vol. 17 No. 4, pp. 349-62.
Engelland, B.T., Workman, L. and Singh, M. (2000), “Ensuring service quality for campus career services centers: a modified SERVQUAL scale”,Journal of Marketing Education, Vol. 22 No. 3, pp. 236-45.
Espinoza, M.M. (1999), “Assessing the cross-cultural applicability of a service quality measure:
a comparative study between Quebec and Peru”,International Journal of Service Industry Management, Vol. 10 No. 5, pp. 449-68.
Frochot, I. and Hughes, H. (2000), “HISTOQUAL: the development of a historic houses assessment scale”,Tourism Management, Vol. 21 No. 2, pp. 157-67.
Furrer, O., Liu, B.S.-C. and Sudharshan, D. (2000), “The relationships between culture and service quality perceptions: basis for cross-cultural market segmentation and resource allocation”, Journal of Service Research, Vol. 2 No. 4, pp. 355-71.
Getty, J.M. and Getty, R.L. (2003), “Lodging quality index (LQI): assessing customers’ perceptions of quality delivery”, International Journal of Contemporary Hospitality Management, Vol. 15 No. 2, pp. 94-104.
Glaveli, N., Petridou, E., Liassides, C. and Sphatis, C. (2006), “Bank service quality: evidence from five Balkan countries”,Managing Service Quality, Vol. 16 No. 4, pp. 380-94.
Grönroos, C. (1984), “A service quality model and its marketing implications”, European Journal of Marketing, Vol. 18 No. 4, pp. 36-44.
Grönroos, C. (1990), Service Management and Marketing, Lexington Books, Lexington, MA.
Gounaris, S. (2005), “Measuring service quality in b2b services: an evaluation of the SERVQUAL scale vis-à-vis the INDSERV scale”, Journal of Services Marketing, Vol. 19 Nos 6/7, pp. 421-35.
Headley, D.E. and Miller, S.J. (1993), “Measuring service quality and its relationship to future consumer behavior”,Journal of Health Care Marketing, Vol. 13 No. 4, pp. 32-41.
Jabnoun, N. and Khalifa, A. (2005), “A customized measure of service quality in the UAE”, Managing Service Quality, Vol. 15 No. 4, pp. 374-88.
Janda, S., Trocchia, P.J. and Gwinner, K.P. (2002), “Consumer perceptions of internet retail”, International Journal of Service Industry Management, Vol. 13 No. 5, pp. 412-31.
Jiang, J.J., Klein, G. and Crampton, S.M. (2000), “A note on SERVQUAL reliability and validity in information system service quality measurement”, Decision Sciences, Vol. 31 No. 3, pp. 725-44.
Kang, G.-D. and James, J.J. (2004), “Service quality dimensions: an examination of Grönroos’s service quality model”, Managing Service Quality, Vol. 14 No. 4, pp. 266-77.
Karatepe, O.M., Yavas, U. and Babakus, E. (2005), “Measuring service quality of banks: scale development and validation”,Journal of Retailing and Consumer Services, Vol. 12 No. 5, pp. 373-83.
Kettinger, W.L., Lee, C.C. and Lee, S. (1995), “Global measures of information service quality: a cross-national study”,Decision Sciences, Vol. 26 No. 5, pp. 569-88.
Khan, M. (2003), “ECOSERV: ecotourists quality expectations”,Annals of Tourism Research, Vol. 30 No. 1, pp. 109-24.
Kilbourne, W.E., Duffy, J.A., Duffy, M. and Giarchi, G. (2004), “The applicability of SERVQUAL in cross-national measurements of health-care quality”,Journal of Services Marketing, Vol. 18 Nos 6/7, pp. 524-33.
Lam, T.K.P. (2002), “Making sense of SERVQUAL’s dimensions to the Chinese customers in Macau”,Journal of Market-focused Management, Vol. 5 No. 10, pp. 43-58.
Lee, M. and Ulgado, F.M. (1997), “Consumer evaluations of fast-food services: a cross-national comparison”,Journal of Services Marketing, Vol. 11 No. 1, pp. 39-50.
Llosa, S., Chandon, J. and Orsingher, C. (1998), “An empirical study of SERVQUAL’s dimensionality”,The Service Industries Journal, Vol. 18 No. 1, pp. 16-44.
McAlexander, J.H., Kaldenberg, D.O. and Koenig, H.F. (1994), “Service quality measurement:
examination of dental practices sheds more light on the relationships between service quality, satisfaction, and purchase intentions in a health care setting”,Journal of Health Care Marketing, Vol. 14 No. 3, pp. 34-40.
Markovic, S. (2006), “Expected service quality measurement in tourism higher education”,Nase Gospodarstvo, Vol. 52 Nos 1/2, pp. 86-95.
Mattila, A.S. (1999), “The role of culture in the service evaluation processes”,Journal of Service Research, Vol. 1 No. 3, pp. 250-61.
Mentzer, J.T., Flint, D.J. and Kent, J.L. (1999), “Developing a logistics service quality scale”, Journal of Business Logistics, Vol. 20 No. 1, pp. 9-32.
Najjar, L. and Bishu, R.R. (2006), “Service quality: a case study of a bank”,Quality Management Journal, Vol. 13 No. 3, pp. 35-44.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1985), “A conceptual model of service quality and its implications for future research”,Journal of Marketing, Vol. 49 No. 4, pp. 41-50.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1988), “SERVQUAL: a multiple-item scale for measuring consumer perceptions of service quality”,Journal of Retailing, Vol. 64 No. 1, pp. 12-40.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1991), “Refinement and reassessment of the SERVQUAL scale”,Journal of Retailing, Vol. 67 No. 4, pp. 420-50.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1994), “Alternative scales for measuring service quality: A comparative assessment based on psychometric and diagnostic criteria”, Journal of Retailing, Vol. 70 No. 3, pp. 201-30.
Parasuraman, A., Zeithaml, V.A. and Malhotra, A. (2005), “E-S-QUAL: a multiple-item scale for assessing electronic service quality”,Journal of Service Research, Vol. 7 No. 3, pp. 213-33.
Peter, J.P., Churchill, G.A. Jr and Brown, T.J. (1993), “Caution in the use of difference scores in consumer research”,Journal of Consumer Research, Vol. 19, March, pp. 655-62.
Pitt, L.F., Watson, R.T. and Kavan, C. (1995), “Service quality: a measure of information systems effectiveness”,MIS Quarterly, Vol. 19 No. 2, pp. 173-87.
Raajpoot, N. (2004), “Reconceptualizing service encounter quality in a non-Western context”, Journal of Service Research, Vol. 7 No. 2, pp. 181-201.
Rust, R.T. and Oliver, R.L. (1994), “Service quality: insights and managerial implications from the frontier”, in Rust, R.T. and Oliver, R.L. (Eds), Service Quality: New Directions in Theory and Practice, Sage Publications, Thousand Oaks, CA, pp. 1-19.
Saleh, F. and Ryan, C. (1991), “Analysing service quality in the hospitality industry using the SERVQUAL model”, Service Industries Journal, Vol. 11 No. 3, pp. 324-43.
Segars, A.H. and Grover, V. (1993), “Re-examining perceived ease of use and usefulness: a confirmatory factor analysis”,MIS Quarterly, Vol. 17 No. 4, pp. 517-24.
Shemwell, D.J. and Yavas, U. (1999), “Measuring service quality in hospitals: scale development and managerial applications”,Journal of Marketing Theory and Practice, Vol. 7 No. 3, pp. 65-75.
Smith, A.M. (1999), “Some problems when adopting Churchill’s paradigm for the development of service quality measurement scales”, Journal of Business Research, Vol. 46 No. 2, pp. 109-20.
Stevens, P., Knutson, B. and Patton, M. (1995), “DINESERV: a tool for measuring service quality in restaurants”, Cornell Hotel and Restaurant Administration Quarterly, Vol. 36 No. 2, pp. 56-60.
Sower, V., Duffy, J.A., Kilbourne, W., Kohers, G. and Jones, P. (2001), “The dimensions of service quality for hospitals: development and use of the KQCAH scale”,Health Care Management Review, Vol. 26 No. 2, pp. 47-58.
Sureshchandar, G.S., Rajendran, C. and Anantharaman, R.N. (2002), “Determinants of customer-perceived service quality: a confirmatory factor analysis approach”,Journal of Services Marketing, Vol. 16 No. 1, pp. 9-34.
Teas, R.K. (1993), “Expectations, performance evaluation, and consumers’ perceptions of quality”,Journal of Marketing, Vol. 57 No. 4, pp. 18-34.
Teas, R.K. (1994), “Expectations as a comparison standard in measuring service quality: an assessment of a reassessment”,Journal of Marketing, Vol. 58 No. 1, pp. 132-9.
Tomes, A.E. and Ng, S.C.P. (1995), “Service quality in hospital care: the development of an in-patient questionnaire”,International Journal of Health Care Quality Assurance, Vol. 8 No. 3, pp. 25-33.
Vandamme, R. and Leunis, J. (1993), “Development of a multiple-item scale for measuring hospital service quality”,International Journal of Service Industry Management, Vol. 4 No. 3, pp. 30-49.
Van der Wal, R.W.E., Pampallis, A. and Bond, C. (2002), “Service quality in a cellular telecommunications company: a South African experience”,Managing Service Quality, Vol. 12 No. 5, pp. 323-35.
Van Dyke, T.P., Kappelman, L.A. and Prybutok, V.R. (1997), “Measuring information systems service quality: concerns on the use of the SERVQUAL questionnaire”,MIS Quarterly, Vol. 21 No. 2, pp. 195-208.
Van Dyke, T.P., Prybutok, V.R. and Kappelman, L.A. (1999), “Cautions on the use of the SERVQUAL measure to assess the quality of information systems services”,Decision Sciences, Vol. 30 No. 3, pp. 877-91.
Vaughan, L. and Shiu, E. (2001), “ARCHSECRET: a multi-item scale to measure service quality within the voluntary sector”, International Journal of Nonprofit and Voluntary Sector Marketing, Vol. 6 No. 2, pp. 131-44.
Walbridge, S.W. and Delene, L.M. (1993), “Measuring physician attitudes of service quality”, Journal of Health Care Marketing, Vol. 13 No. 1, pp. 7-15.