Copyright is owned by the Author of the thesis. Permission is given for a copy to be downloaded by an individual for the purpose of research and private study only. The thesis may not be reproduced elsewhere without the permission of the Author.

DEVELOPMENT OF A TOOL TO MEASURE SERVICE QUALITY, FROM THE PATIENTS' PERSPECTIVES, IN A NEW ZEALAND PUBLIC HOSPITAL. IS SERVQUAL THE ANSWER?

A thesis submitted to the Institute of Technology and Engineering, Massey University, in partial fulfilment of the requirements of the degree of Master of Philosophy.

Malcolm Rees
1999

Abstract

The measurement of service quality, using a two-part customer questionnaire called SERVQUAL, has been described in the literature by a number of authors. The model itself was developed from research conducted in the credit card, long-distance telephone, banking, and repair and maintenance service industries. The model utilises a technique called disconfirmation, which is a measure of the gap between similar components of the two questionnaires that the customers receive. For this project, it is a measure of the difference between the perception and the expectation of the service that elective surgical patients received at a provincial public hospital, i.e. Service Quality = perception - expectation.

The basis for this project has been the Tomes and Ng (1995) healthcare modification of the original SERVQUAL, which they used with some success in a public hospital in England. The service quality was measured over a number of aspects of the service called dimensions. In this case there are eight dimensions, namely: Understanding of Illness, Relationship of Mutual Respect, Dignity, Empathy, Physical Environment, Food, Religious Needs and Cultural Needs. No evidence could be found of the application of the technique within the health sector in New Zealand.
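The disconfirmation calculation can be sketched in code. The following is an illustrative Python fragment only: the dimension names come from this study, but the `gap_score` function, the ratings and the question counts are hypothetical, not survey data.

```python
# Illustrative sketch of SERVQUAL disconfirmation scoring
# (hypothetical ratings on a 1-7 Likert scale; not survey data).

# The eight dimensions used in the Tomes and Ng (1995) modification.
DIMENSIONS = [
    "Understanding of Illness", "Relationship of Mutual Respect",
    "Dignity", "Empathy", "Physical Environment", "Food",
    "Religious Needs", "Cultural Needs",
]

def gap_score(perceptions, expectations):
    """Mean disconfirmation for one dimension:
    Service Quality = perception - expectation, averaged over its questions.
    A negative value means the service fell short of expectations."""
    gaps = [p - e for p, e in zip(perceptions, expectations)]
    return sum(gaps) / len(gaps)

# One patient's answers to three hypothetical questions in one dimension.
expectation = [6, 7, 6]  # Part One of the questionnaire (the expectation)
perception = [5, 6, 6]   # Part Two of the questionnaire (the perception)
print(round(gap_score(perception, expectation), 2))  # prints -0.67
```

A score below zero flags a dimension where perceived service fell short of what patients expected, which is the signal the two-part questionnaire is designed to surface.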
This project has attempted to assess the usefulness of this disconfirmation-based technique, as a measure of service quality, from the patients' point of view, in a provincial hospital in New Zealand.

DECLARATION

I declare that this research report is my own, unaided work. It is being submitted in partial fulfilment of the requirements for the degree of Master of Philosophy at Massey University. It has not been submitted before for any degree or examination in any other University.

.................... (Name of candidate)

...... day of ............, 2000.

Acknowledgments

I wish to acknowledge my supervisor, Professor Don Barnes, Professor of Manufacturing and Quality Systems, for his advice and guidance throughout this project.

I wish to acknowledge my second supervisor, Don Houston, Lecturer, Quality Management Systems, for his advice and guidance throughout this project.

I wish to thank my family, Lorraine, Nick and Tim, for their support and patience whilst I have spent time undertaking this project.

I wish to thank my employers, the New Zealand Defence Force, for their pro-active support of my academic endeavours.

Table of Contents

1. INTRODUCTION
2. SIGNIFICANT PRIOR RESEARCH
2.1 HEALTH REFORMS IN NEW ZEALAND
2.2 WHAT IS QUALITY?
2.3 BACKGROUND TO QUALITY MANAGEMENT FRAMEWORKS WITHIN HEALTHCARE
2.3.1 Continuous Quality Improvement (CQI) within Healthcare
2.3.2 Total Quality Management (TQM) within Healthcare
2.3.3 Malcolm Baldrige Quality Programme for Healthcare 1999
2.3.4 Quality Systems in New Zealand Healthcare
2.3.5 Safety Standard in New Zealand
2.3.6 Healthcare Customers/Stakeholders
2.3.7 Healthcare Service Provision
2.3.8 Measurement of Technical Quality within Healthcare
2.3.9 Measurement of Functional Quality
2.4 SERVQUAL; THE BACKGROUND
2.4.1 Gap 1: Customers' Expectation - Management Perception Gap
2.4.2 Gap 2: Management Perception - Service Quality Specification Gap
2.4.3 Gap 3: Service Quality Specification - Service Delivery Gap
2.4.4 Gap 4: Service Delivery - External Communications Gap
2.4.5 Gap 5: Customers' Expectations - Customers' Perception Gap
2.4.6 The Disconfirmation
2.4.7 Importance of Dimensions
2.4.8 Customer Satisfaction
2.4.9 Statistical Validation
2.4.10 SERVQUAL in Service Industries
2.4.11 SERVQUAL in Healthcare
2.4.12 Tomes, A. and Ng, S. (1995) SERVQUAL Modification
2.5 SUMMATION OF LITERATURE SEARCH
3. THE RESEARCH QUESTION
4. RESEARCH METHODOLOGY
4.1 DEVELOPMENT OF THE QUESTIONNAIRE
4.2 PART ONE OF THE QUESTIONNAIRE; THE EXPECTATION
4.3 PART ONE OF THE QUESTIONNAIRE; THE IMPORTANCE
4.4 PART TWO OF THE QUESTIONNAIRE; THE PERCEPTION
4.5 SELECTION OF THE FOCUS ORGANISATION
4.6 ETHICS APPROVAL
4.7 TYPE OF SAMPLING TECHNIQUE
4.8 PRE-TESTING
4.9 CONDUCT SURVEY
4.10 MISSING DATA
4.11 RESPONSE RATES
4.12 STATISTICAL ANALYSIS
4.13 LIKERT SCALES
4.14 OPEN ENDED COMMENTS
4.15 METHODOLOGY SUMMARY
5. RESEARCH RESULTS
5.1 RESPONSE RATES
5.2 INTER-QUARTILE RANGE AS A MEASURE OF SPREAD FROM THE MEAN
5.3 MISSING DATA
5.4 QUESTION ORDER EFFECTS
5.5 RESPONSE ORDER EFFECTS
5.6 SCALE ANCHORING
5.7 THE SERVQUAL RESULTS
5.7.1 Service Quality Dimensions Ranked by Importance
5.7.2 Weighted Mean Scores
5.7.3 Understanding of Illness Service Quality Dimension
5.7.4 Relationship of Mutual Respect Service Quality Dimension
5.7.5 Dignity Service Quality Dimension
5.7.6 Physical Environment Service Quality Dimension
5.7.7 Empathy Service Quality Dimension
5.7.8 Tangibles, Food Service Quality Dimension
5.7.9 Cultural Needs Service Quality Dimension
5.7.10 Religious Needs Service Quality Dimension
5.8 FACTOR ANALYSIS
5.9 DEMOGRAPHIC RESULTS
5.9.1 Gender by Age Distribution
5.9.2 Gender by Age Importance Ratings
5.9.3 Gender by Ethnic Identity and Length of Stay in Hospital
5.9.4 Ethnic Responses
5.9.5 No Response Bias
5.10 RESULTS BY LENGTH OF STAY
5.10.1 Don't Know Responses
5.10.2 Responses by Length of Stay and Service Quality Dimension
5.11 SUMMARY OF THE RESULTS
6. DISCUSSION OF RESULTS
6.1 SERVQUAL DISCUSSION
6.1.1 Closing Gap 5 Approach
6.1.2 Resolving Quality Issues by Individual Question
6.2 STATISTICAL APPROACH
6.2.1 Closed Questions Versus Open Questions
6.3 DEMOGRAPHIC RESULTS
6.3.1 Age Differences by Gender
6.3.2 Importance Ratings Higher for Females than for Males
6.3.3 Cultural Concerns
6.3.4 Results by Length of Stay in Hospital
6.4 QUESTIONNAIRE DESIGN
6.4.1 Questionnaire Bias
6.4.2 Response Rate
6.4.3 Selection of the Sample Group
6.4.4 Customers as Work In Progress?
7. CONCLUSIONS
7.1 SERVQUAL CONCLUSIONS
7.1.1 Service Dimension Importance Conclusions
7.1.2 Service Dimension Gap Conclusions
7.1.3 Weighted Mean Scores
7.2 STATISTICAL APPROACH
7.3 OPEN QUESTIONS
7.4 DEMOGRAPHIC RESULTS
7.4.1 Results by Gender
7.4.2 Importance Ratings Higher for Females than for Males
7.4.3 Results of Ethnic Identity
7.4.4 Results by Length of Stay in Hospital
7.5 QUESTIONNAIRE DESIGN
7.5.1 Questionnaire Bias
7.6 THE SAMPLE GROUP
7.7 PATIENT, HOSPITAL, HEALTH FUNDING AUTHORITY RELATIONSHIP
7.7.1 The Value of SERVQUAL
8. RECOMMENDATIONS
8.1 LONGITUDINAL APPLICATION OF THE SURVEY TOOL
8.2 THE APPROACH TO QUALITY IMPROVEMENT FROM THE QUESTIONNAIRE
8.2.1 The Approach to Quality Improvement from the SERVQUAL Responses
8.2.2 The Approach to Quality Improvement from the Weighted Mean Responses
8.2.3 The Approach to Quality Improvement from the Open Ended Responses
8.3 CONSTRUCT VALIDITY
8.4 DEMOGRAPHIC INFORMATION
8.5 QUESTIONNAIRE RE-DEVELOPMENT
8.6 PATIENT SATISFACTION APPROACHES
8.7 PATIENT, HEALTH FUNDING AUTHORITY, HEALTH PROVIDER RELATIONSHIP
8.8 RECOMMENDATION SUMMARY
9. REFERENCES
10. BIBLIOGRAPHY
10.1 QUESTIONNAIRE DESIGN AND ANALYSIS
10.2 RESEARCH METHODS
10.3 COMPUTING
10.4 THESIS FORMATTING
10.5 STATISTICS
11. APPENDICES
11.1 APPENDIX 1 CUSTOMER SERVICE QUESTIONNAIRE PART 1
11.2 APPENDIX 2 CUSTOMER SERVICE QUESTIONNAIRE PART 2
11.3 APPENDIX 3 QUESTIONNAIRE INFORMATION SHEETS
11.3.1 Handout Number One sent out with Questionnaire Part One
11.3.2 Handout Number Two sent out with Questionnaire Part Two
11.3.3 Handout Number Three: Reminder for Part Two
11.3.4 Demographic Section of the Questionnaire
11.4 APPENDIX 4 ETHICS APPROVALS
11.4.1 Massey University Ethics Committee
11.4.2 Approval from The Manawatu/Wanganui Ethics Committee
11.5 APPENDIX 5 FACTOR ANALYSIS

List of Figures

FIGURE 2-1 CREDENCE QUALITY CONTINUUM
FIGURE 2-2 MAJOR THEORETICAL PERSPECTIVES ON QUALITY
FIGURE 2-3 THE HYPOTHETICAL STRUCTURE OF SUPPORT SERVICES
FIGURE 2-4 A CONCEPTUAL MODEL FOR SERVICE QUALITY
FIGURE 2-5 CUSTOMER SATISFACTION MODEL
FIGURE 2-6 THE CHRONOLOGICAL DEVELOPMENT OF A HEALTHCARE BASED SERVQUAL MODEL
FIGURE 4-1 RESPONSE RATE CALCULATION
FIGURE 5-1 GAP SCORES WHERE INTER-QUARTILE RANGE WAS THREE OR GREATER
FIGURE 5-2 BEFORE RESPONSES FOR QUESTIONS WITH ZERO INTER-QUARTILE RANGES
FIGURE 5-3 QUESTIONS 20 AND 30 SHOWING THE MISSING DATA
FIGURE 5-4 DISTRIBUTION OF BEFORE SCORES FOR QUESTIONS 20 AND 30
FIGURE 5-5 POSSIBLE QUESTION ORDER EFFECTS
FIGURE 5-6 MEAN OF DIMENSIONS RANKED IN DESCENDING ORDER OF IMPORTANCE
FIGURE 5-7 WEIGHTED MEAN FOR QUESTIONS WITH POSITIVE SCORES
FIGURE 5-8 WEIGHTED MEAN FOR QUESTIONS WITH NEGATIVE SCORES
FIGURE 5-9 MEAN IMPORTANCE AND SERVQUAL SCORE FOR DIMENSION 1: UNDERSTANDING OF ILLNESS
FIGURE 5-10 MEAN IMPORTANCE AND SERVQUAL SCORE FOR DIMENSION 2: RELATIONSHIP OF MUTUAL RESPECT
FIGURE 5-11 MEAN IMPORTANCE AND SERVQUAL SCORE FOR DIMENSION 3: DIGNITY
FIGURE 5-12 MEAN IMPORTANCE AND SERVQUAL SCORE FOR DIMENSION 4: PHYSICAL ENVIRONMENT
FIGURE 5-13 MEAN IMPORTANCE AND SERVQUAL SCORE FOR DIMENSION 5: EMPATHY
FIGURE 5-14 MEAN IMPORTANCE AND SERVQUAL SCORE FOR DIMENSION 6: TANGIBLES; FOOD
FIGURE 5-15 FOOD DIMENSION BY GENDER FOR AGE UNDER 60 YEARS
FIGURE 5-16 FOOD DIMENSION BY GENDER FOR AGE OVER 60 YEARS
FIGURE 5-17 MEAN IMPORTANCE AND SERVQUAL SCORE OF DIMENSION 7: CULTURAL ASPECTS
FIGURE 5-18 MEAN IMPORTANCE AND SERVQUAL SCORE FOR DIMENSION 8: RELIGIOUS NEEDS
FIGURE 5-19 FREQUENCY OF MALE AND FEMALE RESPONDENTS BY AGE
FIGURE 5-20 QUESTIONS BY GENDER WITH LARGE DIFFERENCES IN IMPORTANCE
FIGURE 5-21 LARGE IMPORTANCE GAP BY GENDER FOR AGE UNDER 60 YEARS
FIGURE 5-22 LARGE IMPORTANCE GAP BY GENDER FOR AGE OVER 60 YEARS
FIGURE 5-23 SERVQUAL SCORES WHERE IMPORTANCE RATING WAS DIFFERENT
FIGURE 5-24 LENGTH OF STAY FOR TOTAL SAMPLE FRAME
FIGURE 5-25 EMPATHY DIMENSION BY LENGTH OF STAY
FIGURE 5-26 MUTUAL RESPECT DIMENSION BY LENGTH OF STAY
FIGURE 5-27 UNDERSTANDING OF ILLNESS DIMENSION BY LENGTH OF STAY
FIGURE 5-28 DIGNITY DIMENSION BY LENGTH OF STAY
FIGURE 5-29 RELIGIOUS DIMENSION BY LENGTH OF STAY
FIGURE 5-30 TANGIBLES, FOOD BY LENGTH OF STAY
FIGURE 5-31 PHYSICAL ENVIRONMENT DIMENSION BY LENGTH OF STAY
FIGURE 5-32 CULTURAL DIMENSION BY LENGTH OF STAY

List of Tables

TABLE 1 UNDERSTANDING OF ILLNESS QUESTIONS
TABLE 2 MUTUAL RESPECT QUESTIONS
TABLE 3 DIGNITY QUESTIONS
TABLE 4 PHYSICAL ENVIRONMENT QUESTIONS
TABLE 5 EMPATHY QUESTIONS
TABLE 6 TANGIBLES, FOOD QUESTIONS
TABLE 7 CULTURAL ASPECTS QUESTIONS
TABLE 8 RELIGIOUS NEEDS QUESTIONS
TABLE 9 GENDER BY AGE DISTRIBUTION
TABLE 10 QUESTIONS WITH IMPORTANCE GENDER DIFFERENCES
TABLE 11 GENDER BY LENGTH OF STAY AND ETHNIC IDENTITY
TABLE 12 MAORI RESPONSES TO THE ETHNIC QUESTIONS
TABLE 13 NO RESPONSE QUESTIONS FROM DAY STAY PATIENTS
TABLE 14 EMPATHY QUESTIONS
TABLE 15 MUTUAL RESPECT QUESTIONS
TABLE 16 UNDERSTANDING OF ILLNESS QUESTIONS
TABLE 17 DIGNITY QUESTIONS
TABLE 18 RELIGIOUS QUESTIONS
TABLE 19 TANGIBLES; FOOD QUESTIONS
TABLE 20 PHYSICAL ENVIRONMENT QUESTIONS
TABLE 21 CULTURAL QUESTIONS

1. INTRODUCTION

The need for change within the healthcare sector in New Zealand has resulted in a radical health reform process over the last twenty years. Despite variations in the mode of health care delivery worldwide, there seem to be three constraints common to health care systems in developed countries, namely: an aging population, rapid advances in medical technology and greater expectations from patients. While the public may expect a comprehensive service, the government in this country has signalled that we can no longer afford the luxury of an unlimited supply of health care resources (Blank, 1994). There have been a number of permutations to the various organisations during the restructuring process, such as Regional Health Authorities (RHAs) and Crown Health Enterprises (CHEs), which were established to separate the funding from the supply of healthcare.
This was a significant development designed to apply a degree of commercialism and competition to healthcare in an attempt to cap expenditure. One aspect that has not received much focus during this reform has been the area of quality management, where quality systems and associated policy formulation seem to be arranged on a purely ad hoc basis. Patient-acquired quality information is almost non-existent.

In a review of the theoretical perspectives of quality, Walbridge and Delene (1993) separated quality into two sections: firstly, what is provided and, secondly, how it is provided. In the case of service quality in a hospital, these could be described as technical quality and functional quality. This project focuses on the measurement of the functional aspect of service quality within a public hospital.

The technique used is called SERVQUAL, which measures, by disconfirmation, the gap between expectation and perception of the service provided, by using customer questionnaires, thereby providing a customer or patient-focused approach to the measurement of this aspect of service quality. A number of modifications to the original SERVQUAL model proposed by Parasuraman, Zeithaml and Berry (1990) have been undertaken for service industries. However, very few have been developed for the healthcare sector. No evidence could be found in the literature for the application of the model within the healthcare sector in this country. One modification that did seem applicable was developed by Tomes and Ng (1995) for the National Health Service (NHS) in England. That project described a SERVQUAL tool with eight quality dimensions, six of which relate to intangible aspects of care, and two of which relate to tangible aspects of care. This project has attempted to ascertain the effectiveness of the Tomes and Ng (1995) SERVQUAL modification for the measurement of service quality in a public hospital in New Zealand.

2. SIGNIFICANT PRIOR RESEARCH

2.1 Health Reforms in New Zealand

Health reforms in New Zealand began to gain momentum in the 1970s with the Labour Party's White Paper on health, which introduced the concept of the need for the rationing of health services. Further development was slow until the 1980s, when expenditure on health in New Zealand rose from 5.1 to 7.2% of Gross Domestic Product (GDP). Momentum began to increase in 1983, when the Area Health Boards Act decentralised health management away from central Government and the Department of Health, with the result that, by 1989, 14 Area Health Boards had been established. During this time population-based funding was introduced, the most significant change to hospital funding since free healthcare was introduced in 1938. This was introduced by the government in an attempt to cap hospital funding.

Continued inequity in health service distribution and lengthening waiting lists culminated in two significant reviews: the 1986 Health Benefits Review, which provided the government with options for delivery, and, in 1988, the Hospital and Related Services Task Force Report, commonly known as the "Gibbs Report". The Gibbs Report recommended the separation of the provision and funding of health by the establishment of six Regional Health Authorities (RHAs), which would buy health services from various health providers for a specific geographical area. Funding was to be based on a costing device called a Diagnostic Related Group (DRG), which established an average cost for each medical event. The shift was towards competition, and provision of core services only. The Labour government of the day rejected the findings of the Gibbs Report; however, they were accepted by a subsequent National government and became the cornerstone of a significant change to healthcare provision in this country.

In 1993 four RHAs were created to fund health and disability services.
AHBs were abolished and 23 Crown Health Enterprises (CHEs) were established to provide health services in a conventional business-like manner. Other health providers were given the opportunity to bid to provide services, thus introducing competition between providers. A Public Health Commission was established to monitor the progress and the state of the system and report to the Ministry of Health (Blank, 1994). The Crown Company Monitoring Advisory Unit (CCMAU), an arm of Treasury, was established to monitor the performance of the CHEs (MoH, 1997). A key policy statement in 1995 titled "Advancing Health in New Zealand", from the then Minister of Health, Hon Jenny Shipley, outlined three goals for the healthcare delivery strategy for the next 10 years: "To improve the health of people in New Zealand. To put people at the centre of service delivery. To get the greatest amount of health and disability support services for the dollars available." (MoH, 1995, pp. 8-9). To support this focus, New Zealand Health Informatics Services (NZHIS) was established to provide timely and meaningful information to measure the achievements, by compiling customer service information and health and disability information to ensure that the goals were being achieved (MoH, 1996). The Coalition Government elected in 1996 outlined changes to the strategies previously undertaken, re-focusing on the family and disadvantaged groups by developing the following strategy: "Family health teams [should be established] for the delivery of some primary health care services by CHEs. Increased emphasis on Mental health, Maori health and Child health to improve health status and reduce disparities in health status.
" (MoH, 1997, p. 4). The 1996 Coalition Agreement between the National Party and New Zealand First also saw a softening of the commercial approach, with the amalgamation of the four RHAs into one Health Funding Authority (HFA), which came into being in January 1998, and the removal of the profit motive for publicly-owned hospitals (MoH, 1997). CHEs were replaced by Hospital and Health Services (HHSs). Throughout this investigation of the health provision strategy, no one document was found which covers all aspects of health care delivery and policy. This reflects the complexity of health provision and the need to integrate a large number of stakeholders. What started as a revolution in health reform appears to have gradually mellowed into an evolution of health funding, rationing, policy and systems development, as bureaucrats have come to realise the complex, emotional and individual nature of health care provision. A central theme remains, however, that people are still at the centre of the delivery process - but, at the same time, this has to be balanced against diminishing financial resources - rationing is here to stay. A theme that is not well developed within this reform process is the area of quality management systems and where these sit in relation to the other reform processes being undertaken. Healthcare performance indicators are still primarily financial (MoH, 1998) or clinical (from the New Zealand Health Information Service). A sad reflection of the effect of the changes in the system, without a concurrent focus on quality of service, has been the unnecessary loss of life at Christchurch Hospital. The subsequent report from the Health and Disability Commissioner cites inadequacies in many areas within this "reformed" system (Health and Disability Commissioner, 1998).
The reforms appear to have centred purely on cost containment, with little attention being given to quality, demand for service, customers, or the measurement of satisfaction with the service. Reduction in variation appears to have been an abstract concept. This is perhaps attributable to the imbalance in the model, which has focused on short-term fiscal goals at the expense of long-term strategies such as quality of care. It is debatable whether or not the current strategy by itself will be effective without concurrent quality improvement initiatives. The patients' role in providing information about the reform process or their experiences from within the system is not well defined. Little customer/patient or customer service quality information is collected. If the healthcare system exists for this group, then a greater consideration of their needs is required. A worthwhile policy platform against which healthcare resources could be designed, and which also included aspects of quality management, came from Brook (1997), who said that: "Everyone should receive all necessary health care provided that it is within the financial capacity of the country to pay. Variation should be reduced across three dimensions: appropriateness, excellence and humaneness. Care should be provided efficiently" (Brook, 1997, p. 1614). This is a worthwhile approach that should include both quality management systems and associated measurement processes to reduce variation and improve efficiency. This project develops a customer focus by providing an option for information collection that is derived from the patients, thereby addressing one aspect that is currently not well covered.

2.2 What is Quality?

What, then, is quality and how does it relate to the healthcare sector? A number of definitions exist within the literature.
The one chosen here describes quality in the following manner: "Quality is consistently meeting the continuously negotiated expectations of customers and other stakeholders in a way that represents value for all involved" (Kruithof and Ryall, 1994, p. 20). Whilst the definition is sound, it raises a number of issues for healthcare, because healthcare seems to be structured so that it meets the needs of stakeholders other than the customer (the patient). Complicating this argument is the highly technical nature of the service, which involves a number of stakeholders, with patients often being treated as 'work in progress' because most aspects of the service are apparently too technical. This raises two questions. Firstly, who are the primary customers or stakeholders? Are they the patients, or are they the other stakeholders, or even perhaps the funders? Secondly, what is being provided? Are we consistently providing the negotiated needs of the most important customer group? We are not currently canvassing much in the way of patient-based healthcare information.

2.3 Background to Quality Management Frameworks within Healthcare

Quality Management Systems (QMS) within healthcare come under the umbrella of the International Society for Quality in Health Care (ISQua). They have improved the accountability of healthcare standards by convergence with the International Organization for Standardization's (ISO) ISO 9000 series of standards, thereby reducing both confusion and differences in terminology (Shaw, 1997). Despite the existence of ISQua, many of the QMS that currently operate within the healthcare industry seem to have developed largely out of the systems approach taken by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) in the United States of America. Over the last 50 years they have developed Quality Management Systems for a number of healthcare professional groups.
Currently, JCAHO accredits more than 18,000 healthcare organisations in the USA (JCAHO, 1999a). In some instances accreditation is linked to funding levels and access, but mostly accreditation is voluntary. A recent development in the American model has seen the need for performance indicators, i.e. actual quantitative measures to ascertain whether the necessary quality standard is being maintained. However, such measurement tools are yet to be implemented (JCAHO, 1999b). Godfrey and Halder (1997) have taken the view that the ISO standards are not suitable standards on which to base healthcare quality, because they are product-based, with few relevant outcome measurements for healthcare. Furthermore, they also describe the difficulty in standardising medical practice, and therefore the difficulty in defining a written set of criteria across the healthcare sector. In addition, they note that the skills of the auditors involved with the accreditation are questionable. They would prefer to see a new healthcare approach comprising a mix of the ISO system and a company-wide quality management system, the Malcolm Baldrige Quality Award system. Along with these, they also see the need for identifiable clinical, quality and patient satisfaction measures, followed by benchmarking in order to identify world-class results. The systems approaches appear to be either technical and global, with criteria focused primarily on critical events such as infection rates, morbidity and mortality, or financial in content, such as cost per patient, bed rates and the effect of lengths of stay on budget. The feature lacking from the existing quality systems approaches is a focus on the customer. Two specific quality systems, namely Continuous Quality Improvement and Total Quality Management, are presented separately, as they are described in the literature. It could be argued, however, that they are in fact describing the same quality management system.
It depends on one's definition of CQI and TQM.

2.3.1 Continuous Quality Improvement (CQI) within Healthcare

CQI is used by many hospitals as a quality strategy. It requires the description, measurement and constant improvement of key processes to meet customer needs more effectively. Chan and Ho (1997) analysed the application of CQI in both American and Canadian hospitals. Of the 3300 hospitals canvassed in the USA, 70 percent had implemented CQI in some form or other. However, they do comment that interest in CQI has declined. One of the reasons given for this was that many of the decisions within a hospital are made by doctors, who are not under the control of those trying to implement CQI. The lack of control over this group makes the improvement process subject to personal wishes and/or political influences. Despite these doubts, 80% of the hospitals canvassed reported that CQI was beneficial. This point raises the complexity of healthcare organisations, with dominant technical stakeholders who are influential, yet who are usually not familiar with QMS concepts. Two recent high-profile cases in New Zealand, namely the inaccurate reading of large numbers of cytology smears and the inaccurate diagnosis of breast cancer (which resulted in an unnecessary mastectomy), support the view that the culture within the health sector in this country can be blasé towards quality. Despite having quality management systems in place for both laboratories and hospitals generally, critical mistakes do still occur. Nerenz (1997) cites the need for change to the current organisational culture as a whole, rather than to discrete individual processes, if CQI is to succeed. Maguerez (1997) describes the use of a stepwise CQI process in French hospitals, and notes that not all first attempts at CQI will be successful. However, this should not detract from the overall value of the technique.
Goldman (1997) suggests that, if you want to make CQI successful, you have to get the processes within the systems right. This will involve some very clear descriptions of: the intentions of the methods to be used, what is going to be measured, and, finally, how to decide when the desired result has been achieved. Fields and Siroky (1994) describe an example of the use of two quality improvement tools within the philosophy of CQI: firstly, control charts to measure common and special variation and, secondly, Pareto analysis to describe and rank important effects. Heckman et al. (1998) used QI tools effectively to reduce error rates in cord blood sampling. CQI exists on various levels within healthcare. Ultimately, the success of these initiatives relies on management commitment and the ability to introduce and support them. CQI also implies that there are some measurement tools in action to provide control mechanisms and proof of improvements. While technical tools may be operating, customer-focused tools are rare. Within healthcare there are two other important stakeholder groups who are worth bringing into the QI equation: a. The medical staff, who are not likely to be very familiar with matters relating to quality management. b. The patients, who are the prime reason for the existence of the healthcare organisation. The inclusion of this latter group is difficult for healthcare organisations because of the technical nature of the service provided. Patients have difficulty in assessing the quality of technical procedures such as surgery.

2.3.2 Total Quality Management (TQM) within Healthcare

TQM is a systems approach to quality described by many authors. It is difficult to define exactly what it means, but its popularity warrants discussion.
Motwani, Sower and Brashier (1996) refer to the definition of TQM used by JCAHO, which describes the need for a company-wide structured system, all working together to plan and implement CQI in work systems and work processes. Swinehart and Green (1995) support the link between CQI and TQM as a means of eliminating waste, with TQM as a means to improve quality. Omachonu (1990) refers to TQM as Total Service Quality Management (TQSM), and notes that, in order to move towards this goal, there is a need to assess both the visible and invisible aspects of the service. Zabada et al. (1998) describe a number of reasons why TQM is difficult to implement within healthcare. These include the powerful subcultures that exist within healthcare, particularly within the physician group, who measure quality mainly in terms of technical expertise, within a hierarchical management structure which still resists employee empowerment. An overriding theme is the need to move away from the 'product out' philosophy, which places the emphasis on the medical condition, to a 'market in' philosophy, which places the focus on the person. This empowers people to be involved in their own medical treatment, with a team philosophy regarding the quality of the service provided. The link between CQI and TQM is important. These two should not be considered in isolation; in fact, CQI should operate within an organisation which is committed to the TQM philosophy. This requires a thorough knowledge of the tools, both technical and patient-focused, for the effective application of the CQI process.

2.3.3 Malcolm Baldrige Quality Programme for Healthcare 1999

The Malcolm Baldrige Quality Award has been in existence in industry for many years. This is a substantial quality management system, originating in the USA, that rates quality against specific criteria by obtaining a numerical score from a third-party auditor.
The total numerical score is obtained and compared with those of others applying for the award. The programme requires a significant commitment from the organisation involved. A Baldrige programme for the health sector commenced in 1999 for the first time. Of particular interest within the new health criteria is a patient satisfaction component, which accounts for approximately 20% of the overall excellence measure. It is possible that the results from this research project, if integrated into such a programme, could make up a significant portion of that component (National Institute of Standards and Technology: Malcolm Baldrige Healthcare Quality Criteria, 1999).

2.3.4 Quality Systems in New Zealand Healthcare

Despite the Government's apparent lack of interest in quality management, they did signal some interest in QMS with the establishment of the New Zealand Council for Healthcare Standards (NZCHS) in the late 1990s (NZCHS, 1994). This self-funded group was established, with the assistance of the RHAs, to develop two quality management systems: one for acute services and the other for disability services. These standards were modelled on the JCAHO system (NZCHS, 1994). Currently, 132 different healthcare providers, from large hospitals to small private rest homes, are voluntarily accredited against this standard. Despite the existence of a quality standard, no government directive demanding accreditation has ever been made. Despite the apparent success of a systems approach to quality in this country, recent contact with NZCHS suggests that quality thinking within healthcare here is still in a state of flux, with the emphasis still on price and volume contracting. Measurement criteria, in particular, are still focused towards these outputs and not towards quality measurement (B. Donaldson, NZCHS, personal communication, 15 March 1999).
2.3.5 Safety Standard in New Zealand

The government has stated its intention to introduce a safety standard for healthcare organisations (MoH, 1995). This was to be introduced to provide both the customers and the funders with an assurance that safe work practices were in place. Unfortunately, this new standard appears to be in addition to the existing quality standard, rather than an amalgamation of the two. Perhaps a convergence of the two standards will see QMS in healthcare become acceptable in this country sometime in the near future. Alternatively, it is feasible that the safety standard could supersede the NZCHS quality standard, because providers will be reluctant to spend the money required for accreditation against both. The optional NZCHS standard may be the one dropped.

2.3.6 Healthcare Customers/Stakeholders

There are a number of customer or stakeholder groups within healthcare. The preferred way to describe patients these days is as patients and not as customers - thus there has been a softening of the commercial connotation previously in vogue. Other stakeholder groups are worth considering as well.

2.3.6.1 External Customers

External customers are those who are impacted by the product (or, in our case, the service) but who are not part of the organisation (Juran & Gryna, 1988). As far as the Ministry of Health is concerned, people are the centre of the service delivery (MoH, 1995). In other words, the external customers are the patients, and they should be at the centre of the service delivery. The management of the particular hospital chosen for this research project also state, as part of their business plan, that patients are their primary customer. The impression gained during this project is that not every provider has the same view. It is interesting, within healthcare, to reflect on how little influence the primary customers (patients) have over the provision of the service.
Furthermore, few apparent attempts are made to ensure that the service meets their negotiated expectations.

2.3.6.2 Internal Customers

These are stakeholders on the inside of the organisation who are actually customers of the organisation (Juran & Gryna, 1988). Examples within the health sector would include such services as the laboratory and the X-ray department, both of which function to support the medical staff rather than the patients directly. Normally this group would be relatively insignificant; however, because of the technical dominance that healthcare professionals have over the patients, this is not the case. The health professionals and hospital management groups can be a significant lobby group within a healthcare organisation, to the extent that it is often presumed that they know what is best for the external customers. Missing from this assumption is that the internal customers should be working towards supporting the expectations of the primary customers, namely the patients.

2.3.6.3 Other Stakeholders

Within healthcare there are groups other than the institutional or primary health care providers who have roles to play in the provision of the care. Examples include the Health and Disability Commissioner, whose role is to act as an advocate for patients if they consider that they have been inadequately or inappropriately treated by a hospital; the HFA, the funding organisation, whose role is to ensure that the Government's health goals are being met; and CCMAU, who monitor the HHSs on behalf of Treasury. It could be argued that the Government is currently the customer. It does, after all, pay for the service; however, in placing the Government in this position, the patient is relegated to something less significant, resembling more the 'work in progress'.
A feature of this project is to change this perception and to place the patient in the position of the primary customer; from this angle, an attempt will be made to measure the quality of the service provided by the HHSs. There are a large number of stakeholders in this very complex service. Often the patients are the least informed of all these groups. This has led to the situation where the patient is often the least considered, which is not in line with the Government's intention of how healthcare should be provided. Clearly there needs to be a change in this attitude, by considering patients' wishes and needs to a greater extent.

2.3.7 Healthcare Service Provision

The provision of a service has been described as: "An activity ... of an intangible nature, that takes place between the customer ... and systems of the service provider, which are provided as solutions to a customer's problem" (Gronroos, 1990, p. 25, as cited in McIlroy, A. (1996), 26-377 Massey Study Guide (pp. 23-27). Palmerston North: Massey University, Department of Business Studies). When considering the intangible and perishable nature of services, it follows that the techniques used to measure service quality also need to be a little different from those traditionally used to measure quality in a production situation. Zeithaml and Bitner (1995), who refer to service quality generally, introduce the idea of 'credence qualities' (p. 59) within services. These credence qualities dominate many services where professionals are providing a service which customers find difficult to judge because they have insufficient knowledge or experience. They are, therefore, left to trust the service provider without having a real understanding of the actual service being provided. On the continuum of credence values seen in Figure 2-1, medical diagnosis is high in this respect.

Figure 2-1 Credence Quality Continuum. [A continuum running from most goods, which are easy to evaluate (high in search qualities), through offerings high in experience qualities, to most services, which are high in credence qualities and difficult to evaluate.] (Adapted from Zeithaml and Bitner, 1995, p. 58)
The impact of this is that patients possibly evaluate quality differently from the way that service providers evaluate quality, and differently from the way they would evaluate quality if the dimensions were tangible and easily quantifiable, as would be the case for goods with high search qualities.

Figure 2-2 Major Theoretical Perspectives on Quality

  Dimension of quality          What is provided     How it is provided
  Lehtinen & Lehtinen (1982)    Physical Quality     Interactive Quality
  Gronroos (1983)               Technical Quality    Functional Quality
  Berry et al. (1985)           Outcome              Process
  Both dimensions lead to PERCEIVED SERVICE QUALITY.
  (Walbridge and Delene, 1993, p. 8)

In the review of the perspectives of quality shown in Figure 2-2, the service has been separated into two components by a number of authors. The approach that best describes the healthcare sector separates quality into the technical (what was provided) and the functional (how it was provided) (Gronroos, 1983, cited in Walbridge and Delene, 1993). Omachonu (1990) also separated service quality into two components: firstly, quality in fact, which is measured by conformance to a specification and audited by a third party; secondly, quality in perception, which is measured by the customers' experiences of the service quality. In summary, then, the intangible service, which exhibits a high level of credence quality, is separated into two components, one highly technical and the other functional. The outcomes from the measurement of quality in this situation may be very different from those of simple service organisations. What is being provided, to whom is it being provided, and how is it measured?
These are all questions upon which healthcare providers should continually reflect. The position taken for this research has been that the patient is the primary customer, who receives the service, and the government is still the primary funder of public healthcare, who pays for the service. The service provided is segmented into a technical aspect, for which specific measurement tools exist (MoH, 1996), and a functional aspect, the measurement tools for which are encountered infrequently within healthcare, but which are similar to those used to measure service quality in other service organisations. In this case, however, it is recognised that this tool is measuring only one aspect of the service activity, namely the patient support services, and not the entire service as described hypothetically in Figure 2-3. The primary goal of the research will be to assess the effectiveness of one particular service quality measurement tool, namely SERVQUAL, in measuring the functional service quality aspect, which in this case has been called the Patient Support Services component of the total health service delivery.

Figure 2-3 The Hypothetical Structure of Support Services. [Three overlapping components of the total health service delivery: Patient Support Services, the Actual Medical Operation, and Medical Support Services.]

2.3.8 Measurement of Technical Quality within Healthcare

Measurement of the technical processes within a hospital takes several forms. One is the measurement of the individual professional's ability to undertake the technical processes.
This measurement has relied largely on the maintenance of clinical and professional standards, through regulations and ethical codes, by peer review from professional organisations such as the Medical Association (Medical Practitioners Regulations, 1995) and the New Zealand Nurses Organisation (NZNO, 1993). Clinical standards are measured using a range of traditional reporting processes, such as adverse event reporting (Walshe, Bennett, and Ingram, 1995). Silber et al. (1997) suggest that the use of the traditional measure of death rate is not appropriate, and that three other tools may be more useful: adjusted mortality, in-patient acquired complications, and death following complication. Another recent addition to the measurement of technical skills is ongoing proficiency testing. This new theme is currently being piloted for Medical Scientists in New Zealand by the Medical Laboratory Technologist Board (1995), and a position paper for proficiency testing of nurses has been released for comment by the Nursing Council of New Zealand (NZNC, 1996). This trend requires individual professionals to prove, to their respective professional bodies, that they have maintained a minimum level of professional competency. Another technique for measuring technical quality is third-party audit of the systems, followed by accreditation against a quality standard. As described previously, JCAHO are responsible for the accreditation of hospitals in the USA. NZCHS uses a standard based on the JCAHO standard to accredit hospitals in this country. NZCHS describes quality in relation to patient care and outcomes, including the reduced probability of undesirable outcomes. Other ancillary diagnostic departments within the health sector undergo accreditation processes specific to their disciplines, e.g. laboratory accreditation in accordance with ISO Guide 25, which began in this country in 1977 (long before hospital accreditation was considered).
More recently, pharmacy and radiography have also been included in the process, being accredited against standards specific to their professions. In 1998 almost all medical laboratories in New Zealand were voluntarily accredited against ISO Guide 25. The mission statement from NZCHS (NZCHS, n.d.) does take into account the customer's expectations. However, the gradual development of systems approaches to quality, and the inclusion of customer requirements within these approaches, does suggest a gradual move in the direction of service quality as seen from the customer's point of view. The unusual aspect of the technical measurement has been the lack of input from the customer, who remains largely excluded from this component of the service. This should not negate an obligation on the part of healthcare professionals to include the patients in decision making and then to accept their views. Historically, this has not been the case. The assumption has always been that the 'doctor knows best'. Projects such as this begin to question this position, and perhaps it is now appropriate to include patients much more closely in the decision-making process regarding their healthcare.

2.3.9 Measurement of Functional Quality

As explained previously, functional quality relates to how the actual process or service is provided. Although it has been recognised that functional quality and service quality are different, the tools used to measure the two are similar. Ford, Bach and Fottler (1997) separate the measurement methods into: a. Qualitative: e.g. management observations, employee feedback, quality circles and focus groups; and b. Quantitative: e.g. comment cards, patient surveys, telephone surveys and mystery shoppers.

2.3.9.1 Qualitative Service Quality Measurement

An example of the qualitative approach comes from Rantz et al.
(1998), who utilised focus groups to develop a model for the assessment of nursing home quality, and identified a number of service quality dimensions, namely: resident focus, care process, recreation activities, staff, facilities, dietary, and community ties.

2.3.9.2 Quantitative Service Quality Measurement

Quantitative measurement is based largely on customer survey questionnaires. Two types of survey model are used in the healthcare sector for quantitative analysis: 'perception' and 'disconfirmation' questionnaires.

2.3.9.2.1 Perception Surveys

A perception measurement tool called "Patient Perception of Care" (p. 41) was developed by Casarreal, Mills and Plant (1986). This was installed in a multi-hospital organisation in an attempt to maintain uniform quality throughout the organisation. The complete questionnaire was not presented in the literature; however, statistical analysis in the form of Coefficient Alpha calculations showed some validity for sections of it. Another instrument, called the "QUALCARE scale" (p. 77), was developed by Phillips, Morrison and Chae (1990). This group examined the perception of quality from three angles: quantification, clinical relevance and the establishment of standardised ratings. They used seven service dimensions: physical, medical, management, psychosocial, environmental, human rights, and financial. Psychometric statistics gave reliability to the construct. However, no weighting was assigned to the quality dimensions used, therefore the importance of each cannot be ascertained. The importance of standardisation was noted in this project. Davis and Reineke (1998) examined waiting time and its effect on customer satisfaction. They showed that the perception model was more effective than disconfirmation of the waiting time. The deficiency of perception surveys is that they are not based on any specific predetermined position.
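The "Coefficient Alpha" used to check the validity of such questionnaires is Cronbach's alpha, a standard internal-consistency statistic for multi-item scales. The sketch below, with invented ratings, shows one common way of computing it; it is an illustration of the statistic only, not the procedure used by Casarreal, Mills and Plant.

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    items: list of k lists, each holding one item's scores across the
    same n respondents.  alpha = k/(k-1) * (1 - sum of item variances
    / variance of the respondents' total scores).
    """
    k = len(items)
    n = len(items[0])
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Invented 5-point ratings: three items, five respondents.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
print(f"alpha = {cronbach_alpha(items):.2f}")
# prints: alpha = 0.89
```

Values near 1 indicate that the items within a dimension move together across respondents; low values suggest the items are not measuring a single underlying construct.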
A more complete approach would be first to assess the expectation before comparing it with the perception. An anecdotal comment received from the management of a public hospital was that 'their patients always complain about the food'. But another question needs to be asked: on what do the patients base this judgement? Do they expect a five-star hotel, or is the food compared with what they would cook for themselves at home? Without this basis, quantification is invalid. In addition, the relative importance of any position needs to be ascertained, because the level of importance would dictate the effort that the management of the hospital should place on resolving discrepancies in service quality, and the order in which these should be resolved.

2.3.9.2.2 Disconfirmation Surveys

This is the method which has been chosen for detailed examination in this review. The advantage of disconfirmation is that it yields a result not only for the perception or expectation of the service, but also for the difference between the two (a disconfirmation). The particular model chosen for detailed analysis in this review is a disconfirmation model called SERVQUAL (Parasuraman, Zeithaml & Berry, 1988).

A contradictory position regarding the disconfirmation technique was presented by Hart (1997). This report was concerned with the statistical approach used to quantify qualitative information. It questions the use of disconfirmation surveys because of the difficulty in defining perceptions and expectations. It also suggests that expectation may vary over the course of the service, and that the results are therefore time dependent. It specifically mentions the use of waiting time (one of the tools used in the UK to measure quality) as a measure of quality, and suggests that a reduction in waiting time may not actually reflect improved quality, but simply a more rapid and impersonal processing of patients.
While some of these comments may have merit, they were neither backed up with any research data, nor were alternatives presented. The disconfirmation technique discussed in detail in this review has shown promise, although the need to establish specific industry-based dimensions has been identified.

Regardless of the number of questions used, there are a number of issues regarding questionnaires that may affect the way in which respondents answer each question (Schuman and Presser, 1996). These include the wording, the location of each question, and the type of scale and anchoring used. Anchoring describes the labels that are shown for the various options on the Likert scale. It is suggested that each option for each question should be described, or alternatively that the anchors should describe the options at the top of a matrix of Likert options. Good question design and pre-testing are important to limit these effects. These issues are considered at length in the discussion of the results of the research.

2.4 SERVQUAL; The Background

The quality measurement tool used in this project was developed by Parasuraman, Zeithaml and Berry (1985, 1988, 1990, 1994). The SERVQUAL model at Figure 2-4 is described as a series of interactions between a customer and a service provider. Gaps in the interactions between the various stakeholders are labelled one to five. Gaps one to four exist within the service providers' organisation, whilst gap five is the difference between the customers' expectation and perception of what they receive from the service provider.

2.4.1 Gap 1: Customers' Expectation - Management Perception Gap

This describes the gap between what the customers expect to get and what the management think the customers expect. There can often be a difference between the two. The gap is due largely to a lack of market research, or a lack of communication with the customers.
2.4.2 Gap 2: Management Perception - Service Quality Specification Gap

This gap can exist because the management fails to set the specifications of the service (e.g. how safe, or how frequent) on the assumption that they understand the customers' needs. It can also be due to the management not being prepared to set the standards needed to meet the customers' expectation, which can occur for a number of reasons, such as: cost, lack of resources, or a strategic management plan for the organisation which focuses on fiscal gain rather than on customer service.

Figure 2-4: A Conceptual Model for Service Quality (Parasuraman, Zeithaml and Berry, 1990, p. 46). [The figure shows the customer side - word of mouth, personal needs and past experience feeding the expected service, which is compared with the perceived service (Gap 5) - and the provider side - service delivery, external communication with customers, service quality specifications and management perceptions of customer expectations, linked by Gaps 1 to 4.]

2.4.3 Gap 3: Service Quality Specification - Service Delivery Gap

Despite a clear customer focus and the existence of quality systems, it can still be difficult to deliver a quality product or service. Many companies are unable to do so for a number of reasons, including a lack of training, unwillingness of staff, and a lack of strategic planning. It is important that the management provides the necessary resources to enable the delivery of the appropriate service.

2.4.4 Gap 4: Service Delivery - External Communications Gap

How well does the organisation communicate with its customers? How well does it provide what has been communicated? Both over-promising and under-delivering are important issues.
The outcome of this non-delivery of what has been promised is a reduction in the perception of the quality of the service from the customers' point of view.

2.4.5 Gap 5: Customers' Expectations - Customers' Perception Gap

This completes the cycle, with the various components of the service provision giving the customers their perception of the service. A gap can occur because the expectation of the service, which they have developed over time from their previous experience, personal needs and comments from other users of the service, may vary from the perceived service. Parasuraman et al. (1990) suggest that solving the problems associated with gaps one to four will solve the problems associated with gap five. Therefore, the overall measurement of service quality can be done by measuring gap five, i.e. the difference between each customer's perceptions and expectations of the service provided. The SERVQUAL disconfirmation questionnaire referred to on many occasions in this review measures gap five.

2.4.6 The Disconfirmation

The disconfirmation is undertaken by inviting answers to perception and expectation questions over five service quality characteristics or dimensions: reliability, assurance, tangibles, empathy and responsiveness (Parasuraman et al., 1988). Their initial work identified 10 dimensions; however, these were reduced and amalgamated into a 22 item, five dimension construct. Each item comprises two questions: one a perception question and the other an expectation question. Customers are asked to assign a value on a Likert scale as to whether they agree or disagree with the statements. Service quality for each question is calculated by subtracting the expectation value from the perception value for each question in each dimension: Service Quality (SQ) = Perception (P) - Expectation (E).
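The gap-score arithmetic just described can be sketched in a few lines of code. This is a minimal illustration only: the item values below are hypothetical, and only the calculation SQ = P - E and the per-dimension averaging follow the construct described in the text.

```python
# Minimal sketch of the SERVQUAL gap-score (disconfirmation) calculation.
# The data are hypothetical; the arithmetic SQ = P - E follows the construct
# described in the text.

def gap_scores(perceptions, expectations):
    """Per-item service quality: perception minus expectation (Likert 1-7)."""
    return [p - e for p, e in zip(perceptions, expectations)]

def dimension_score(scores):
    """Average gap score across the items that make up one dimension."""
    return sum(scores) / len(scores)

# One respondent's answers for a hypothetical three-item dimension.
expectation = [7, 6, 6]   # Part One: what the patient expected
perception  = [5, 6, 7]   # Part Two: what the patient perceived

items = gap_scores(perception, expectation)   # negative = expectation not met
print(items)
print(dimension_score(items))
```

A negative per-item or per-dimension value indicates under-provision relative to expectation, exactly as interpreted in the text that follows.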
For each question, a negative result indicates that the service quality was not as good as expected, while a positive result indicates that the service quality exceeded expectations.

2.4.7 Importance of Dimensions

Throughout the development of SERVQUAL, the reliability dimension has been the most important for 50% of respondents (Parasuraman et al., 1990). The validity of the dimensions was tested using the banking, credit card, repair and maintenance, and toll call services in the USA. It was found that the model could be used across all those service industries.

The original model has undergone development as a consequence of several contradictory reports. Teas (1993), and Cronin and Taylor (1994), suggest that the construct is not valid from either a statistical point of view or from an application point of view. Gerhard, Boshoff and Nel (1997) suggest that there are only two factors, namely intrinsic and extrinsic. This aligns somewhat with the functional and technical aspects described by Gronroos earlier in Figure 2-2. Parasuraman et al. (1994) provide a reassessment supporting their original claims.

For this project, probable dimensions were available from the Tomes and Ng (1995) model; however, their relative importance was unknown. The approach with this project has been to assign an importance rating to each of the questions in the questionnaire. From there, an average importance can be established once the dimensions have been identified.

The construct may have some limitations, such as the same gap score being achieved for both a low-expectation, low-perception pair and a high-expectation, high-perception pair. This aspect of the construct is acknowledged, and could perhaps be the subject of further investigation. There are also contentions regarding the number of factors derived.
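The per-question importance rating described above can be combined with the gap scores to give each dimension both an average gap and an average importance, which is what allows discrepancies to be ranked for attention. A minimal sketch, with hypothetical dimension names and ratings:

```python
# Sketch of averaging per-question importance ratings alongside gap scores,
# so that dimensions can be prioritised. Dimension names, gaps and ratings
# are hypothetical; the averaging idea follows the approach described for
# this project.

def dimension_summary(questions):
    """questions: list of (gap_score, importance) pairs for one dimension.

    Returns (mean gap, mean importance)."""
    gaps = [g for g, _ in questions]
    weights = [w for _, w in questions]
    return sum(gaps) / len(gaps), sum(weights) / len(weights)

# Two hypothetical dimensions: (gap, importance on a 1-7 scale) per question.
food = [(-2, 5), (-1, 6)]
dignity = [(-1, 7), (0, 7), (-1, 6)]

for name, qs in [("Food", food), ("Dignity", dignity)]:
    gap, importance = dimension_summary(qs)
    print(name, gap, importance)
```

A dimension with a large negative mean gap and a high mean importance would be the first candidate for quality improvement.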
Generally speaking, the model does appear to be valid, provided that these limitations are realised, and provided that service dimensions and importance are established for each individual industry.

2.4.8 Customer Satisfaction

Kristensen, Martensen and Gronholdt (1999), in their review of customer satisfaction, described five different models used to explain customer satisfaction. Most of the models are based on the perception of quality. Some are slightly more complex, requiring a disconfirmation between perception and expectation. Others are less complex, being modelled on expectation alone. Oliver (1997) notes that SERVQUAL was not designed to measure satisfaction; rather, it was designed to measure service quality. Satisfaction is a function of the fulfilment that a service provides, whereas service quality is more encompassing and includes the underlying features of a service. Ramaswamy (1996) also describes satisfaction in terms of the disconfirmation between the expected performance and the perceived performance, and states that, if a zero disconfirmation is obtained, then satisfaction is achieved. It is possible to have a poor quality service that does provide customer satisfaction, because the customers' expectations are also low.

Customer satisfaction can therefore be described as the outcome of meeting the expectation with the perception, as in Figure 2-5; i.e. if a zero disconfirmation is obtained, customer satisfaction has been achieved. It is important to appreciate that customer satisfaction and service quality are not the same thing. This project is not deliberately undertaking an analysis of the satisfaction aspect.

Figure 2-5: Customer Satisfaction Model (adapted from Kristensen, Martensen & Gronholdt, 1999). [The figure shows perception and expectation combining to give quality, which leads to customer satisfaction.]

2.4.9 Statistical Validation

The statistical analysis used to provide validity for the technique includes:

a.
Factor analysis, which has been used by several authors to correlate previously uncorrelated data and to reduce the number of questions in each dimension (Hayes, 1992, p. 156). The analysis may further show that there are other dimensions that were not previously considered (Nunnally, 1978, pp. 327-405).

b. The Cronbach Alpha Coefficient, used by most authors assessing SERVQUAL to measure internal consistency, based on the ratio of the sum of the individual item variances to the total variance of the disconfirmation answers. The higher the coefficient, the greater the reliability, or internal consistency, of the questions within each dimension (Cronbach, 1990, p. 207). Reliability of the statements within a dimension is acceptable if the Coefficient Alpha exceeds 0.70 (Vandamme & Leunis, 1993). A low Cronbach Alpha indicates that one or more of the questions within the dimension should not be there. Extracting a question and then recalculating may ascertain which should not be included (Parasuraman et al., 1988).

c. Factor analysis and the Cronbach Alpha together provide a measure of statistical validity for the tool. Both are necessary to confirm that the number of dimensions, and the appropriateness of the number and location of questions within each dimension, are correct.

2.4.10 SERVQUAL in Service Industries

Numerous investigations have shown that SERVQUAL is effective in measuring service quality in a number of different service industries (Babakus & Mangold, 1991; Gupta, 1995; Gabbie & O'Neill, 1996; Donnelly & Shiu, 1999). The original work conducted by Parasuraman et al. (1988) showed its effectiveness in the banking, credit card, repair and telephone industries. The five dimensions (reliability, assurance, tangibles, empathy and responsiveness) were valid across all the industries selected.
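As an illustration of the Cronbach Alpha reliability check described in section 2.4.9, the coefficient for one dimension can be computed directly from its definition. This is a sketch only, with hypothetical respondent data; the 0.70 acceptance threshold is the one cited in the text (Vandamme & Leunis, 1993).

```python
# Minimal sketch of Cronbach's alpha for one dimension's disconfirmation
# scores: alpha = k/(k-1) * (1 - sum(item variances) / variance(totals)).
# Respondent data below are hypothetical.

def variance(xs):
    """Sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list of scores per question, all of equal length."""
    k = len(items)                                    # number of questions
    item_vars = sum(variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Three questions in one dimension, five respondents (hypothetical gaps).
q1 = [1, 0, -1, 2, 1]
q2 = [1, 1, -1, 2, 0]
q3 = [0, 1, -2, 1, 1]
print(round(cronbach_alpha([q1, q2, q3]), 2))
```

A value above 0.70 would suggest the three questions belong together in the dimension; a low value would prompt extracting a question and recalculating, as described above.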
In contrast, Carman (1990) was able to show that although the SERVQUAL model would work for several different service organisations (a dental clinic, a tyre store and a placement centre), the dimensions needed to be modified for each industry. Cuthbert (1996a and b) used SERVQUAL to measure service quality in higher education. The statistical validity for the assurance dimension did not correlate with Parasuraman et al. (1990), and the factor analysis showed that there were seven, rather than five, dimensions. Crosby and LeMay (1998) used SERVQUAL in the transport industry, and suggested that it would be more effective if a price component were included in the importance rating, as all dimensions were considered important.

The concern here is that no two studies, other than those of the original authors, seem to be able to replicate the same dimensions. Furthermore, few reports exist that replicate the results in the same industry, but at different locations. Clearly the generic dimensions provided by Parasuraman et al. do not suit all organisations. However, further research should be conducted to attempt to establish a baseline of dimensions for specific service industries, so that the construct can have transferability from place to place within each industry. This would then provide a meaningful tool for ongoing service quality measurement.

2.4.11 SERVQUAL in Healthcare

A small number of attempts to use SERVQUAL within healthcare have been undertaken, some using the original 22 question model, others with modifications. Figure 2-6 describes this development from the original model developed by Parasuraman et al. (1990) through to a notional ideal model for healthcare. Between these extremes there have been a number of attempts to reconsider both the number of questions and the number of dimensions found. A feature of these developments is that they have not followed a strict chronological order.
Recent papers used the original SERVQUAL model with apparent success (Youssef, Nel & Bovaird, 1996; Lam, 1997). The former study is the only one found so far that included the significance of the dimensions in the calculation of service quality. Although both noted that industry specific dimensions should be established, neither attempted to define any.

Figure 2-6: The Chronological Development of a Healthcare-based SERVQUAL Model. [The figure traces the modifications from the original SERVQUAL (5 dimensions, 22 questions) towards a notional healthcare model: Lam (1997) - original 5 dimensions, original 22 questions with a health slant; Babakus et al. (1992) - original 5 dimensions, 15 new health questions; Vandamme (1993) - original dimensions, 28 health questions, attempt to change dimensions; Carman (1990) - 9 dimensions, health questions; Tomes (1995) - 7 dimensions, health questions; Youssef (1996) - original 5 dimensions, original 22 items, dimensions related to scores; Anderson (1996) - original 5 dimensions, 15 health questions, dimensions related to scores.]

Babakus and Mangold (1992) modified the original 22 item generic model down to a 15 item paired health sector questionnaire. The original dimensions of service quality were maintained. This study succeeded in confirming the validity of their modification, rather than attempting to test service quality specifically. Anderson and Zwelling (1996) used the Babakus model to compare service quality in five outpatient clinics. In addition to the 15 questions, the importance of the dimensions was included in the research. They also analysed the expectation and perception components of the model separately, and found significant differences in the expectation, but not in the perception, between the different clinics. This was the only study found in the literature that used the construct to compare different areas within a hospital providing similar services.
No evidence of the longitudinal use of the tool within healthcare could be found in the literature.

Vandamme and Leunis (1993) had limited success in re-defining the dimensions of the construct to focus on the health sector. They suggest that tangibles, assurance and nursing staff were the important dimensions, but stopped short of re-developing the model entirely. In an earlier project, Carman (1990) attempted to determine the dimensions for four different service providers. In an acute hospital, he found nine dimensions: admission, accommodation, food, privacy, nursing, explanation, visitor access and courtesy, discharge planning, and patient accounting. This project had modified the original model to a similar degree to that of the Tomes and Ng (1995) modification but, unfortunately, details of the questionnaire were not provided in the literature, so it could not be used as a baseline for this project.

The conclusion from this is that SERVQUAL is a useful tool for measuring service quality within health care, provided that the original construct is modified to suit the situation. The quality dimensions subsequently identified are likely to differ from those of the original construct as well. No generic or well-proven healthcare modification could be found in the literature, nor could an evaluation of the use of the tool for the longitudinal measurement of service quality.

2.4.12 Tomes, A. and Ng, S. (1995) SERVQUAL Modification

This modification of the original construct was developed in the in-patient medical wards of an NHS hospital in England. Seven dimensions, namely: empathy, relationship of mutual respect, dignity, understanding of illness, religious needs, food, and the physical environment, were established from focus groups comprising management, nursing staff and patients. A set of 49 question statements with a seven-point Likert scale was adopted.
Generally, the outcomes of the Tomes and Ng research were favourable, with both high average expectation scores (mean 5.06 - 6.42) and perception scores (5.55 - 6.33). They found twenty positive gap scores, which indicated that the expectation was exceeded in those cases. Two zero scores indicated that expectation was equalled. The noteworthy areas of under-provision concerned communication with doctors: patients felt that doctors should spend more time with them, and that doctors should make an effort to explain things in layman's language.

The factor analysis yielded seven factors. Five were intangible, namely: empathy, relationship of mutual respect, dignity, understanding of their illness and religious needs. Two factors were tangible, namely: food and physical environment.

One feature lacking in the Tomes and Ng model was an importance rating. This was excluded to avoid emphasis on any particular aspect of the functional quality. Given that the intention of their research was merely to design and validate the measurement tool, this was satisfactory. However, a ranking of the various dimensions would have facilitated an order of preference for ongoing quality improvement. This research project has included the importance ranking for this very reason. The inclusion of an importance rating also sped up the research process, because there was no need to define the dimensions before their importance was ascertained; both happened during the same phase of the project.

2.5 Summation of Literature Search

The healthcare sector has undergone rapid change over the last two decades. This change has not always been focused on the customer, nor on quality, but rather on short term fiscal goals and rationing of care. However, whilst rationing has remained, the commercial approach has been softened somewhat recently.
As for compliance with the wishes of the Minister of Health, who recently described one of the goals of our healthcare systems as "placing people at the centre of the service delivery" (MoH, 1995, p. 9), no clear strategy appears to exist for ascertaining from patients whether or not this goal is being achieved. It is certainly not obvious that the primary customers are in fact considered to be the patients. The Ministry of Health itself has no clear approach towards patient focus (with the exception of the development of the Office of the Health and Disability Commissioner). The continuous flow of reports in the news media from patients who have received inappropriate or inadequate care from our public health system would suggest that neither the care nor the technically focused quality management systems currently being used are effective. Complicating this situation is the highly complex and technical nature of the service itself, which makes it difficult for patients to be included in any quality debate, because they are unaware, in most instances, of the technical aspects of their treatment.

Quality measurement has been conducted either from a clinical basis, or qualitatively by the maintenance of standards as determined by professional bodies, government regulation or accreditation agencies. Again, this process has failed to recognise the need to deliver and measure quality service from the customer's point of view. Very few useful validated measurement tools exist for this purpose.

Both TQM and CQI are alive, but not necessarily well, in the healthcare sector. The reasons for this are a combination of a lack of training, understanding and commitment, and also the alienation of stakeholder groups (e.g. doctors) within the organisation. It is also, again, a reflection of the complexity of the organisations, and the lack of focus on the primary customer.
Until hospitals are one team, with an identifiable customer group, trained and focused in quality improvement processes, TQM and CQI will continue to have only limited success.

The measurement of service quality from the customers' point of view, using quantitative techniques, has been primarily focused on perception surveys which, by themselves, are of limited use because the results are not calculated relative to any reference. Some attempts have been made to use the disconfirmation model called SERVQUAL, with modifications. This model appears to have some application within healthcare for the measurement of service quality, provided that health-specific questions and dimensions are ascertained. No research could be found that has developed a definitive list of dimensions for the healthcare sector. Very little duplication of previous modifications has been undertaken. It is therefore difficult to say, at this stage, that this tool has any ongoing application within the healthcare sector. No evidence could be found of the application of this tool within the healthcare sector in New Zealand.

3. THE RESEARCH QUESTION

Problem Statement: Customer Based Service Quality Measurement; Is SERVQUAL the Answer?

Over the last 20 years, the public healthcare system in this country has undergone a significant change in the way it operates. This has happened as a consequence of a number of constraints, namely: patient expectation, limited funds, technological developments and our ageing population. Throughout this reform process, the Government has contended that it will continue to provide public healthcare, and that the patient will be at the centre of the service delivery process. Despite this, the patients have been largely excluded from the provider-supplier service equation and, as a consequence, they have been excluded from the gathering of data on service quality.
This is largely because the highly complex and technical nature of the service has resulted in quality being measured in technical terms only. Even the technical measurement of quality is limited to only a few medical departments and, as discussed previously, these departments are not immune from making critical mistakes which affect patient care. In 1995, the Government signalled a re-focus on the patient by specifying that people are to be at the centre of service delivery, but still very little patient-derived quality information is collected detailing the quality of the service from their point of view. This project will attempt to ascertain the value of a modified SERVQUAL questionnaire as a tool to measure the patient component of service quality within a public hospital.

4. RESEARCH METHODOLOGY

4.1 Development of the Questionnaire

From the literature search, it was apparent that most researchers began this type of research by conducting several focus groups with customers, and then developed their survey from the information obtained (Vavra, 1997). For this project, the approach has been to take pre-existing questions from the focus group work conducted by Tomes and Ng (1995). Additional questions were included to cover ethnic concerns (Q4 and Q30). The hospital management were given the questionnaire for comment; no further modifications were considered necessary. The result was the two-part questionnaire at Appendix 1 and Appendix 2, which consisted of fifty-one questions. Part One examines the expectation of the service provided, and Part Two examines the perception of what was received.

The supporting documentation at Appendix 3 includes:

a. An information sheet, sent out with Part One, that explained the details of the research and the people involved.

b.
An information sheet for Part Two that explained the need for the respondent to complete and return Part Two of the questionnaire, so that the complete picture of expectation and perception could be included in the statistical analysis. Both parts were necessary for the disconfirmation analysis.

c. A reminder sheet for those who failed to return Part Two of the questionnaire, which was sent out, if required, with a second copy of Part Two.

d. Demographic information such as age, ethnic identity and time spent in hospital was also requested. The time spent in hospital is significant because the day-stay patients (those held for less than one day) may have different views from those who spent more than one day in hospital.

4.2 Part One of the Questionnaire; The Expectation

The expectation part of the questionnaire was sent out by mail to patients prior to admission to the hospital. The questions ask for a response on a seven-point Likert scale: a response of one implied that the patient strongly disagreed with the statement, and a seven implied that they strongly agreed. Question one reads "My doctors should explain the reasons for the tests and procedures which are carried out on me". This section was sent back to the hospital by mail and identified so that it could be matched with Part Two later.

4.3 Part One of the Questionnaire; The Importance

In the original SERVQUAL model the importance of the various dimensions of service quality is established by asking the respondent to rank the dimensions by assigning a percentage score to each (Parasuraman et al., 1990). To do this, the dimensions would have to be known from the beginning, which would have required preliminary data collection to ascertain them. The approach taken with this research was slightly different. We asked for an importance response for each question in Part One of the survey, and therefore developed both the dimensions and the importance rating for each question at the same time.
The hospital involved also considered it essential to include the importance rating, so that the project results could be prioritised in accordance with it.

4.4 Part Two of the Questionnaire; The Perception

The hospital staff sent out Part Two of the survey when each patient had been discharged from hospital. This had the same number of questions as Part One, but the wording of each question was changed slightly to determine the perceived experience at the hospital; e.g. question one of Part Two read "My doctors did explain the reasons for the tests and procedures which were carried out on me". Part Two was collated with Part One for each respondent for later statistical analysis.

4.5 Selection of the Focus Organisation

This proved to be more difficult than was first anticipated. The first approach was made to a large public hospital, which found no value in this type of data collection because "they were already doing enough of this quality stuff and in any case we already know what issues concern our patients". Without an in-depth analysis of the processes they undertake, it was not possible to comment further. Their lack of interest did raise the point that there may already be quality management tools in use to measure customer satisfaction or service quality. In addition, there was the realisation that not everyone has the same level of enthusiasm as the researcher for this service quality measurement tool.

The second patient group chosen was from a private hospital. The management showed positive support for the project in the first instance but, when it finally came to the detail, they did not want to "upset the doctors". This raised another question regarding patients in a private hospital: who are the customers in a private hospital? Are they the patients, or the doctors, or both?
A private hospital admission seems to be an unwritten contract between the hospital and the doctor, leaving the patient merely as work in progress. This relationship warrants further research, because the funders of private health, be they insurance companies or private individuals, may not entirely agree with this situation. Jun, Peterson and Zsidisin (1998) identified differences in quality dimensions between the different stakeholder groups, namely management, medical staff and patients, by using focus groups from each customer group. The significance of this is that all groups are going to have different expectations of service quality.

The third patient group, the one finally used for the research, came from a smaller provincial public hospital whose staff identify the patient as their primary customer in their business plan. This research project was seen as being very useful in filling a gap in that plan. They were very keen to find out what their patients had to say about the delivery of the service. They expected that not every answer would be positive; however, these were all seen as opportunities for quality improvement. A personal communication with NZHIS in Wellington did indicate that this research may fill a significant gap regarding customer health information identified in the Government's strategy for the year 2000 (V. Stevanovic, personal communication, 4 March 1999). Currently, most of the information collected by this group is clinical or critical event data.

The unexpected delays in identifying a suitable customer group delayed the commencement of the research project by some six months. In hindsight, having a really goo