THE USE OF RATING SCALES BY CANADIAN PSYCHIATRISTS:
QUALITATIVE AND QUANTITATIVE EVIDENCE
Oloruntoba Oluboka, Sandra Stewart, David Haslam, Jessica Wodlinger and Susan Adams, Northeast Mental Health Centre, North Bay Campus, North Bay, Ontario, Canada.
Dr. Sandra Stewart, Department of Medical Research, Northeast Mental Health Centre, North Bay Campus, North Bay, Ontario Canada P1B 8L1
(705) 474-1205, ext. 2156
Objective: This pilot study reports on the results of a national survey of Canadian
psychiatrists regarding their utilization of rating scales in routine practice.
Method: A randomly selected group comprising 25% of all practicing Canadian psychiatrists (1012/4048) was surveyed by mail. Where possible, the number of psychiatrists surveyed in each province was proportional to the population available for sampling.
Results: Approximately 36% of the psychiatrists surveyed participated in the study. The majority of respondents indicated that they do not use rating scales during the course of patient treatment. A lack of training in the utilization of outcome measures may be a contributing factor, given that 56% of all respondents had received no formal training in rating scale use.
Conclusions: Of the psychiatrists who completed the survey, 68% were interested in incorporating rating scales/outcome measures into their routine treatment practices.
Evidence-based medicine has become a prevalent topic in the scientific literature (1). As part of this broader movement towards evidence-based care, practitioners have been encouraged to utilize outcome measures as a routine part of their clinical care (2, 3). Outcome measures can be conceptualized as a broad class of instruments used primarily to quantify the outcome of a clinical intervention, such as the change observed in psychiatric symptoms after the introduction of a specific medication. In addition to quantifying symptoms, outcome measures can also encompass indicators of patients' functional level, quality of life indices, patient satisfaction, adverse pharmacological events and general health status. Rating scales have been developed specifically to attach a quantitative description to an observation, statement, affect, symptom or behaviour, which lends clinical data to statistical analysis of change. The assessment of change is paramount to guiding clinical practice in psychiatry, and reliable, standardized rating scales can facilitate the objective assessment of treatment response. However, identified barriers to the routine use of rating scales include a lack of demonstrated reliability and validity in specific clinical populations, limited staffing resources for outcome data collection, time constraints and patients' limited tolerance for providing the necessary information. Despite the need to quantify treatment effectiveness beyond subjective opinion, it is unknown to what extent Canadian psychiatrists incorporate rating scales into their overall practice. This question has been addressed to some extent in the UK (4, 5) but, to the best of our knowledge, not in Canada. Thus, the purpose of this pilot study was to survey Canadian psychiatrists on their utilization of rating scales during treatment.
A postal questionnaire survey was sent to a randomly selected sample of 25% (1012/4048) of all practicing psychiatrists across Canada (using the MD Select database). Random selection was accomplished using an "every other name" selection from the database. The number of psychiatrists surveyed in each province was proportional to that province's share of all practicing psychiatrists. For example, 45% of all practicing psychiatrists reside in Ontario, Canada; accordingly, 450 Ontario psychiatrists were surveyed. Institutional approval to conduct this research was not sought as no contact with patients was carried out.
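The sampling procedure described above can be sketched as follows. This is an illustrative reconstruction, not the authors' actual code or data: the province counts below are invented, and the helper names are our own.

```python
def every_other_name(names):
    """Systematic 'every other name' selection from an ordered list."""
    return names[::2]

def proportional_allocation(province_counts, total_sample):
    """Allocate the total sample across provinces in proportion to
    each province's share of all practicing psychiatrists."""
    total = sum(province_counts.values())
    return {prov: round(total_sample * n / total)
            for prov, n in province_counts.items()}

# Illustrative figures only (summing to the 4048 practitioners noted
# in the paper); Ontario's ~45% share yields roughly 450-456 surveys.
counts = {"Ontario": 1822, "Quebec": 1100, "Other": 1126}
allocation = proportional_allocation(counts, 1012)
```

Rounding means the per-province counts may not sum exactly to the target sample; in practice the allocation would be adjusted by a few surveys.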
A self-report questionnaire consisting of seventeen general questions was used. The questions were formulated by the research team and constructed to assess general qualities of rating scale use. The majority of questions required respondents to check an applicable box from a set of responses (e.g., "What is your client population? □ Adults 18 and over □ Children under 13", etc.). Two questions used Likert scales: the first addressed the relevancy of rating scales to various aspects of general practice (e.g., relevancy to diagnosis, monitoring response to treatment, monitoring medication side effects); the second assessed when specified rating scales were utilized (e.g., admission, discharge). Estimated total time for completion of the survey was less than 10 minutes.
Descriptive statistics for all survey items were calculated using frequencies, means and percentages. All analyses were completed using SPSS software, version 17.0 (6).
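The descriptive statistics named above (frequencies, percentages and means) amount to straightforward tabulation. A minimal sketch in Python, standing in for the SPSS procedures the authors used, with invented responses for illustration:

```python
from collections import Counter
from statistics import mean

# Invented survey responses, for illustration only.
responses = ["never", "pharmacotherapy", "never", "both", "pharmacotherapy"]

freq = Counter(responses)  # frequency of each response category
pct = {k: 100 * v / len(responses) for k, v in freq.items()}  # percentages

years_in_practice = [5, 12, 17, 23, 30]  # a numeric survey item
avg_years = mean(years_in_practice)      # mean across respondents
```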
Responses to the survey questions were collapsed into eight broad categories, with the findings reported below.
Approximately 36% of those surveyed returned the completed questionnaire. The majority of respondents (62%) were male, with a median of 17 years in practice. Approximately 55% of respondents practiced in areas with >500,000 people. Most respondents (70%) reported receiving compensation on a fee-for-service basis.
Major Professional Activity:
Approximately 60% of respondents practiced in either an outpatient setting or in private practice. Approximately 15% reported practicing in an inpatient setting.
The majority (78%) of respondents practiced in the field of adult psychiatry.
Formal Rating Scale Training during Residency:
Respondents were surveyed on how much formal training (defined as the number of courses taken) they had received during their psychiatric residency programs. The majority of respondents (56%) reported receiving no formal training. Approximately 24% of respondents reported receiving one formal course, 8% two courses, and 6% five or more courses. Of those who indicated some formal training, 17% reported that they had utilized rating scales in the context of research activities during that training.
Use in specific intervention modalities:
Respondents were asked to indicate in which specific intervention modalities they utilize rating scales to monitor patient progress in treatment. The most common context was use in pharmacotherapy (33%). Approximately 27% of respondents indicated that they “never utilized” rating scales, while about 18% of respondents indicated use of rating scales in both psychotherapy and pharmacotherapy contexts.
Relevancy of Rating Scales:
Respondents were surveyed on 7 questions regarding the relevancy of rating scales to specific areas of practice, using a 5-point Likert scale anchored from "1" (no relevance) to "5" (very relevant). Taking responses greater than "3" as endorsement, the majority of respondents (61%) felt that rating scales were relevant to various aspects of practice, including diagnosis (55%), monitoring symptoms (71%), response to treatment (72%), medication side effects (55%) and remission of an illness episode (57%). Approximately 67% of respondents felt that patient improvement on rating scale scores was "clinically significant".
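The "response greater than 3" tabulation above reduces to counting ratings above the scale midpoint. A hedged sketch, with invented ratings for illustration:

```python
def percent_above_midpoint(ratings, midpoint=3):
    """Percentage of Likert ratings strictly greater than the midpoint,
    treated here as endorsement of relevance."""
    above = sum(1 for r in ratings if r > midpoint)
    return 100 * above / len(ratings)

# Invented 5-point ratings for a single item (e.g., relevance to diagnosis).
diagnosis_ratings = [5, 4, 2, 3, 4, 1, 5, 4, 2, 5]
share = percent_above_midpoint(diagnosis_ratings)
```

Note that this strict-inequality convention excludes neutral ("3") responses from the endorsing group, which matches the paper's "greater than 3" criterion.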
Utilization of Specific Rating Scales:
A variety of common rating scales were identified and respondents were asked to indicate in which of several situations they used each scale (i.e., admission, discharge, various intervals, never, etc.). A very high percentage of respondents never use the BAI (84%), PANSS (66%), CGI (66%), AIMS (50%), ESRS (85%), UKU (93%), TSERS (97%) and the BPRS (69%). Approximately 61% of respondents reported using the HAM-D at various intervals throughout treatment.
Interest in Continuing Education:
Respondents were surveyed as to their interest in participating in a continuing educational event focused on the relevancy of rating scales to psychiatric practice. Most respondents (68%) indicated an interest in attending such an event.
Evidence-based practice encourages the use of valid, reliable and appropriately applied outcome instruments to track how effective various forms of treatment have been. Outside the confines of clinical trials research, the tools used to derive outcome measures, such as clinician- or patient-administered rating scales, anchor the field of psychotherapy effectiveness studies. Such studies strive to understand patient change in the context of therapy practiced in "real world" clinical settings (7, 8). To reliably demonstrate whether a treatment has been beneficial, one should measure any observed change, not merely provide subjective clinical observation attesting to that change. Outcome information helps the practitioner ensure that treatments have been maximally effective. However, frequency and utilization patterns of standardized outcome instruments (i.e., published rating scales) by practicing psychiatrists in Canada have not been previously surveyed. The objective of this pilot study was to evaluate such patterns of use.
The main and most surprising finding from this study is that the majority of psychiatrists responding to the survey do not utilize rating scales during treatment. This raises the question of how treatment outcome is being determined. Evidence-based medicine has advanced the mental health field beyond clinically derived subjective impressions of whether a treatment has been effective. The health care profession is in an age of accountability, not only to the patients served but also to the broader public (i.e., funding partners). Clinician-derived opinions, without the support of objective quantified data, will not suffice as demonstrations of treatment utility. It has been argued that practitioners have an ethical responsibility to examine the quality of the services they provide (8). Any observed change in clinical presentation should be measured and quantified so that clear statements regarding the type and magnitude of change can be made. Outcome measures in general are designed to assess and track client change over time. Clearly, a lack of training in using outcome instruments could present a barrier to implementation. Respondents to this survey reported minimal or no training in the use of outcome tools, which may account, in part, for the low utilization rate reported.
Although the survey respondents indicated that they generally did not use outcome tools, those who did utilize these instruments generally did so in the context of monitoring patient response to medication. While symptom reduction is an important component of treatment effectiveness, other factors, such as overall functioning, quality of life, interpersonal relationships, adverse events and satisfaction with care, are also important (9).
Responses to the question concerning the relevancy of rating scales were somewhat surprising. In general, respondents viewed rating scales as relevant not only to their practice in general but also to specific issues such as diagnosis, monitoring side effects and response to treatment. The majority of respondents also viewed "improvement" on rating scale scores as having some clinical significance. This generally favourable view of rating scales seems contradictory when compared with the small number of practitioners who actually utilize scales. Issues such as little time to administer scales in a busy clinical practice, in conjunction with lack of training and knowledge about rating scales, may account, in part, for this discrepancy.
The main limitation of this study was the relatively poor response rate (approximately 36%) to the survey. Another potential limitation is recall bias in estimating the amount of prior training in rating scale use. Nevertheless, the results provide a snapshot of a segment of practicing psychiatrists' views on rating scales. A follow-up study incorporating strategies to increase the response rate is warranted. Such strategies might include follow-up to non-responders using telephone, fax and email reminders.
Studies have shown that immediate and regular feedback to practitioners regarding clients' responses on outcome instruments, especially where little clinical change, or deterioration, is evident, can appropriately alter a course of treatment (10, 11). In this time of change in health care practice, with its emphasis on quality-of-care indicators and, in turn, measured outcomes, psychiatrists should be encouraged to take a leap into the "measurement pool". Such a leap demonstrates a proactive approach that contributes to assuring quality of care to both consumers and third-party payers. Just as opinion polls measure interest in various topics, tracking patient progress using outcome instruments signals to the consumer an interest on the part of the clinician in providing quality clinical care (12). Over two-thirds of the respondents to this survey were interested in attending CME events focused on the use of outcome instruments. The fundamental question practitioners must ask, "Is treatment working for this client?", can most reliably be answered through the use of rating scales and other quality improvement methods. Our results suggest that Canadian psychiatrists are willing to objectively demonstrate that their treatments are effective. The time is ripe for continuing education organizers to provide training in simple, efficient and cost-effective outcome instruments.
FUNDING AND SUPPORT:
This research was funded in part by the Department of Medical Research, Northeast Mental Health Centre, North Bay, Ontario.
Dr. O.Oluboka is now with the Department of Psychiatry, University of Alberta, Calgary, Alberta, Canada. Dr. David Haslam is now with the London Health Sciences Centre, London, Ontario, Canada.
1. Keeley P. Clinical Guidelines. Palliative Medicine, 2003; 17:368-374.
2. Andrews G and Wittchen H. Clinical practice, measurement and information technology. Psychological Medicine, 1995; 25, 3:443-446.
3. Sajatovic M. and Ramirez L. Rating Scales in Mental Health. Lexi-Comp, Inc.; 2001.
4. Gilbody S, House A and Sheldon T. Psychiatrists in the UK do not use outcome measures. British Journal of Psychiatry, 2002; 180:101-103.
5. Slade M, Thornicroft G, and Glover G. The feasibility of routine outcome measures in mental health. Social Psychiatry and Psychiatric Epidemiology, 1999; 34:243-249.
6. SPSS Inc. SPSS, version 13.0. Chicago: SPSS Inc.; 2004.
7. Seligman M. The effectiveness of psychotherapy: The Consumer Reports study. American Psychologist, 1995; 50:965-974.
8. Ogles B, Lambert M, and Fields S. Essentials of outcome assessment. John Wiley & Sons, Inc.; 2002.
9. Macklin E. Domains of study and methodological challenges. In Sederer LI and Dickey B, editors. Outcomes assessment in clinical practice. Baltimore: Williams and Wilkins; 1996. p 19-24.
10. Lambert M, Hansen N and Finch A. Patient-focused research: using patient outcome data to enhance treatment effects. Journal of Consulting and Clinical Psychology, 2001; 69:159-172.
11. Lambert M, Whipple J, Smart D, Vermeersch D, Nielsen S and Hawkins E. The effects of providing therapists with feedback on patients' progress during psychotherapy: Are outcomes enhanced? Psychotherapy Research, 2001; 11, 1:49-68.
12. Kelly T. Clinical Outcome Measurement: A Call to Action. Journal of Psychology and Christianity, 2003; 22, 3:254-258.
Copyright Priory Lodge Education Limited 2009
First Published March 2009