Jennifer L. DelVentura, Ph.D., ABPP
Jennifer L. Steiner, Ph.D., ABPP
In a post in a previous issue of this newsletter, titled “Gender disparities in Pain and Pain Care,” we explored evidence that women are not only at higher risk for pain and pain conditions but that their pain appears to be underestimated and, in some cases, undertreated compared with men’s. Similar patterns are evident in the diagnosis and treatment of pain among racial, ethnic, and socioeconomic minority patients, ranging from differences in the prevalence and severity of pain in these groups to differential access to resources and pain care.
While many factors operating at different levels (community, institutional/systemic, familial/social, individual) contribute to these disparities, the impact of provider biases on pain care is of particular interest to us as clinicians. Biases are ubiquitous, often unconscious, and a normal part of human cognition, but they are not innocuous. They may even be inconsistent with conscious beliefs: one may deny holding negative beliefs about minority groups yet still show evidence of unconscious bias in decision-making. Importantly, however, with awareness and effort, biases are malleable. It is our task as ethical, caring providers to strive to minimize the impact of harmful biases on patient care.
Using measurement tools normed for minority populations
One approach to reducing the impact of biases involves using pain assessment tools normed for minority patient populations. Even seemingly unbiased tools may be interpreted and rated differently by minority groups. Take, for example, the widely used numerical pain rating scale with verbal anchors (e.g., “moderate,” “severe,” or “worst pain imaginable”) to describe pain intensity. While these scales appear bias-free, their use rests on the assumption that we all interpret pain descriptors in the same way. Yet this may not be true, as demonstrated in a study by Campbell and colleagues [1] comparing ratings of thermal pain stimuli in men vs. women and white vs. black participants. When using a generic numerical pain rating scale, women and black participants rated the stimuli as significantly more painful than their counterparts did. However, when participants were allowed to individualize the scale by moving the verbal anchors to reflect their subjective interpretation of pain, group differences in pain ratings were no longer significant [1], suggesting group differences in how these anchors were interpreted, perhaps attributable to culture, socialization, or other factors.
The Campbell et al. [1] study points to the need for measures modified for, or normed to, different populations. Indeed, a few measures with such norms do exist. For psychologists, for example, the MMPI-2-RF and the Millon Behavioral Medicine Diagnostic [2] offer a multitude of patient-group norms divided by gender (but not ethnicity). Additionally, several behavioral measures of functioning commonly used by physical therapists now have more extensive norms for different populations. The unipedal stance test (UPST) now has norms for age and gender [3], and several other common tests, such as the timed up-and-go test and the single limb stance test, have documented significant performance differences based on age [4-6]. However, most measures lack norms specific to race/ethnicity, and this remains an important area of development for the field of pain management. Because such minority-group norms are not widely available, it is crucial to be aware of the inherent limitations of our commonly used measures.
Provider-level strategies for managing bias
But what else can providers do to reduce the impact of bias on patient care? Social psychology research offers valuable insights into strategies for addressing biases in our work. Based on this research, Burgess and colleagues [7] put forth a multi-step, evidence-based framework for addressing biases in healthcare, parts of which we briefly summarize here.
First, fostering providers’ internal motivation for change is foundational to this model and involves bringing awareness to the presence of biases in our work. This can be done using techniques like the implicit association test (IAT; [8, 9]), a measure of response latency that evaluates the strength of association between pairs of contrasting concepts and is believed to tap into implicit connections between concepts in the brain. The IAT takes advantage of the brain’s inherent tendency to pair concepts together in service of faster processing: the more closely two concepts are linked for an individual, the faster that person should be able to respond when one component of the pair is activated. It is a computer-administered task that has been used to highlight unconscious biases or preferences for people who belong to particular social groups (race, gender, religion, etc.). It is important to note, however, that there is some debate as to what the IAT actually measures, and whether implicit bias is correlated with explicit bias and/or explicit behavior [10, 11]. Nevertheless, completing an IAT for a number of variables (e.g., gender, race, size) at projectimplicit.net[1] may offer some insight into our own associations and may inform self-reflection.
To give an example of what such an exercise might look like, we invite you to consider the following sentences (adapted from the group exercise described in Holm et al. [12]) and count how many are true for you:
- I can feel confident that others feel that I am qualified upon first impression.
- I can speak in a roomful of medical providers and feel that I am heard.
- My age adds to my credibility.
- When I report pain or physical symptoms to my doctor, I can feel confident that my race or gender identification will not work against me.
- When I report pain or physical symptoms to my doctor, I can feel confident that others will take them seriously and not assume I am motivated by secondary gain.
- I can feel confident that if a family member requires hospital or emergency treatment they would be treated with dignity and respect even if they don’t mention my connection with the hospital.
Consider what you notice here. How many feel true for you? And how might this reflect privilege (or lack thereof) in a healthcare environment? The intention here is not to blame or shame individuals who carry privilege, but rather to consider how this privilege might impact our experience and the quality of care we receive [7, 12].
Now, we invite you to bring to mind a patient or acquaintance with minority group affiliations (race, gender, age, SES). With this person in mind, read through and consider these sentences again from their perspective. How many of these might feel true to this person? And in turn, how might this impact actual or perceived care? The answers and experience might be rather different in this case, and may be uncomfortable for us to consider. Indeed, it is common for exercises like the above to elicit some negative emotions and internal discomfort (e.g., cognitive dissonance). However, when elicited in a safe, nonjudgmental environment, these negative emotions can serve to motivate behavior change.
Other strategies and considerations for providers
Other steps in the Burgess et al. [7] model include increasing contact and comfort with minority groups and facilitating perspective-taking and empathy for minority-group patients, e.g., imagining situations from the patient’s perspective. However, empathy can suffer with stress, burnout, and time (e.g., over the course of one’s career). Even when great strides have been made in increasing awareness of one’s biases and reducing their impact, as creatures of habit we tend to regress into old patterns if we are not careful. Thus, self-care and becoming attuned to our own needs are vital to reducing bias in our work. It is important that providers practice recognizing signs of burnout in themselves and routinely re-assess for potential biases. At a systems level, this highlights the need for greater resources: advocating for lighter patient caseloads, more time with patients, more time for education of this nature (such as seminars or experiential trainings) both during graduate-level training and at the post-licensure level, and more.
All things considered, it is important to note that perfection is neither expected nor realistic in efforts to reduce the negative impacts of biases. Rather, we should strive to reduce biases through the practice of empathy, perspective-taking, and awareness, and by seeing patients as individuals rather than through the lens of group membership.
References
1. Campbell, T.S., et al., Relationship of ethnicity, gender, and ambulatory blood pressure to pain sensitivity: Effects of individualized pain rating scales. The Journal of Pain, 2004. 5(3): p. 183.
2. Millon, T., et al., Millon Behavioral Medicine Diagnostic. 2001, Minneapolis, MN: NCS Assessments.
3. Springer, B.A., et al., Normative values for the unipedal stance test with eyes open and closed. J Geriatr Phys Ther, 2007. 30(1): p. 8-15.
4. Hirano, K., et al., Impact of low back pain, knee pain, and timed up-and-go test on quality of life in community-living people. J Orthop Sci, 2014. 19(1): p. 164-71.
5. Bohannon, R., Single limb stance times: a descriptive meta-analysis of data from individuals at least 60 years of age. Topics in Geriatric Rehabilitation, 2006. 22(1): p. 70-77.
6. Steffen, T.M., T.A. Hacker, and L. Mollinger, Age- and gender-related test performance in community-dwelling elderly people: Six-Minute Walk Test, Berg Balance Scale, Timed Up & Go Test, and gait speeds. Phys Ther, 2002. 82(2): p. 128-37.
7. Burgess, D., et al., Reducing racial bias among health care providers: lessons from social-cognitive psychology. J Gen Intern Med, 2007. 22(6): p. 882-7.
8. Greenwald, A.G., D.E. McGhee, and J.L. Schwartz, Measuring individual differences in implicit cognition: the implicit association test. J Pers Soc Psychol, 1998. 74(6): p. 1464-80.
9. Nosek, B.A., A.G. Greenwald, and M.R. Banaji, Understanding and using the Implicit Association Test: II. Method variables and construct validity. Pers Soc Psychol Bull, 2005. 31(2): p. 166-80.
10. Lane, K.A., et al., Understanding and using the Implicit Association Test: IV. Implicit Measures of Attitudes, 2007: p. 59-102.
11. Hofmann, W., et al., A meta-analysis on the correlation between the implicit association test and explicit self-report measures. Pers Soc Psychol Bull, 2005. 31(10): p. 1369-85.
12. Holm, A.L., et al., Recognizing Privilege and Bias: An Interactive Exercise to Expand Health Care Providers’ Personal Awareness. Acad Med, 2017. 92(3): p. 360-364.
[1] Projectimplicit.net is run by a non-profit organization and collects the data for scientific purposes.
[2] The Cultural and Linguistic Competence Health Practitioner Assessment (CLCHPA), through the Georgetown University National Center for Cultural Competence, can be found at https://nccc.georgetown.edu/assessments/. Please note that the assessment and website are temporarily out of service for revisions.