Practitioner's Guide to Using Research for Evidence-Informed Practice. Allen Rubin
them. Be sure to inform them of what their participation in the intervention would require of them (e.g., time commitment, modality, and homework), any undesirable side effects or discomfort they might experience with that intervention, and the possibility that the treatment may not work for them. With this information, the client might not consent to the treatment, in which case you'll need to consider an alternative intervention with the next best evidence base. A side benefit of engaging the client in making an informed decision is that doing so might improve the client's commitment to the treatment process, which, in turn, might enhance the prospects for a successful treatment outcome. Recall from our discussion in Chapter 1 that some of the most important factors influencing service effectiveness are related to the quality of the client-practitioner relationship.
2.5 Step 5: Monitor Client Progress
Before you begin to provide the chosen intervention, you and the client should identify some measurable treatment goals that can be monitored to see if the intervention is really helping the client. This phase is important for several reasons. One reason, as noted previously, is that even our most effective interventions don't help everybody. Your client may be one of the folks who doesn't benefit from it. Another reason is that even if your client could benefit from the intervention, perhaps there is something about the way you are providing it – or something about your practice context – that is making it less effective than it was in the research studies. When interventions are implemented in usual practice, they may not be implemented with fidelity. In other words, interventions are often changed by practitioners in response to the particulars of their practice context, client characteristics, or their own preferences. Unfortunately, these changes can compromise the effectiveness of the intervention. We discuss more about issues related to intervention fidelity in Chapter 12.
By monitoring client progress, you'll also be better equipped to determine whether you need to continue or alter the intervention in light of goal attainment or lack thereof. Monitoring client progress additionally might enable you to share with clients, on an ongoing basis, charted graphs or dashboards displaying their treatment progress. This sharing might further enhance client commitment to treatment. It also provides more chances for clients to inform you of things they might have experienced outside of treatment that coincide with blips up or down on the graphs. Learning these things might enhance your ability to help the client. Chapters 7 and 12 of this book pertain to this phase of the EIP process.
2.6 Feasibility Constraints
Having research evidence inform your practice decisions is a lot easier said than done. For example, searching for and finding the best scientific evidence to inform practice decisions can be difficult and time-consuming. Your caseload demands may leave little time to search for evidence, appraise it, and then obtain the skills needed to provide the intervention you'd like to provide. Moreover, in some areas of practice there may be very little rigorous research evidence available. This can be especially true outside of the health and mental health fields of practice. As you engage in the EIP process, you might identify important gaps in the research.
Another problem is that even when you find the best evidence, it might not easily guide your practice decisions. Perhaps, for example, equally strong studies reach conflicting conclusions. In the vast literature evaluating the effectiveness of exposure therapy versus EMDR therapy in treating PTSD, for example, Rubin (2003) found approximately equal numbers of rigorous clinical outcome experiments favoring the effectiveness of exposure therapy over EMDR and favoring EMDR over exposure therapy.
Some searches will fail to find any rigorous studies that clearly supply strong evidence supporting the effectiveness of a particular intervention approach. Perhaps, instead, you might find many seriously flawed studies, each of which supports the effectiveness of a different intervention approach. Some searches might just find what interventions are ineffective (at least those searches might help you in deciding what not to do).
Some searches might find that the best scientific evidence supports an intervention approach that doesn't fit some aspect of your practice situation. For example, although exposure therapy and EMDR both have strong evidence for their effectiveness in treating PTSD, some clients refuse to participate in those therapies because they fear that the treatment process will be too painful in requiring them to recall and discuss the details of the trauma or perhaps visit places in vivo that resemble the site of the trauma (clinicians often succeed in helping clients surmount their fears of these therapies, but that is not always the case). Also, as noted earlier, these interventions can be harmful to clients with substance abuse disorders or who are suicidal if they are provided to such clients before those comorbid disorders are alleviated.
Likewise, some interventions with the best evidence might never have been evaluated with a population of clients like yours, and your clients might have attributes that in some important ways are not like the attributes of those clients who participated in the evaluations. Suppose, for example, you reside in Alaska and want to start a program to treat Native Alaskan girls who have been victims of physical or sexual abuse and who suffer from PTSD. If you search the literature for effective treatments for PTSD, you are likely to find that the best evidence supports the effectiveness of interventions such as exposure therapy, EMDR, or cognitive restructuring. We say the “best” evidence because those interventions are likely to have been supported by the most scientifically rigorous outcome evaluations. However, in a search completed in preparing for a talk on EBP that Rubin presented in Anchorage, Alaska, in 2006, he found no rigorous evaluations of the foregoing interventions in which Native Alaskans participated.
He did, however, find numerous articles discussing the high prevalence of comorbidity with substance abuse among physically or sexually abused Native Alaskan girls. That illustrates another difficulty. Most of the evaluations offering the best evidence regarding the effectiveness of these treatments have excluded participants whose PTSD was comorbid with substance abuse. Thus, you would face a double whammy in trying to develop your treatment program based on the best evaluations. You would have serious doubts as to whether the findings of those studies can be generalized to Native Alaskan girls or girls with comorbidity. Even if the ethnicity issue didn't matter, the comorbidity issue might matter a great deal.
Even if you can't find the best sorts of evidence supporting the effectiveness of an intervention with clients just like yours, you still can operate from an EIP framework. One option would be to look for less rigorous evaluations that have involved clients like yours and which – while not offering the best evidence from a scientific standpoint – are not fatally flawed and thus offer some credible evidence supporting a particular intervention. If that option doesn't pan out, an alternative would be to use your practice judgment in deciding whether an intervention supported by the best evidence with clients unlike yours seems to be worth proposing to your client. If you monitor client progress (or lack thereof) during your client's treatment, you can change course if the intervention is not achieving the desired result. When you do discover a lack of evidence specific to your particular client population or target problem or problems, you may even be inspired to partner with researchers to test interventions and contribute to the research evidence. Novel practices can come from practitioners who are frustrated with the limitations of the interventions or the currently available research evidence.
2.6.1 Strategies for Overcoming Feasibility Obstacles
As we mentioned earlier, in your real world of everyday practice, you may encounter some practical obstacles limiting your ability to implement the EIP process in an ideal fashion. Some strategies for overcoming those obstacles are presented in Box 2.2.
The time and cost it takes to learn and provide