Evidence-Based Practices in MCH
By Michael R. Fraser, PhD, CAE
I love spring. Everything seems so new, so fresh. The different shades of green are amazing, beautiful flowers seem to pop up everywhere, and our special treat in Washington is witnessing the blossoming of our world-famous cherry trees. To me, spring is a great time to start something new – a “fresh start” – that often coincides with casting out the old through a ritual known as “spring cleaning.”
What might this season of new, fresh starts mean to you? How about taking a look at the work of your maternal and child health program and applying a little spring cleaning to it? What are you holding on to that you do not use anymore, what can be thrown away to make room for the new? What programs are you running the way you ran them last year, the year before that, and the year before the year before that? Are they fresh? Are they meeting your state’s needs? Do they need a little bit of re-tooling and refreshing to meet new challenges?
This month’s issue of Pulse is dedicated to the use of evidence-based best practices in maternal and child health. While there is little appetite for increased spending here in Washington, there is a great deal of attention being paid to “evidence-based” programs. Policymakers want to know that precious federal resources are being invested in programs that work, and they want to see evidence of impact before they invest any more resources. The new Maternal, Infant, and Early Childhood Home Visiting Program is a good example of what we might see Congress do a lot more of in the future – guide federal investments toward programs that have documented evidence of success and support their spread nationwide.
A great deal has been written about what “evidence” is and the issues surrounding deeming a program “evidence-based.” I am going to leave the science on this one to a future article, but I do realize that there is a continuum of evidence on which programs can be evaluated and that using one “gold standard” for evidence is probably not realistic given some of the challenges of evaluating and measuring impact in maternal and child health populations. That said, it is hard to quibble with the notion that if we know what works, we should be doing more of it. So, what keeps us from using more evidence-based practices in MCH? Here are three reasons I have heard in my visits to Title V programs nationwide about what makes it difficult to implement evidence-based practices at the state level.
“It is hard to find evidence-based practices.” AMCHP knows that this is certainly true, which is the main reason we are working with MCHB and other national partners to grow our Innovation Station – our online collection of best practices in MCH. As we collect best practices, we want the Innovation Station to be the place for state MCH programs to find programs that work. We also know that many Title V program staff do not have time to read the academic research in which evidence-based practices are disseminated. That is why AMCHP works to include summaries of important research findings in our publications, on our conference agenda, and through our webinars, regional meetings, and contacts with states.
“It won’t work here.” I love this one – we all want to be unique. But how many times do we need to reinvent the wheel? To me, the real issue is not that the model will not work in your state, but rather that you may have concerns about staying true to the core facets of the program in a different setting or with a different population. To address this concern, I think we have to be very clear about the populations we are serving and how closely they resemble the study population. Certainly, not all best practices will work in all settings. The notion that we have to tailor every MCH program to meet the needs of every state and community, however, needs to be addressed.
“We don’t have enough evidence.” We certainly want to make sure that enough research has been conducted before we entirely revamp our MCH programs or invest a great deal of resources in a particular program or model. But how much evidence is enough? It strikes me that pretty much every journal article I have ever read ends with “there is a need for more research.” Certainly there is a need to continue conducting research down promising pathways. But how much is enough? This is a tough question, but my guess is that for most practices in MCH we have reached a threshold of research that really is enough. So, as research continues to evaluate specific programs and models, let’s use what we have and ask whether more research really will tell us anything more about a program’s effectiveness. Sometimes good enough really is good enough.
Do any of these reasons ring true to you? Let’s work together to continue to share the evidence and support “what works.” As MCH programs come under greater scrutiny due to fiscal constraints and challenges, basing decisions on the best evidence becomes paramount. The AMCHP network is a learning network, always eager to share and to learn from one another. I hope you use this spring season to engage with AMCHP and share what is working with us.