Step 5: Justify Conclusions

The CDC defines this step as “making claims regarding the program that are warranted on the basis of data that have been compared against pertinent and defensible ideas of merit, value, or significance.” Justify conclusions by linking them to the evidence gathered and judging them against agreed-upon values or standards set by the stakeholders.

Title V programs have multiple stakeholders who bring different values to the table, including federal partners, consumers and families, local health departments, community organizations, and policy makers. Conclusions are justified when they are based on evaluation evidence and take stakeholder viewpoints into account.

What are the elements of justifying conclusions on the basis of evidence?

Element 1: Standards 

Program standards answer the question “What makes the program successful or unsuccessful in the eyes of the stakeholders?” Because multiple stakeholders are involved in Title V programs, they may disagree on certain program standards. Ideally, standards and benchmarks for performance indicators are negotiated at the beginning of the evaluation as part of stakeholder engagement (Step 1).

Element 2: Analysis and Synthesis

Data and information need to be organized, classified, summarized, compared with other relevant information, and presented.
Possible activities related to Analysis and Synthesis include:
  • Clean and enter data into a database housed in software such as Microsoft Excel or Access, SAS, or Stata, and check for errors.
  • Calculate counts, percentages, and other summary statistics for each evaluation indicator (a brief sketch follows this list). Examples using the PCAP program could include:
    • Title V National Performance Measures, such as the percent of pregnant women who smoke in a geographic area or the percent of children receiving care in a medical home in a certain geographic area
    • Number of pregnant women served by a program
    • Percent change in patient satisfaction survey scores before and after a clinic intervention
  • Present data in a clear and concise format, which may include evaluation reports, tables, data visualizations, or GIS maps.
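As a minimal sketch of the indicator calculations above, the example below uses Python and pandas with a hypothetical PCAP-style data set; the column names and values are invented for illustration, and the same counts, percentages, and percent change could just as easily be computed in Excel, SAS, or Stata.

import pandas as pd

# Hypothetical PCAP-style client records; column names and values are
# illustrative only, not actual program data.
clients = pd.DataFrame({
    "client_id": [1, 2, 3, 4, 5],
    "smokes_during_pregnancy": [True, False, True, False, False],
    "satisfaction_before": [3.2, 4.0, 2.8, 3.5, 3.9],  # pre-intervention survey score
    "satisfaction_after": [3.8, 4.1, 3.5, 3.6, 4.4],   # post-intervention survey score
})

# Simple error check after data entry: stop if any record has missing values.
assert clients.notna().all().all(), "Clean records with missing values first"

# Number of pregnant women served by the program.
n_served = len(clients)

# Percent of pregnant women served who smoke.
pct_smoking = clients["smokes_during_pregnancy"].mean() * 100

# Percent change in mean patient satisfaction after the clinic intervention.
before = clients["satisfaction_before"].mean()
after = clients["satisfaction_after"].mean()
pct_change = (after - before) / before * 100

print(f"Women served: {n_served}")
print(f"Percent who smoke: {pct_smoking:.1f}%")
print(f"Change in satisfaction: {pct_change:+.1f}%")
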
Element 3: Interpretation

Interpretation answers the question “What are my findings telling me?” Findings are interpreted in the practical context of the program and depend on stakeholder perspectives and standards.

Element 4: Judgement

Based on available evidence and program standards agreed upon by stakeholders, evaluators can make judgements about the program’s merit, usefulness, and success. Judgements about the overall program may not be simple or shared by all stakeholders. Evaluators may find that a program is successful on one standard (participants achieving smoking cessation) but not on another (the number of pregnant women reached in the first year).

Element 5: Recommendations

Based on the program evaluation results, what recommendations can be made to improve the program? Is there evidence to continue or terminate the program? Is there an area to focus on for greater success or cost savings? Whatever the recommendations may be, they should keep stakeholder values in mind and be backed by sufficient evidence.

Example from Innovation Station: Every Child Succeeds Evidence-Based Home Visitation Program

The evaluation of the AMCHP Innovation Station Best Practice Every Child Succeeds Evidence-Based Home Visitation program illustrates multiple elements of Step 5: Justify Conclusions. Every Child Succeeds is a home visitation program in Northern Kentucky and Southwest Ohio that aims to give children born into at-risk homes a safe, healthy, nurturing start through collaborative home visitation. Data collected from multiple sources are analyzed and synthesized monthly and quarterly in continuous quality improvement (CQI) data visualizations, including trend charts, red-green charts, and control charts. Evaluation data are communicated to all agencies involved with the program and interpreted in the practical context of program impact. Recommendations that come out of the evaluation data are tested on a small scale through Plan-Do-Study-Act (PDSA) cycles to identify new practices that may improve program performance.
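As a rough sketch of the kind of control-chart calculation that can sit behind such CQI visualizations, the Python example below computes a center line and control limits for a hypothetical monthly indicator. The visit counts are invented, the program's actual charting methods are not described here, and the limits use a simplified three-standard-deviation rule rather than the moving-range method of a standard individuals chart.

import statistics

# Hypothetical monthly counts of completed home visits; these values are
# invented for illustration and are not Every Child Succeeds data.
monthly_visits = [112, 108, 121, 117, 105, 119, 124, 110, 115, 109, 118, 122]

center = statistics.mean(monthly_visits)
sd = statistics.stdev(monthly_visits)

# Simplified control limits at +/- 3 standard deviations around the mean
# (standard individuals charts usually derive limits from the average moving range).
upper_limit = center + 3 * sd
lower_limit = center - 3 * sd

# Months falling outside the limits would be flagged for follow-up,
# for example through a Plan-Do-Study-Act cycle.
signals = [v for v in monthly_visits if v > upper_limit or v < lower_limit]

print(f"Center line: {center:.1f}")
print(f"Control limits: ({lower_limit:.1f}, {upper_limit:.1f})")
print(f"Out-of-control points: {signals}")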