STEP 5: Evaluation

Evaluation is the systematic collection and analysis of information about intervention activities, characteristics, and outcomes. An evaluation helps you describe what you plan to do, monitor what you are doing, and identify needed improvements. The results can help with your sustainability planning by demonstrating which efforts are going well and should be sustained, and showing sponsors that resources are being used wisely.

STEP 5: Evaluation comprises the following primary tasks:

TASK 1: Recognize Each Function of Evaluation

The information you’ll gather through an evaluation has five functions:

  • Improvement: This is the most important function of an evaluation—improving the efficiency and effectiveness of your chosen strategies and how they are implemented.
  • Coordination: It’s important to assess how your group is functioning. Do all partners know what the others are doing, and how this work fits with their own actions and goals?
  • Accountability: Are the identified outcomes being reached? A good evaluation allows your group to describe its contribution to important population-level change.
  • Celebration: This function is all too often ignored. The path to reducing drug use at the community level is not easy, so a stated aim of any evaluation process should be to collect information that allows your group to celebrate its accomplishments.
  • Sustainability: A thorough evaluation can help you provide important information to the community and various funders, which promotes the sustainability of both your group and its strategies.

Program evaluations are often conducted in response to a grant or other funding requirement. As a result, reporting may be structured to address only those requirements, rather than to provide a functional flow of information among partners and supporters.

In a more comprehensive and well-rounded evaluation process, you provide the needed information to the appropriate stakeholders so that your group can make better choices (improvement), work more closely with its partners (coordination), demonstrate that commitments have been met (accountability), honor the team’s work (celebration), and show community leaders why they should remain invested in the coalition process (sustainability). In this way, one evaluation serves all five functions.

TASK 2: Engage Stakeholders

Evaluation cannot be done in isolation. Community health and development work involves partnerships—alliances among different organizations, board members, and those affected by the problem, who each bring unique perspectives.

When stakeholders are not appropriately involved in developing and implementing your evaluation plan, your findings are likely to be ignored, criticized, or resisted. Any serious effort to evaluate a program must consider the viewpoints of the partners who will be involved in planning and delivering activities, your target audience(s), and the primary users of the evaluation data. Stakeholder involvement helps to ensure that the evaluation design, including the methods and instruments used, is consistent with the cultural norms of the people you serve.

Engaging stakeholders who represent and reflect the populations you hope to reach greatly increases the chance that your evaluation efforts will be successful. When stakeholders are part of the process, they are likely to feel ownership for the evaluation plan and its results. They can also influence how or even whether your evaluation results are used.

Consider forming a Data Committee that would work in collaboration with an evaluator to collect the data, analyze results, and share findings with partners, the community, the media, and others. Having more people trained in data collection and analysis and able to spread the word about the group’s successes contributes to sustainability.

A strong evaluation system can provide monthly data about activities and accomplishments that can be used for planning and better coordination among partners. In addition, sharing evaluation data can give the group a needed boost during the long process of facilitating changes in community programs, policies, or practices.

TASK 3: Ensure Cultural Competence

Culture can influence many elements of the evaluation process, including data collection, implementation of the evaluation plan, and interpretation of results. Tools used to collect data (e.g., surveys, interviews) need to be sensitive to differences in culture—in terms of both the language used and the concepts being measured.

When selecting evaluation methods and designing evaluation instruments, consider the cultural contexts of the communities in which the intervention will be conducted. Here are some guiding questions to consider:

  • Are your data collection methods relevant and culturally sensitive to the population being evaluated?
  • Have you considered how different methods may or may not work in various cultures?
  • Have you explored how different groups prefer to share information (e.g., orally, in writing, one on one, in groups, through the arts)?
  • Do the instruments consider potential language barriers that may inhibit some people from understanding the evaluation questions?
  • Do the instruments consider the cultural context of the respondents?

TASK 4: Implement the Evaluation Plan

Your evaluation plan should address questions related to both process and outcomes.

A process evaluation monitors and measures your implementation activities, program operations, and service delivery. It addresses the consistency between your activities and goals, whether your activities reached the appropriate target audience, the effectiveness of your management, your use of program resources, and how your group functioned.

Process evaluation questions may include the following:

  • Were you able to involve the members and sectors of the community that you intended to at each step of the way? In what ways were they involved?
  • Did you conduct an assessment of the situation in the way you planned? Did it give you the information you needed?
  • How successful was your group in selecting and implementing appropriate strategies? Were these the “right” strategies, given the intervening variables you identified?
  • Were staff and/or volunteers the right people for the jobs, and were they oriented and trained before they started?
  • Was your outreach successful in engaging those from the groups you intended to engage?
  • Were you able to recruit the number and type of participants needed?
  • Did you structure the program as planned? Did you use the methods you intended? Did you arrange the amount and intensity of services, other activities, or conditions as intended?
  • Did you conduct the evaluation as planned?
  • Did you complete or start each element in the time you planned for it? Did you complete key milestones or accomplishments as planned?

An outcome evaluation looks at the ultimate impact of your intervention—its effect on the environmental conditions, events, or behaviors it aimed to change (whether to increase, decrease, or sustain). An intervention generally seeks to influence one or more particular behaviors or conditions (e.g., risk or protective factors), assuming that this will then lead to a longer-term change, such as a decrease in the use of a particular drug among youth.

An outcome evaluation can be done in various ways:

  • The “gold standard” involves two groups that are similar at baseline. One group receives the intervention and the other group serves as the control group. After the intervention, the outcomes for each group are compared. Ideally, you’ll continue to collect data after the intervention ends to estimate its effects over time.
  • If it’s not possible to have a control group, collect data from the intervention group at several points before, during, and after the intervention (e.g., at 3-, 6-, and 12-month intervals). This allows you to analyze the trend before the intervention, project what would have happened without it, and compare that projection to the actual trend afterward (a sketch of this approach follows the note below).

Note: This design is less conclusive than a control group comparison because it does not allow you to rule out other possible explanations for any changes you find. However, having some supporting evidence is better than having none.
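
To make the second approach concrete, here is a minimal sketch in Python of the trend-projection idea: fit a simple linear trend to the pre-intervention data points, project it forward as a “no intervention” counterfactual, and compare the projection with what was actually observed. All time points and rates are hypothetical, invented only for illustration; a real evaluation would use your own measures and a more rigorous statistical model.

    # Hypothetical past-30-day use rates (%) from youth surveys,
    # measured quarterly. (quarter, rate) pairs.
    pre = [(0, 14.0), (1, 14.6), (2, 15.1), (3, 15.8)]   # before intervention
    post = [(4, 15.2), (5, 14.7), (6, 14.1)]             # after intervention

    # Fit a simple linear trend to the pre-intervention points
    # (ordinary least squares, computed by hand to avoid dependencies).
    n = len(pre)
    mean_t = sum(t for t, _ in pre) / n
    mean_y = sum(y for _, y in pre) / n
    slope = sum((t - mean_t) * (y - mean_y) for t, y in pre) / sum(
        (t - mean_t) ** 2 for t, _ in pre
    )
    intercept = mean_y - slope * mean_t

    # Project the pre-intervention trend forward as the "no intervention"
    # counterfactual, and compare it with what was actually observed.
    for t, observed in post:
        projected = intercept + slope * t
        print(f"quarter {t}: projected {projected:.1f}%, "
              f"observed {observed:.1f}%, difference {observed - projected:+.1f}")

In this invented example, use was trending upward before the intervention; the gap between the projected and observed rates after the intervention is the supporting evidence the note above refers to.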

You may have followed your plan completely and still had no impact on the conditions you were targeting, or you may have made multiple changes to the program or strategy and still reached your desired outcomes. The process evaluation will tell you how closely your plan was followed; the outcome evaluation will show whether your strategy produced the changes you intended.

If the intervention produced the outcomes you intended, then it achieved its goals. However, it’s still important to consider how you could make the intervention even better and more effective. For instance:

  • Can you expand or strengthen the parts of the intervention that worked particularly well?
  • Are there evidence-based methods or best practices that could make your work even more effective?
  • Would targeting more or different behaviors or intervening variables lead to greater success?
  • How can you reach people who dropped out early or who didn’t really benefit from your work?
  • How can you improve your outreach? Are there marginalized or other groups you are not reaching?
  • Can you add services—either directly aimed at intervention outcomes, or related services such as transportation—that would improve results for participants?
  • Can you improve the efficiency of your process, saving time and/or money without compromising your effectiveness or sacrificing important elements of your intervention?

Keep in mind that good interventions are dynamic; they keep changing and experimenting, always striving to improve.

TASK 5: Use Evaluation Results to Sustain Your Efforts

Evaluation plays a central role in sustaining your group’s work. It enables you to analyze and organize key pieces of your data so that you have accurate, usable information. It helps you develop the best plan possible for the community and allows your group to accurately share its story and results with key stakeholders. It also can help you track and understand community trends that may have an impact on your group’s ability to sustain its work.

A good evaluation monitors progress and provides regular feedback so that your strategic plan can be adjusted and improved. Your group may implement a variety of activities aimed at changing community systems and environments. By tracking information related to these activities and their effectiveness, as well as stakeholder feedback, community changes, and substance abuse outcomes, you can build a regular feedback loop for monitoring your progress and results. This information allows you to quickly see which strategies and activities have a greater impact than others, determine areas of overlap, and find ways to improve your group’s functioning. Using information from the evaluation, your group can adjust its strategic plan and continuously improve its ability not only to sustain its work, but also to achieve community-wide reductions in opioid misuse and its consequences.
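
As one illustration of such a feedback loop, suppose (hypothetically) that your group logs each documented community change along with the month it occurred and the strategy that produced it. A short tally like the Python sketch below, with invented entries, could then feed a monthly review of which strategies are producing the most documented changes:

    from collections import Counter

    # Hypothetical log of documented community changes; in practice this
    # would come from your group's own tracking system or Data Committee.
    changes = [
        {"month": "2024-03", "strategy": "retailer compliance checks"},
        {"month": "2024-03", "strategy": "school policy change"},
        {"month": "2024-04", "strategy": "retailer compliance checks"},
        {"month": "2024-05", "strategy": "media campaign"},
        {"month": "2024-05", "strategy": "retailer compliance checks"},
    ]

    # Tally documented changes by strategy to see where effort is paying off.
    by_strategy = Counter(entry["strategy"] for entry in changes)
    for strategy, count in by_strategy.most_common():
        print(f"{strategy}: {count} documented change(s)")

Even a simple count like this, reviewed regularly alongside stakeholder feedback and outcome data, helps the group spot which strategies are gaining traction and where to redirect effort.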

Sharing your evaluation results can stimulate support from funders, community leaders, and others in the community. Communicate your findings in ways that meet the needs of your various stakeholders. Consider the following:

  • Presentation: Think about how your findings are reported, including layout, readability, and user-friendliness, and who will present the information.
  • Timing: Be mindful of when the data are needed. If a report is not ready in time for a key legislative session, the chances of the data being used drop dramatically.
  • Relevance: If the evaluation design is logically linked to the purpose and outcomes of the project, the findings are far more likely to be put to use.
  • Quality: This will influence whether your findings are taken seriously.
  • Post-evaluation technical assistance: Questions of interpretation will arise over time, and people will be more likely to use the results if they can get their questions answered after the findings have been reported.

Evaluations are always read within a particular political context or climate. Some evaluation results will get used because of political support, while others may not be widely promoted due to political pressure. Other factors, such as the size of your organization or program, may matter as well. Sometimes larger programs get more press; sometimes targeted programs do.

It is also important to consider competing information:

  • Do results from similar programs confirm or conflict with your results?
  • What other topics may be competing for attention?

Develop a plan for disseminating your evaluation findings, taking these questions into consideration and anticipating as many of the potential issues as possible.
