STEP 5: Evaluation

Evaluation is the systematic collection and analysis of information about intervention activities, characteristics, and outcomes. Evaluation activities help groups describe what they plan to do, monitor what they are doing, and identify needed improvements.

The results of an evaluation can be used to assist in sustainability planning, including determining what efforts are going well and should be sustained, and showing sponsors that resources are being used wisely.

STEP 5: Evaluation comprises the following primary tasks:

TASK 1: Incorporate the Five Functions of Evaluation

Information gathered through an evaluation has five functions:37

  • Improvement. This is the most important function of an evaluation—improving the efficiency and effectiveness of your chosen strategies and how they are implemented.
  • Coordination. The evaluation process assesses the functioning of your group, allowing partners to know what the others are doing, how this work fits with their own actions and goals, and what opportunities exist for working together in the future.
  • Accountability. Are the identified outcomes being reached? A good evaluation allows your group to describe its contribution to important population-level change.
  • Celebration. This function is all too often ignored. The path to reducing drug use at the community level is not easy, so a stated aim of any evaluation process should be to collect information that allows your group to celebrate its accomplishments.
  • Sustainability. A thorough evaluation can help you provide important information to the community and various funders, which promotes the sustainability of both your group and its strategies.

Program evaluations are often conducted in response to a grant or other funding requirement. As a result, reporting may be structured only to address that requirement rather than to provide a functional flow of information among partners and supporters. To accomplish the five functions of evaluation, you need a more comprehensive and well-rounded evaluation process, one that provides the needed information to the appropriate stakeholders so that your group can make better choices (improvement), work more closely with partners (coordination), demonstrate that commitments have been met (accountability), honor your team’s work (celebration), and show community leaders why they should remain invested in the coalition process (sustainability).

TASK 2: Ensure Cultural Competence

Culture can influence many elements of the evaluation process, including data collection, implementation of the evaluation plan, and interpretation of results. Tools used to collect data (e.g., surveys, interviews) need to be sensitive to differences in culture—both in terms of the language used and the concepts being measured.

When selecting evaluation methods and designing evaluation instruments, you should consider the cultural contexts of the communities in which the intervention will be conducted. Here are some guiding questions to consider:

  • Are your data collection methods relevant and culturally sensitive to the population being evaluated?
  • Have you considered how different methods may or may not work in various cultures?
  • Have you explored how different groups prefer to share information (e.g., orally, in writing, one on one, in groups, through the arts)?
  • Do the instruments consider potential language barriers that may inhibit some people from understanding the evaluation questions?
  • Do the instruments consider the cultural context of the respondents?


TASK 3: Engage Stakeholders

Evaluation cannot be done in isolation. Almost everything done in community health and development work involves partnerships—alliances among different organizations, board members, those affected by the problem, and others who each bring unique perspectives. When stakeholders are not appropriately involved, evaluation findings are likely to be ignored, criticized, or resisted. People who are included in the process are more likely to feel a good deal of ownership for the evaluation plan and results. They will probably want to develop it, defend it, and make sure that the evaluation really works. Therefore, any serious effort to evaluate a program must consider the viewpoints of the partners who will be involved in planning and delivering activities, your target audience(s), and the primary users of the evaluation data.

Engaging stakeholders who represent and reflect the populations you hope to reach greatly increases the chance that evaluation efforts will be successful. Stakeholder involvement helps to ensure that the evaluation design, including the methods and instruments used, is consistent with the cultural norms of the people you serve. Stakeholders can also influence how or even whether evaluation results are used.

All partners in your substance misuse and abuse prevention or reduction efforts should be involved in developing and implementing your evaluation plan. To facilitate this process, you may consider forming a committee focused on evaluation. The committee would work in collaboration with an evaluator to collect the data, analyze results, and share findings with partners, the community, the media, and others. Having more people trained in data collection and analysis and able to spread the word about the group’s successes contributes to sustainability.

A strong evaluation system can provide monthly data about activities and accomplishments that can be used for planning and better coordination among partners. In addition, sharing evaluation data can give the group a needed boost during the long process of facilitating changes in community programs, policies, or practices.

You may want to continue working with the same stakeholders who served as your key informants, or you might want to reach out to new stakeholders.

TASK 4: Implement the Evaluation Plan

Your evaluation plan should address questions related to both process (i.e., program operations, implementation, and service delivery) and outcomes (the ultimate impact of your intervention).

Process Evaluation

A process evaluation monitors and measures your activities and operations. It addresses such issues as consistency between your activities and goals, whether activities reached the appropriate target audience(s), the effectiveness of your management, use of program resources, and how your group functioned.


Process evaluation questions may include the following:

  • Were you able to involve the members and sectors of the community that you intended to at each step of the way? In what ways were they involved?
  • Did you conduct an assessment of the situation in the way you planned? Did it give you the information you needed?
  • How successful was your group in selecting and implementing appropriate strategies? Were these the “right” strategies, given the intervening variables you identified?
  • Were staff and/or volunteers the right people for the jobs, and were they oriented and trained before they started?
  • Was your outreach successful in engaging those from the groups you intended to engage? Were you able to recruit the number and type of participants needed?
  • Did you structure the program as planned? Did you use the methods you intended? Did you arrange the amount and intensity of services, other activities, or conditions as intended?
  • Did you conduct the evaluation as planned?
  • Did you complete or start each element in the time you planned for it? Did you complete key milestones or accomplishments as planned?

Outcome Evaluation

An outcome evaluation looks at the intervention’s effect on the environmental conditions, events, or behaviors it aimed to change (whether to increase, decrease, or sustain). Usually, an intervention seeks to influence one or more particular behaviors or conditions (e.g., risk or protective factors), assuming that this will then lead to a longer-term change, such as a decrease in the use of a particular drug among youth. You may have followed your plan completely and still had no impact on the conditions you were targeting, or you may have made multiple changes to your plan and still reached your desired outcomes. The process evaluation will tell you how closely your plan was followed, and the outcome evaluation will show whether your strategy produced the changes you intended.


An outcome evaluation can be done in various ways:

  • The “gold standard” involves two groups that are similar at baseline. One group is assigned to receive the intervention, and the other serves as the control group. After the intervention, the outcomes among the intervention group are compared with the outcomes among the control group. Ideally, data should continue to be collected after the intervention ends in order to estimate effects over time.
  • If it is not possible to include a control group (e.g., due to financial constraints), you can evaluate just the intervention group, collecting data at several points before, during, and after the intervention (e.g., at 3-, 6-, and/or 12-month intervals). This design allows the evaluator to analyze any trends before the intervention and to project what would have happened without it, so that the projection can be compared to the actual trend after the intervention. This type of outcome evaluation is less conclusive than one using a control group because it does not allow you to rule out other possible explanations for any changes you find. However, having some supporting evidence is better than having none.
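Both designs above come down to simple comparisons that can be sketched in a few lines of code. The sketch below is illustrative only: the function names and the survey values are assumptions made up for this example, not part of this guide. It estimates an intervention effect two ways, with a difference-in-differences comparison for the two-group design and a pre-intervention trend projection for the single-group design.

```python
from statistics import mean

def diff_in_diff(ctrl_pre, ctrl_post, int_pre, int_post):
    """Two-group ("gold standard") design: the estimated effect is the
    change in the intervention group minus the change in the control group."""
    return (mean(int_post) - mean(int_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

def projected_trend(pre_points, future_times):
    """Single-group design: fit a least-squares line to the pre-intervention
    (time, value) observations and project what would likely have happened
    at later time points without the intervention."""
    times = [t for t, _ in pre_points]
    values = [v for _, v in pre_points]
    t_bar, v_bar = mean(times), mean(values)
    slope = (sum((t - t_bar) * (v - v_bar) for t, v in pre_points)
             / sum((t - t_bar) ** 2 for t in times))
    intercept = v_bar - slope * t_bar
    return [intercept + slope * t for t in future_times]

# Hypothetical past-30-day-use rates (percent) from annual youth surveys.
pre = [(0, 20.0), (1, 21.0), (2, 22.0)]    # rising trend before the intervention
projection = projected_trend(pre, [3, 4])  # projected rates without intervention
observed_post = [20.5, 19.0]               # rates actually observed afterward
effect = [obs - proj for obs, proj in zip(observed_post, projection)]
```

In practice an evaluator would use many more data points and formal statistical tests, but the underlying logic is the same: compare what happened with your best estimate of what would have happened without the intervention.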


If the intervention produced the outcomes you intended, then it achieved its goals. However, it is still important to consider how you could make the intervention even better and more effective. For instance:

  • Can you expand or strengthen parts of the intervention that worked particularly well?
  • Are there evidence-based methods or best practices that could make your work even more effective?
  • Would targeting more or different behaviors or intervening variables lead to greater success?
  • How can you reach people who dropped out early or who did not fully benefit from your work?
  • How can you improve your outreach? Are there marginalized or other groups you are not reaching?
  • Can you add services (either directly aimed at intervention outcomes or related services such as transportation) that would improve results for participants?
  • Can you improve the efficiency of your process, saving time and/or money without compromising your effectiveness or sacrificing important elements of your intervention?

Good interventions are dynamic; they keep changing and experimenting, always reaching for something better.

TASK 5: Use Evaluation Results to Sustain Your Group’s Work

Evaluation plays a central role in sustaining your group’s work. Evaluation enables you to take key pieces of data and analyze and organize them so that you have accurate, usable information. This process facilitates the development of the best plan possible for the community and allows your group to accurately share its story and results with key stakeholders. It also can help you track and understand community trends that may have an impact on your group’s ability to sustain its work.

A good evaluation monitors progress and provides regular feedback so that your strategic plan can be adjusted and improved. Your group may implement a variety of activities aimed at changing community systems and environments. By tracking information related to these activities and their effectiveness, as well as stakeholder feedback, community changes, and substance misuse and abuse outcomes, you can build a regular feedback loop for monitoring your progress and results. With this information, you can quickly see which strategies and activities have a greater impact than others, determine areas of overlap, and find ways to improve your group’s functioning. Using information from the evaluation, your group can adjust its strategic plan and continually improve its ability not only to sustain its work, but also to achieve community-wide reductions in substance misuse and abuse and its consequences.

Sharing your evaluation results can stimulate support from funders, community leaders, and others in the community. The best way to ensure the use of your data is to communicate your findings in ways that meet the needs of your various stakeholders. Consider the following:

  • Presentation. Think about how your findings are reported, including layout, readability, and user-friendliness, and who will present the information.
  • Timing. If a report is needed for the legislative session but is not ready in time, the chances of the data being used drop dramatically.
  • Relevance. If the evaluation design is logically linked to the purpose and outcomes of the project, the findings are far more likely to be put to use.
  • Quality. This will influence whether your findings are taken seriously.
  • Post-evaluation TA. Questions of interpretation will arise over time, and people will be more likely to use the results if they can get their questions answered after the findings have been reported.

Evaluations are always read within a particular political context or climate. Some evaluation results will get used because of political support, while others may not be widely promoted due to political pressure. Other factors, such as the size of your organization or program, may matter as well. Sometimes larger programs get more press; sometimes targeted programs do.

It is also important to consider competing information: Do results from similar programs confirm or conflict with your results? What other topics may be competing for attention? It is helpful to develop a plan for disseminating your evaluation findings, taking these types of questions into consideration.
