Evaluation Design
Evaluation design is the structure that allows you to collect and analyze the data needed to answer your evaluation questions. It is important that the evaluation design align with your logic model, goals, research questions, available resources, and any funder requirements.
Evaluation Plans
What is it? An evaluation plan is the framework that guides data collection. If done well, it aligns with the overall program goal, the evaluation goal, and the logic model itself. Ideally, it is developed in the planning stages of a program so that it can inform the setting of target outcomes. Elements of the evaluation plan include the outcomes or outputs it will assess, the indicators and targets for those outcomes, the assessment measures (how the outputs or outcomes will be measured), the timeframe in which the measures will be implemented, and who is responsible for collecting the data. Our consultants can help you create a solid evaluation plan that aligns with your goals, questions, and logic model, and takes your analysis goals into account.
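To illustrate (the program and figures here are purely hypothetical): a job-training program might plan to assess the outcome "participants gain employment," with job-placement rate as the indicator, a target of 70% of participants placed within six months, follow-up surveys as the assessment measure, quarterly data collection as the timeframe, and the program coordinator named as the person responsible for gathering the data.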
How can we help you?
- Include formative, summative, process, outcome, or impact evaluation in the design, depending on what is needed
- Help determine the evaluation questions
- Align evaluation plan with your logic model
Related Services
Lessons Learned
Lessons learned are the experiences, knowledge, understandings, or outcomes gained from a particular project or program that should be taken into account on future projects or programs.
Monitoring & Evaluation
Evaluation Coaching & Training
Coaches have the experience and expertise to support the learning and tasks of the people who need help.
Evaluation Logic Models
A logic model is a one-page, compelling graphic (your road map) that tells the reader/reviewer exactly what, when, where, why, and how.
Articles and White Papers About Evaluation Plans
- Whose Job is it to Evaluate?
- The Problem with Relying Solely on Dashboards
- Delivering Strong M&E Reports
- M&E and Technology
FAQ About Evaluation Plans
What are the limitations of document review? Data from document review can be inaccurate, incomplete, biased, disorganized, or irrelevant; it can also be time-consuming to compile, organize, and analyze a large volume of documents.
What type of interview is most commonly used? Semi-structured interviewing is the most common type: it follows an interview guide with predetermined open-ended questions while allowing spontaneous follow-up questions and probing to yield in-depth data.
What should we consider when creating an evaluation plan? There are many things to keep in mind, including the program or project goals, the evaluation questions that need to be answered, the key stakeholders, the program's activities, outputs, and outcomes, and any challenges or other factors that may affect the program.
When can outcome evaluation begin? Generally speaking, outcome evaluation can begin to measure changes at least six months after program implementation.
How should we present evaluation results at a conference? Conferences will usually provide their own guidelines for presenting your work. Increasingly, conferences are moving away from text-heavy presentations and toward icon-based graphics that center on key findings in layperson's terms, with supporting documentation of your actual methods. Simplicity and effective use of white space are key.