Evaluation Planning
Planning often gets overlooked as people focus on the nitty-gritty of data collection or analysis. Without enough time to plan properly, resources may be over- or underutilized, the wrong outcomes may be assessed, or unfeasible targets may be set. Spending enough time on this phase is critical to the success of your program or project.
Monitoring & Evaluation
What is it? Monitoring is the process of collecting information over time, while evaluation is the process of determining whether a program is reaching its stated goals. Together, Monitoring and Evaluation (M&E) is used to assess the performance of projects, institutions, and programs set up by governments, international organizations, and NGOs. Its goal is to improve current and future management of outputs, outcomes, and impact. Developing an M&E system plays a vital role for both internal and external stakeholders. Internally, it helps you determine the data you need to collect to make decisions about programs, resources, and timelines, and to ensure a program is being conducted with fidelity. Externally, it helps you clearly communicate your goals and identity, build confidence and trust by demonstrating results, and provide financial accountability.
To be effective, M&E systems must collect data continually and ensure that the data are appropriately analyzed, reported, and discussed for programmatic improvement. Solid M&E systems are grounded in strong theories of change or logic models. Once that foundation is in place, an M&E Framework is created, followed by an M&E Plan to be operationalized. Our consultants can help you determine if your organization is ready for M&E by conducting a Readiness Assessment, build your foundational logic model or theory of change, develop an M&E Framework and Plan appropriate to the size and scope of your organization, and get you started on the road to results-based accountability.
How can we help you?
- Conduct M&E Readiness Assessment
- Develop a solid logic model and theory of change to guide M&E
- Build specific M&E frameworks and plans that collect only the data you need
- Build internal capacity to manage M&E
Related Services
Lessons Learned
Lessons learned are experiences, knowledge, understandings, or outcomes gained from a particular project or program that should be taken into account in future projects or programs.
Evaluation Coaching & Training
Coaches have the experience and expertise to support the learning and tasks of the people who need help.
Evaluation Logic Models
A logic model is a compelling one-page graphic (your road map) that tells the reader or reviewer exactly what, when, where, why, and how.
Articles and White Papers About Monitoring & Evaluation
To RCT or Not? Randomized Control Trials in Nonprofit Work
Whose Job is it to Evaluate?
The Problem with Relying Solely on Dashboards
Case Study: Apprenticeship Program Evaluation
A statistically representative, comprehensive program evaluation of two workforce development programs, working with each of the vendors and the County to use interim findings to improve program design and...
Finalizing Reports: Statements of Differences
Do We Really Need to Share Our Results?
Delivering Strong M&E Reports
FAQ About Monitoring & Evaluation
How can observational data be collected?
Data can be collected through standardized checklists or observation guides, or through handwritten or voice-recorded field notes that capture open-ended narrative data.
Who should be in a focus group?
It depends on the specific project, but focus groups are typically best composed of homogeneous groups: people who share common attributes. The commonalities shared by a group should be determined by the evaluation goals, the topics being explored, and the cultural context of the evaluation.
When can outcome evaluation begin?
Generally speaking, outcome evaluation can begin to measure change at least six months after program implementation.
When should monitoring begin?
As soon as a program is implemented, and throughout program implementation.
How should evaluation results be presented at a conference?
Conferences usually provide their own guidelines for presenting your work. Increasingly, conferences are moving away from text-heavy presentations toward icon-based graphics, centering on key findings in layperson terms with supporting documentation on your actual methods. Simplicity and effective use of white space are key.
What Our Clients Say About Us
Peggy Ostrander, DNPc, APRN, FNP-C, Plano, Texas