February 2018

Beyond Getting the Grant: The Project Evaluation Process

Receiving grant funding comes with a serious responsibility to manage funds appropriately and demonstrate how the funded project is meeting defined goals and objectives. A comprehensive, carefully crafted project evaluation plan allows grantees to fulfill this responsibility more efficiently and to make any adjustments needed to achieve desired outcomes. Increasingly, grant applicants incorporate external evaluation into their project design as a means of fulfilling this monitoring and evaluation responsibility. Whether an institution is utilizing the services of an external evaluator or managing evaluation reporting internally, JCCI Resource Development Services recommends following these guidelines when designing a project evaluation plan.


Evaluation starts with the grant application. Formulating a basic plan for evaluating project success and achievement of project milestones and objectives when developing the grant project and writing the grant application ensures that stakeholders are aware of evaluation responsibilities before any work on the project ever begins. Each project and each grantee is unique and will require different evaluation methods; however, there are three standard evaluation tactics that are especially helpful in defining and outlining evaluation needs:


  • Michael Quinn Patton developed the Utilization-Focused Evaluation (UFE) approach that emphasizes engaging real and specific end users of the evaluation results and ensuring that evaluation reports are drafted with these users in mind. Patton advises continuously collecting, analyzing, and reporting quantitative and qualitative information to facilitate data-driven decision making. Using the UFE approach entails examining the evaluation process itself to make sure the process contributes valid and reliable information to end users and stakeholders in a timely manner. The evaluation must be meaningful.


  • Logic models are simplified visual representations of an identified need, a vision for fulfilling the need, and the strategy to accomplish the vision. Logic models use a linear format of inputs, activities, outputs, and outcomes to ensure all aspects of a program are measured against established benchmarks. Logic models can also serve as a map to show where participants and processes should be to ensure long-term project success. Several organizations such as The W. K. Kellogg Foundation, REL Pacific, and ERIC provide helpful information about developing logic models as well as sample logic models.


  • A participatory evaluation approach requires that those with “boots on the ground” rather than those in the “ivory tower” guide evaluation efforts. A participatory evaluation approach has the distinct advantage of gaining especially useful insights based on firsthand experience; project staff participation in the evaluation process is a key component of any successful evaluation effort. This approach can be time consuming, though, and requires a great deal of coordination. MEERA, a group formed to help environmental educators with evaluations, provides a good overview of the participatory evaluation approach’s pros and cons as well as examples of when this methodology would be appropriate.


In addition to determining the best evaluation practice or combination of approaches that suit a particular project and institutional goals, a good plan will also outline the type of evaluation the institution will use. The two main types of evaluation are formative and summative, and subcategories within those two types of evaluation include process, impact, and outcome evaluations.


Formative evaluations are common in many business operations and are often referred to as continuous quality improvement (CQI) efforts because they are ongoing rather than one-time reviews. Summative evaluations, on the other hand, are reviews of project or program successes and outcomes, typically performed at the end of specific time frames or at the end of the entire project. Robert Stake, highly regarded for his influential development of program evaluation theory and practice, distinguished the two types with his now-famous metaphor: “When the cook tastes the soup, that’s formative; when the guests taste the soup, that’s summative.”


Process evaluation is a subset of formative evaluation, while impact and outcome evaluations are subsets of summative evaluation. Recommended best practice is to use both formative and summative evaluations for a comprehensive assessment of the grant project. Once the evaluation methods and types have been selected, the next step in the evaluation design involves identifying specific components at the outset:

  • The organization or person responsible for the evaluation plan and reports.
  • Agreed-upon benchmarks and measurements.
  • Types of data to be collected and an agreed-upon schedule for data collection.
  • Agreed-upon methods for analyzing, reporting, and using collected data.
  • Reporting schedule.

Ultimately, the evaluation process should inform stakeholders about the progress of the funded project as well as project successes or the need for adjustments to the project plan to achieve desired outcomes. Though funders might not require an evaluation report until the end of the grant period, or at the mid-point of multi-year projects, preparing draft evaluations throughout the duration of the grant project will better position the grantee to give a meaningful evaluation report. Good project evaluation reports not only fulfill the responsibility of providing a detailed accounting to funders; they can also help grantees learn from a project and use that knowledge to be more competitive in future project planning and grant applications. Evaluation reports might also guide other institutions in implementing similar projects based on the grantee’s experiences.