That’s center stage today. As members of the American Evaluation Association (AEA), JCCI Resource Development Services personnel are prepared to conduct comprehensive evaluations of your organization’s programs and services. We subscribe to the professional educational program evaluation standards of the AEA. With these standards and principles in mind, JCCI Resource Development Services can assist you in designing the best evaluation for your institution’s programs and services.
We can help you answer such questions as:
- Have you provided for both formative and summative evaluations?
- Were your measurable objectives achieved? If not, why not?
- What impact and results did you really achieve?
- Were your implementation strategies accomplished according to plan? If not, why not?
- What, beyond the measurable objectives, did the project accomplish?
- What impacts and outcomes were there?
- How did the institution benefit from the project?
- Is your evaluation plan adequate?
Evaluation – Theoretical Framework
As more and more colleges incorporate external evaluation into their project designs, JCCI Resource Development Services supports them from the planning stage through the conduct of the evaluation. Understanding that every project and funding agency is unique, we draw on three evaluation frameworks, among others, in our work.
The first, Patton’s “utilization-focused” approach, emphasizes the need to continuously collect, analyze, and report quantitative and qualitative information to facilitate data-driven decision making.¹ Patton also emphasizes examining the evaluation process itself to ensure it contributes valid and reliable information for use by stakeholders in a timely manner. Second, the use of logic models provides a rubric for determining where participants and processes should be each year to ensure long-term project success.² Logic models use a linear format of inputs, activities, outputs, and outcomes to ensure all aspects of a program are measured against established benchmarks. Finally, a participatory approach encourages the involvement of project staff in the evaluation process, a key component of any successful evaluation effort.³
Evaluation of Federal Discretionary Grant Programs
Evaluation Language for Generic Application Packages
“A strong evaluation plan should be included in the application narrative and should be used, as appropriate, to shape the development of the project from the beginning of the grant period. The plan should include benchmarks to monitor progress toward specific project objectives and also outcome measures to assess the impact on teaching and learning or other important outcomes for project participants. More specifically, the plan should identify the individual and/or organization that have agreed to serve as evaluator for the project and describe the qualifications of that evaluator. The plan should describe the evaluation design, indicating:
(1) what types of data will be collected;
(2) when various types of data will be collected;
(3) what methods will be used;
(4) what instruments will be developed and when;
(5) how the data will be analyzed;
(6) when reports of results and outcomes will be available; and
(7) how the applicant will use the information collected through the evaluation to monitor progress of the funded project and to provide accountability information both about success at the initial site and effective strategies for replication in other settings. Applicants are encouraged to devote an appropriate level of resources to project evaluation.
Successful applicants will be expected to report annually on the progress of each project or study included in the grant, including a description of preliminary or key findings and an explanation of any changes in goals, objectives, methodology, or planned products or publications.”
The passage above is from the Federal Register of September 15, 2011. While this language applies to generic application packages, applicants pursuing any discretionary grant program would be wise to address these elements in their evaluation plans.
1 Patton, M.Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage.
2 Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines. Boston, MA: Pearson.
3 Leff, H., & Mulkern, V. (2002). Lessons learned about science and participation from multisite evaluations. In J. Herrell & R. Straw (Eds.), Conducting multiple site evaluations in real-world settings (pp. 89-100). New Directions for Evaluation (No. 94). San Francisco: Jossey-Bass.