
Editorial

Practical Program Evaluation

Connected Science Learning July-September 2019 (Volume 1, Issue 11)

By Beth Murphy

I love data—especially when used to look at learning experiences and figure out what’s working, what isn’t, and what to do next. Not surprisingly, then, I’m quite excited about this issue of Connected Science Learning, which is all about Practical Program Evaluation. The choice of the word practical in this issue’s theme is intentional. Program evaluation takes time, and time is valuable. Educators have a lot of work to do before thinking about evaluation—so, if they are going to do it, it had better make a difference for their work.

Throughout my career I’ve seen how powerful program evaluation can be when it is done well. For example, a few years back I was project director for a program called STEM Pathways—a collaborative effort among five organizations providing science programming for the same six schools. Much about our program evaluation experience sticks with me: how important it was to involve program staff efficiently and effectively in evaluation design, implementation, and the interpretation of findings to inform decision-making—and how they were empowered by being part of the process; how program evaluation led us to discover things that were surprising and that we would not otherwise have learned; and how the data-informed program improvements we made led to better participant outcomes and greater program staff satisfaction.

I’ve also seen program evaluation done poorly—such as when it only tells you what you already know, when the information collected isn’t particularly useful for decisions or improvements, or when it fails to empower program staff. We’ve probably all been there at one time or another. But how do we avoid making those same mistakes again?

In my experience, program evaluation that provides utility and value to program developers and implementers also, almost by default, satisfies stakeholders. Frankly, if your program evaluation isn’t used for anything more than grant reports, you might want to rethink what you’re doing. Utilization-focused evaluation, an approach developed by evaluation expert Michael Quinn Patton, is based on the principle that evaluation should be useful to its intended users for informing decisions and guiding improvement. Patton reminds us to reflect on why we are collecting data and what we intend to use them for—to make sure that conducting the evaluation is worth the time of the people who will implement it as well as the program participants being evaluated.

The good news is that there is no need to start from scratch. There are many research-based tools and practices to learn from and use, and you will learn about many of them in Connected Science Learning over the next three months. Join us in exploring case studies of program evaluation efforts as well as helpful resources, processes, and implementation tips. You’ll also find articles featuring research-based and readily available tools for measuring participant outcomes and program quality. I hope this issue of Connected Science Learning inspires your organization’s program evaluation practice!

 

Beth Murphy, PhD (bmurphy@nsta.org) is field editor for Connected Science Learning and an independent STEM education consultant with expertise in fostering collaboration between organizations and schools, providing professional learning experiences for educators, and implementing program evaluation that supports practitioners in doing their best work.

