In a perfect world, the evaluation of professional development programs would provide:
staff with timely, accurate information so they can adjust project activities, an opportunity to reflect on the project's goals so they can revise and re-prioritize them, and knowledge of the project's overall, long-term effectiveness;
participants with an understanding of project goals and an opportunity to contribute to the design of the project;
participants and staff with an opportunity to be reflective about their learning, thereby contributing to that learning and providing a model for effective classroom assessment procedures (Madaus et al., 1992; Stevens et al., 1993).
While we recognize that other projects will need to develop instruments suited specifically to their own goals, we recommend considering the following types of evaluation instruments.
Pre-program written questionnaire. This questionnaire was especially useful in giving staff some background on the Scholars. We returned copies to the Scholars near the end of the program, asking them to reflect on their experience in the project.
Written questionnaires. These were scheduled at the end of each single-day session and at the end of every week of the summer programs. We found that stating the goal (for the day or week) and then asking for comments provided participants with a better opportunity to be reflective (and provided us with better information) than the usual "what was the most useful/least useful" questions.
End-of-project questionnaire. We provided participants with copies of their initial responses, asking them to reflect on their experience.
Interviews. We tried videotaped group interviews and individual telephone interviews. The participants clearly enjoyed the group interviews more, and we probably got as much information, or more. While these sources do not provide easily tabulated results, they do allow staff to move beyond initial answers, thus gaining a deeper understanding of the program.
Tests. Our staff and Scholars would say that we never gave tests. We did. Participants were often asked to reflect on what they had learned and what they still had questions about -- both in writing and in discussion. Participants were constantly observed as they engaged in their research. Staff and other participants helped those having problems with concepts or with new skills. While there was never a grade, there was constant feedback on the accuracy of understanding and the mastery of skills. The research community staff also met with participants regularly to discuss the progress of their research and their classroom activities.
Products. Participants gave a formal group presentation (talk and/or poster session) at the end of the program. These presentations were reviewed by the evaluators for their scientific accuracy and for the extent to which the students appeared to have been engaging in research (as opposed to just activities). Written summaries were also read by staff, who followed up with additional ideas and resources. In a previous project, evaluators visited classrooms to confirm that teachers were, in fact, teaching according to project criteria (in that project, the goal was inquiry-based space science). We found that the correlation between teachers' descriptions of their teaching and the observed teaching was so high that we eliminated this very costly component from future programs. We did not ask teachers to keep portfolios. (Actually, we did in the first cycle, but the teachers were so reluctant to surrender them for even a day, for fear we would lose them, that we gave up.) Other projects should consider some way to encourage teachers to create and evaluate their own portfolios.
Student work. Teachers brought student work to the follow-up sessions and final presentations, where they shared ideas with each other. We could not use state test data because state testing practices change constantly, making it impossible to compare students over time. We did review student work, but we did not have reliable pre-post, long-term, or control-group data. We recommend that all projects devise some way to collect such data.
In general, we recommend that staff be fierce about collecting evaluation forms. We traded meal tickets, parking passes, stipend checks, whatever we had available, for completed forms. The Partnership always asks participants to sign evaluation forms so that we can follow up on comments. We also summarize the evaluations, return the summaries to participants, and talk with participants about changes we have made based on their suggestions. We learned, the hard way, to provide enough time to complete evaluation forms thoughtfully. If time is provided at a meeting, the room should be quiet. For weekly evaluations, we handed forms out Thursday afternoon so participants could fill them in at their leisure. In the most recent versions of the project, we have scheduled the last 20 minutes of each day for quiet, reflective writing about the day's activities. After participants had gone, teaching staff met to read and discuss the journal entries and to complete planning for the following day.
The selection of the evaluation team is described in the Budget section of the Project Description.
Madaus, George F., Walt Haney, and Amelia Kreitzer. Testing and Evaluation: Learning from the Projects We Fund. New York: Council for Aid to Education, 1992.
Stevens, Floraline, Frances Lawrenz, and Laure Sharp. User-Friendly Handbook for Project Evaluation: Science, Mathematics, Engineering and Technology Education. NSF 93-152, 1993.