Friday, July 10, 2009

who done it?

I love a good mystery. Except of course when it involves the location of my car keys. Dick Francis and P.D. James are two of my favorite authors. The appeal for me in the works of Dick Francis is that his characters are often just average Joes who find themselves in a predicament that needs attention. After some blundering juxtaposed with serendipitous sleuthing, the resiliency of the protagonist results in a successful resolution. Much of my enjoyment in reading these stories comes from the sense of uncertainty. A good mystery plot should keep you guessing as to what might happen next and who the responsible party is.

However, there should be no mystery in education when it comes to seeing whether reform initiatives are being implemented by the staff. There may be myriad reasons why a particular person is not practicing the reform, but regardless of the reason, the administrator needs to know whether it is being practiced (and can then determine what interventions are necessary). The issue therefore centers on how the fidelity of the program will be monitored.

Fidelity.

Fidelity is my new favorite catchword in educational reform (usually that phrasing would signal sarcasm, but I really do like this term). It is one of my favorites because it is well worth the effort to create the infrastructure necessary to support it. Thomas Guskey, in Evaluating Professional Development (2000), repeatedly stresses the value of monitoring professional development initiatives.

Some reforms are content-based; others are more process-oriented and involve the interaction between teacher and student. The first question to be asked is: Can the reform be evidenced in a student performance task? If it can be assessed via the review of an artifact, then those artifacts can become items for analysis in Professional Learning Communities (see last week’s blog for tips on how to structure professional dialogue in a PLC). If the reform is concerned more with a particular classroom practice, such as student-teacher dialogue or the manner in which the teacher references the Essential Understanding for the unit as part of the delivery of the lesson, then monitoring it requires a visit to the classroom.

Walk-through visitations are one technique for monitoring fidelity. I admit that I was originally skeptical about the use of walk-through observations because it is a technique that is ripe for misuse or misapplication. For the data to have significance, there need to be multiple visits. My worry is that the daily emergent priorities that usurp the intention to visit classrooms will keep the number of visits that actually occur to a minimum, yet conclusions will still be drawn from that limited data pool. The staff will quickly resent the process if it is done poorly.

For fidelity to be more than a catchword, the approach to monitoring it must be user-friendly. User-friendly. If the process is not efficient and user-friendly, then it will fade away or its results will be misapplied. Either outcome is disheartening to staff. So spend time on the design of the observation form, pilot its use, and then redesign the form. Here is the mantra: Monitor the fidelity consistently, efficiently, and in a manner in which the data can be displayed graphically.

Consistently, as in everyone knows what the topics are and there is agreement as to what the observational evidence needs to be. Efficiently means the visit can be done in five minutes or less, so use a checklist. Sorry if some purists are aghast at the mention of a checklist, but the reality is that it helps maintain the required consistency and it keeps the process user-friendly. I don’t just mean user-friendly for the person conducting the visit; the person being observed should also know the exact nature of the criteria. If the criteria are articulated in a checklist/scoring rubric format, then everyone knows beforehand what the evidence needs to be for each rating. Each specific item being targeted should have a spectrum of criteria that includes the optimum level of practice, e.g. daily objective absent, daily objective posted, objective referenced during class by the teacher, objective referenced in context to lesson/unit by student(s). The data garnered from the rubrics should be presented in a graphic format that has visual impact in order to easily trigger conversation.
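Purely as an illustration (this is not from Guskey or any particular walk-through guide), here is a minimal sketch in Python of how ratings from several five-minute visits might be tallied against that daily-objective spectrum and rolled up into a simple chart. The visit data, labels, and function names are all hypothetical.

from collections import Counter

# The agreed-upon spectrum for one checklist item, ordered from absent to optimum practice.
OBJECTIVE_SPECTRUM = [
    "daily objective absent",
    "daily objective posted",
    "objective referenced during class by the teacher",
    "objective referenced in context to lesson/unit by student(s)",
]

# Hypothetical ratings gathered over several five-minute walk-through visits.
visits = [
    "daily objective posted",
    "daily objective posted",
    "objective referenced during class by the teacher",
    "daily objective absent",
    "objective referenced in context to lesson/unit by student(s)",
    "objective referenced during class by the teacher",
]

def tally(ratings, spectrum):
    # Count how many visits landed on each level of the spectrum.
    counts = Counter(ratings)
    return [(level, counts[level]) for level in spectrum]

def text_bar_chart(tallies):
    # One bar per rating level, so the spread of practice is visible at a glance.
    for level, count in tallies:
        print(f"{level:<55} {'#' * count} ({count})")

text_bar_chart(tally(visits, OBJECTIVE_SPECTRUM))

The same tallies could just as easily feed a bar chart in a spreadsheet; the point is that the form’s fixed spectrum of criteria is what makes the roll-up, and the conversation it triggers, possible.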

A well-crafted observation form is the key to keeping it user-friendly. Then the only mystery is whether Colonel Mustard used the candlestick in the conservatory or the kitchen.

Make a good day,
Tod

PS. Link to article/interview with Guskey in The Evaluation Exchange: A periodical on emerging strategies in evaluation. http://www.hfrp.org/evaluation/the-evaluation-exchange/issue-archive/professional-development

PPS. The Connecticut walk-through guide has an extensive list of topics, but the form is not user-friendly and does not translate well into a graphic data record. So it is an example of what NOT to do: http://www.sde.ct.gov/sde/lib/sde/pdf/Curriculum/Walkthrough_Protocol_Guide_2008.pdf

PPPS. Interview with Dick Francis: http://www.eyeonbooks.com/fiction/0901/dickfrancis.html and a synopsis of each of his books: http://home.ca.inter.net/~jbeaumont/francis/
