Bitten by the School Improvement Bug

How much can we learn about school improvement by paying a little attention to something completely unrelated to school improvement?  Let’s find out…

I was sitting on my porch this evening, doing a little of the three R’s – ‘riting, reading, reflecting (sorry, math folks…no ‘rithmetic this time) when I noticed a minor annoyance.  Pesky mosquitos slowly buzzed their way onto my legs, as they are wont to do on a balmy evening in central Virginia.  At least, I assumed there were mosquitos, as I did not see or hear any of them.  All I had to show from their visit were a couple of red bumps on my calves.  Bumps that slowly started to itch.  And itch.  And itch.

[Image: cartoon mosquito, from http://www.how-to-draw-funny-cartoons.com/]

As I scratched my leg to relieve the uncomfortable sensation, I knew that I had found a problem: I needed to stop these bugs from biting me.

As I continued to write, I became more aware of the mosquitos in my surroundings.  I saw a few flying around the porch, and would periodically notice one land on my leg.  That sense of awareness was of course followed by a quick and thorough intervention: with one swift smack, the couple of bugs I caught lay dead at my feet.  While I may end up with a slight bruise at some point (as I think I was a little overzealous with my slaps), “crisis” had been averted.  

Unfortunately, I was not able to catch all of the bugs in this way: before I knew it, two more bite marks surfaced on my legs.  

It would have been easy enough to go inside for my 3R’s time, but I was a little hellbent on enjoying the evening air.  I realized I needed a new plan.  I remembered that we had some Off! spray in the house, and decided to go in and make use of it.  At the same time, I noticed one of our citronella candles next to me, and realized that lighting the candle may help.  Walking inside, I grabbed the bottle of Off! and a box of matches.  After spraying my legs and arms, I proceeded to light the candle and bring it next to my spot of intended repose.

Hopefully, I thought, this plan will work.  How will I know it worked?  Well, for starters, I’ll end up without any new welt marks as a result of these bug bites.

Sure enough, over the next twenty minutes, I was not bitten by a single additional mosquito.  As dusk approached, I celebrated my success, blew out the candle, and headed inside.

But What Does It All Mean, Basil?

Naturally, I went to my “organizational change” place and put this situation into that context: how would this situation have been written using the language of school improvement?

  • GOAL: Stop these bugs from biting me.
  • KPI: Number of mosquito bites on my legs and arms
  • STRATEGY 1: Kill the bugs by slapping them as they reach my legs.  (This was not successful.)
  • STRATEGY 2: Repel the bugs by lighting a citronella candle and spraying Off! on my legs.  (Success!)

What can we learn by focusing on such a mundane event?

What first jumps out at me is the relationship between my goal, my indicator of success, and my strategies.  While it seems like it goes without saying, I arrived at the goal before deciding on the strategies- the strategies then arose naturally as a response to the problem that needed to be addressed.  In instances of planning for school improvement, how often do we fix our eyes on an appealing strategy without considering whether or not it addresses our needs?  Doing so is just like saying, “Hey, I have a can of Off!  Let’s spray it!”

Secondly, while my strategy did change mid-stream, my goals did not, and neither did my indicators of successful goal attainment.  In my mind, indicators are inextricably tied to the goals: they are the measure of progression toward reaching a goal, and would not change just because the strategy has shifted.  It makes me wonder, how often do we change our indicators based on a shift in strategy?

Admittedly, I did not do a very good job of isolating my strategies.  Going forward, I have no idea which strategy actually repelled the mosquitos: the candle or the Off! spray.  At this point, all I know is that to avoid being bitten, I should use both the candle and the Off! spray.  In that sense, how often do we combine multiple strategies in our plans for improvement to the extent that we would be unsure of how to replicate success?

Finally, part of the success of this “plan” was rethinking the implementation strategy.  My first response had been to consider ways of killing the bugs.  Had I continued down that path, I may have ended up with a flyswatter in place of the candle, or a fumigator in place of the Off!  Instead, by rethinking the strategy from “exterminate” to “repel”, I came to a solution that was helpful.  If I wanted a long-term solution in this realm, I could always screen in the porch or something (though I had neither the time, the expertise, nor the desire to do so this evening).  There were myriad other options, each of which may have been just as effective in achieving my goal.  The question is, of all the responses I could have chosen, which of these strategies best fits this moment in time, for this situation?

Hope this post is neither too simplistic nor too esoteric- just thought I would share a couple of musings around school improvement from a guy who is now in desperate search of some Bactine.


Why Average? Alternatives to Averaging Grades

(Part 3 of the “Why Average?” trilogy from the week of Aug 7-14. Here’s Part 1. Here’s Part 2.)

Over the past week, the topic of averaging grades has risen to the forefront of the twitter-verse.  Posts abound around the issues that professional educators have with lumping several disparate values together in the hopes of describing a student’s level of competence or understanding.  (For a reminder of these posts, see Why Average?, xkcd’s TornadoGuard, David Wees’ A Problem with Averages, and Frank Noschese’s Grading and xkcd.)

[Image: math cartoon, from http://kishmath421.pbworks.com/w/page/7782913/Math-Cartoons]

After seeing so many (including myself) highlight the inadequacy of averaged grades, the words of our county’s assistant superintendent come to mind: “If you offer a problem, you’d better be ready to suggest a solution.”  That being said, here are a few alternatives to sole reliance on averaging student data to describe their competence, organized by the issues described in Part 2 of this “Why Average?” trilogy.

Issue 1: Averages of data that do not match intended outcomes do not suddenly describe outcome achievement.

The xkcd comic (along with the correlation to education on Frank’s blog) ties in most closely to this issue.  So often, we as educators assign points (and therefore value) to things that do not necessarily relate to outcome achievement.  Assigning grades for homework completion, timeliness- even extra credit for class supplies- and combining them with outcome achievement data introduces a high level of “grade fog”, where anyone looking at the final grade would have a high degree of difficulty in parsing out the components that led to a student’s grade.

In his article, “Zero Alternatives”, Thomas Guskey lays out the six overall purposes that most educators have for assigning grades:

  1. To communicate the achievement status of students to parents and others.
  2. To provide information students can use for self-evaluation.
  3. To select, identify, or group students for specific educational paths or programs.
  4. To provide incentives for students to learn.
  5. To evaluate the effectiveness of instructional programs.
  6. To provide evidence of a student’s lack of effort or inability to accept responsibility for inappropriate behavior.

Frank Noschese’s blog post highlights these cross-purposes: in the image paired with the xkcd comic, the student’s grade of B seems to come from averaging grades that are meant to provide motivation (“I do my homework”, “I participate in class”), responsibility (“I organize my binder”) and information on achievement (“I still don’t know anything”).
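To make the “grade fog” concrete, here is a quick sketch (in Python, with hypothetical scores and category names invented purely for illustration) of how a respectable-looking average can emerge from categories that say little about achievement:

```python
# Hypothetical gradebook entries for one student (0-100 scale).
# Only "achievement" actually measures what the student knows.
scores = {
    "homework_completion": 95,   # motivation / compliance
    "class_participation": 90,   # motivation
    "binder_organization": 100,  # responsibility
    "achievement": 55,           # evidence of learning
}

overall = sum(scores.values()) / len(scores)
print(f"Averaged grade: {overall:.0f}")               # 85 -> a solid "B"
print(f"Achievement alone: {scores['achievement']}")  # 55 -> not passing
```

The “B” looks informative, but three-quarters of it is behavior, not learning.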

The simple answer to this issue would be to stop averaging grades for things like homework completion, class participation, and responsibility together with values for student achievement.  Instead, make grades specifically tied to meeting standards and course objectives.  Of course, if it were that easy, we would all be doing it, right?  I guess the bigger question is, How do we provide the desired motivation and accountability without tying it to a student’s grade?  Guskey’s article suggests several ideas for how one might differentiate these cross-purposes (e.g. a grade of “Incomplete” with explicit requirements for completion, separate reports for behaviors, etc).  Other alternatives from my own practice:

  • Report non-academic factors separately from a student’s grade. Character education is an important part of a student’s profile, though it does not necessarily need to be tied to the student’s academic success.  One way of separating the two is simply to report them separately.  I had a category in my gradebook specifically for these kinds of data, though the category itself had no weight relative to the overall grade.  Providing specific feedback to students (and their parents) on topics of organization and timeliness, separately from achievement grades, can go a long way toward getting behaviors to change.
  • Set “class goals” for homework and class participation.  Sometimes, there is no better motivator than positive “peer pressure”.  One of the bulletin boards in my classroom had a huge graph set up, labeled, “Homework completion as a function of time”.  Each day, we would take our class’ average homework completion, and put a sticker on the graph that corresponded to that day’s completion rate for the class.  We set the class goal as 85% completion every day, and drew that level as the “standard” to be met.  (A rough sketch of this bookkeeping appears just after this list.)  As a class, if we consistently met that standard over the nine-week term, there was a class reward.  One unintended consequence: each class not only held itself to the standard, but also “competed” with other class periods for homework supremacy!  (Of course, there was that one class that made it their mission to be the worst at completing homework…goes to show that not every carrot works for every mule.)
  • Make homework completion an ‘entry ticket’ for mastery-style retests. If homework’s general purpose is to promote understanding, one would assume a correlation between homework completion and achievement.  While I ‘checked’ for homework completion on a daily basis and recorded student scores under a “Homework” category, that category had no weight in the student’s overall grade.  Instead, once the summative assessment came up, those students who did not reach the sufficient level of mastery needed to show adequate attempts on their previously assigned work before we could set a plan for their re-assessment.  You may think that students would “blow off” their homework assignments in this situation- and some did, initially.  However, once they engaged in the process, students did what was expected of them.  Over time, there was no issue with students being unmotivated to do their homework as necessary.
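For the “class goals” idea above, here is a minimal sketch of the bookkeeping behind the bulletin-board graph (Python, with a made-up class size and completion counts); the 85% line is the “standard” drawn on the chart:

```python
# Hypothetical daily homework-completion counts for one class of 24 students.
class_size = 24
completed_by_day = [22, 20, 19, 23, 24]  # students who turned in homework each day
goal = 0.85                              # the class "standard" drawn on the graph

for day, done in enumerate(completed_by_day, start=1):
    rate = done / class_size
    status = "met" if rate >= goal else "missed"
    print(f"Day {day}: {rate:.0%} completion ({status} the 85% goal)")
```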

Issue 2: Averages of long-term data over time do not suddenly describe current state understanding.

This issue is a little trickier to manage.  On his blog Point of Inflection, Riley Lark summed up his thinking on how best to describe current understanding from a collection of long-term data in a post entitled Letting Go of the Past.  In the post, he compares straight averages to several other alternatives, including using maximums and the “Power Rule” (or decaying average).  I strongly suggest anyone interested in this topic read Riley’s post.  Riley has since created ActiveGrade, a standards-based gradebook on the web that “[makes] feedback the start of the conversation- instead of the end.”
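As a rough illustration of the alternatives Riley compares (a sketch only, not his implementation; the scores and the 0.6 weight are made up), here is how a straight mean, a maximum, and a decaying average treat the same improving sequence of scores on one standard:

```python
# Hypothetical scores on one standard, oldest to newest (0-4 scale).
scores = [1, 2, 3, 4]

def straight_mean(xs):
    return sum(xs) / len(xs)

def decaying_average(xs, weight=0.6):
    # Each new score counts for `weight` of the grade, and the running grade
    # keeps the rest -- one common form of the "decaying average" idea, where
    # newer evidence matters more than older evidence.
    grade = xs[0]
    for x in xs[1:]:
        grade = weight * x + (1 - weight) * grade
    return grade

print(f"Straight mean:    {straight_mean(scores):.2f}")    # 2.50
print(f"Maximum:          {max(scores):.2f}")              # 4.00
print(f"Decaying average: {decaying_average(scores):.2f}") # ~3.38
```

The straight mean punishes the student forever for the early 1; the maximum forgets any later slide; the decaying average leans toward the most recent evidence without ignoring the rest.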

For some other resources for ideas:

– – – – – – – – – –

At the heart of the question “Why Average?” is a push to purpose.  While none of the ideas described in this trilogy of posts are inherently right, at the very least, I hope that it has brought readers some “jumping-off points” on how to ensure that their methods match their intended purpose.  We owe at least that much to our students.  If you have other resources, ideas, or questions that would extend the conversation further, please share them by all means.

Why Average? on the Minds of Many

(Part 2 of the “Why Average?” trilogy from the week of Aug 7-14. See Part 1 here. See Part 3 here.)

So on Sunday, I posted a comic about how goofy it can be to average long-term data to describe current state measurements.  Imagine my surprise this afternoon upon checking the RSS feed to see this new comic on xkcd:

[Image: xkcd’s TornadoGuard comic, from http://xkcd.com/937/]

Earlier today, physics teacher and #sbar advocate Frank Noschese paired the xkcd image with an educational correlate on his Action-Reaction blog:

[Image: Frank Noschese’s grading version of the xkcd comic, from http://fnoschese.wordpress.com/2011/08/12/grading-and-xkcd/]

While this comic tackles a different problem with averaging than does my own post, it seems like concerns with averaging as a description of data are on the minds of many.  (To get an idea of the scope of the discussion, check out the conversations happening in the comment boxes on posts by Frank Noschese and David Wees, respectively.)

Our comics highlight two different but very real issues with trying to describe such a complex thing as learning with such a simple thing as one averaged value:

  • When we take values that do not match intended outcomes (a student’s knowledge, understanding, and skills acquisition) and average them together, the new number does not somehow suddenly describe outcome achievement.
  • Even if we do happen to measure the outcomes described above, but those measures are taken over time and then averaged together, the new number does not somehow suddenly describe current state.
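To put a toy number on that second issue (scores invented purely for illustration): a student whose marks on one standard over a grading period are 40, 60, 80, and 100 has clearly arrived at mastery, but the average tells a different story.

```python
# Hypothetical scores on one standard over a grading period, oldest to newest.
scores = [40, 60, 80, 100]

average = sum(scores) / len(scores)
print(f"Average for the term: {average:.0f}")  # 70 -> reads as mediocre
print(f"Most recent evidence: {scores[-1]}")   # 100 -> current mastery
```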

Have you seen any other visuals that help to describe these problems with averaging data?

A Response to Data-Informed Decisions

Earlier this evening, I read a colleague’s blog post discussing the concept of making data-informed decisions as opposed to data-driven decisions.  It’s a thoughtful post, one I hope you will read in depth.

The post brought out a response in me that unearthed some Sherlock Holmes quotes I thought I had forgotten. (Read the comment, if you’re interested in the quotes themselves.)  There are a couple of images that seem to sync up well with the idea from the response, so I figured I’d put them up here for posterity’s sake:

[Images: three hand-drawn diagrams of the decision cycles described below]

These images are meant to be viewed in succession, almost as an evolution.  The 1st image depicts the concept of a data-driven decision as Steven describes it in his blog: data leads to our decision to act in a certain way, and those actions lead to new data.  What this idea is missing- and what Steven asserts- is the process of thoughtful reflection that occurs when you consider not just the data but also the perceived reasons for the data.  In the 2nd image, the data has informed those reasons, and those reasons then drive the decision on how to act.

The 3rd image adds a level of balance into the system as drawn from similar diagrams in Senge’s Fifth Discipline.  In this cycle, our decisions are still driven by the reasons for the data, but here the data is the perceived gap between the actual results and those we expected.  In other words, we’re not necessarily asking ourselves the question, “Why do we see the data we see?” but rather, “What is the reason for the difference between what we see and what we thought we’d see?”
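To make that balancing cycle a little more tangible, here is a toy sketch (all numbers and the proportional “adjustment rate” are invented; this is in the spirit of the 3rd diagram, not anything from Senge’s book or Steven’s post): the “data” each cycle is the gap between expected and actual results, and that gap drives the next decision.

```python
# A toy balancing loop: the perceived gap between expected and actual results
# drives each decision, and acting on it changes the next round of results.
expected = 80.0        # the result we thought we'd see
actual = 50.0          # the result we actually see
adjustment_rate = 0.5  # how strongly each decision responds to the gap (invented)

for cycle in range(1, 7):
    gap = expected - actual         # "What is the reason for the difference between
                                    #  what we see and what we thought we'd see?"
    action = adjustment_rate * gap  # the decision, driven by the reason for the gap
    actual += action                # acting changes the next round of data
    print(f"Cycle {cycle}: gap = {gap:5.1f}, new actual = {actual:5.1f}")
```

Over a few cycles the gap shrinks, which is exactly the balancing behavior the diagram is meant to capture.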

Given time, there would probably be several more iterations of this image- I hope your thoughts will help to continue to shape it into something better than it is today.  Thanks again to Steven & Rich @ Teaching Underground for inspiring the response.