Assessments and Audiences

February 15, 2014 · museums, research, assessment

I keep coming back to a recent post by Nina Simon: Arts Assessment: Let’s Stop “Proving” and Start Improving.

I think Nina really hits the nail on the head. When I think about what kinds of assessment will help me improve my own practice as a teacher, it rarely looks like the quantitative educational data that funders expect. Nina's post is a reminder to ask questions that make sense to us as educators, because that is where we can genuinely improve our own practice. Summing up the post, she writes,

When we improve our own work, we prove our value.

There's a bit of circularity in that statement, or perhaps a self-serving bias of sorts. Educators in the many kinds of spaces that shape our learning ecosystem often do need to explicitly help others understand the value of their work. On the other hand, it is certainly true that a well-rounded, "whole individual" education relies on a diversity of voices and kinds of learning, and when we improve our own work, we improve our role within that larger ecosystem.

Nina mentions the Crystal Bridges study, which I have mentioned before. Though the principal audience of this study may indeed have been policymakers and funders, I find it worthwhile to look back to the Critical Thinking Skills instrument that formed the core of the research methodology. For whom was this rubric developed? Museum educators.

Educational research and evaluation are complex topics to tackle, but it strikes me that they are most valuable when rooted in teachers' own practice of reflection. One implication is that there is no one-size-fits-all methodology. I hope this is a conversation that can continue between educators and education advocates of all stripes.