Looking to better understand quality assurance

Quality assurance. Value-added. Measurement of outcomes. These are more than buzzwords; they’re likely to shape our practice. They’re also related to the Megan Oakleaf ILU day next week, “Building the case for librarians in the classroom: Communicating the value and assessing the impact.”

I went looking for background in the following chapter – it was on the reading list for an OISE course on higher education (which I ended up not taking), so I assumed it was a good place to start.

Clark, I. D., Moran, G., Skolnik, M.L., & Trick, D. (2009). Chapter 5: The impact of quality and accountability measures on system responsiveness. In Academic transformation: The forces reshaping higher education in Ontario (pp. 113-136). Kingston, ON: McGill-Queen’s University Press.

The authors provide an overview of the discourses and practices surrounding quality and accountability measures, both within higher education generally and in Ontario. They briefly survey their emergence; discuss the concept of quality and the practice of quality assessment; describe the Ontario scene; and discuss the influence of quality and accountability measures on the behaviour of colleges and universities.

The chapter is too dense to easily summarize. Some key points:

  • The broad purpose of these measures and structures is to allow the various stakeholders (governments, the public, the institutions, the students, etc.) to “tell whether students are getting a good education” (p. 114).
  • The notions of quality and quality measurement are subjective, often politicized, and often co-opted for other purposes, e.g. funding, student recruitment, or defence of the status quo (this is not a surprise).
  • “The literature on quality assurance emphasizes the superiority of the outcomes, value added, and student experience conceptualization of quality, and accordingly institutions and provincial agencies would be well advised to embrace these rather than the resources and admissions selectivity notions of quality” (p. 136). So, for example (and as I understand it), this creates the impetus to measure what students learn in our classes (an outcomes measure) and not simply how many classes we teach (an input measure).
  • More research is needed. However, existing research may be ignored when it doesn’t fit with pre-existing notions of quality. For example, research has generally found no correlation between a professor’s conduct of research and their teaching effectiveness; the authors point to the belief in such a correlation as an “enduring myth” (p. 131).

The authors don’t always provide the clarity I was hoping for, particularly in the section on Ontario, where the purpose of the different agencies, their influence, and their relationships to one another aren’t always made clear. A chart would have been helpful. The area is a complicated one, however, and I would probably benefit from circling back to this chapter after more exposure to the topic.
