Time to confront the tyranny of surveys

Posted By TEU on Dec 15, 2014


A guest blog from University of Canterbury branch president Jack Heinemann[1]:

Those who use surveys to measure goal achievement are obligated to provide evidence that –

  • accomplishment of the goal can be measured using surveys; and
  • the measurement creates no unacceptable harm to those being surveyed.

In the case of student evaluation of teaching (SET) surveys, I am increasingly of the opinion that their usefulness is marginal and their potential to cause harm is real. Pending production of credible evidence to the contrary, I advocate the immediate removal of any compulsion to participate in SET surveys within the employment relationship[2]. Where tertiary staff wish to continue to use surveys, the results should be their private information, and the teacher should retain control over whether or not to share that information with others, including their managers.

After 20 years of near-continuous SET and course surveys of my own, how have I recently come to this view? A few of my colleagues and I reviewed the literature on teaching quality assessment practices. We formed the opinion that SET is an important part of monitoring teaching quality, but only when it is part of a multifaceted programme of measurement, is designed to measure progress toward clearly articulated and understood goals, and addresses goals that can actually be measured by student surveys.

However, its use in the tertiary sector fails to fulfil these prerequisites. As such, I am concerned that these surveys may degrade student achievement even as they promote the student experience, and may harm staff.

For example, a compulsory and particularly influential question on SET surveys used at my university can be paraphrased as[3]: “How effective is Jack as a teacher?” What? Here we have a group of people who are experiencing the material for the first time in their lives, who have no objective means to determine whether they learned more with me than without me, or with me compared to some other hypothetical teacher, and who are reflecting (or not) on an experience that has only recently been completed. In one of the most impressive studies of SET surveys I have seen[4], the researchers found that the answer to this kind of question correlated significantly with the meteorological conditions on the day of the survey, rather than with students’ ability to anticipate their preparedness for their next related course.

I research and teach in an evidence-based discipline in a leading university. Yet my impression is that little more than faith and hearsay lie behind the advocacy of SET surveys by most members of the academy.

There is increasing and credible research indicating that when groups of learners of variable ability are given a disproportionate voice in the review of the learning environment, they undermine the ability of lecturers to improve outcomes for learners. It is much like rewarding a fitness instructor who uses mirrors that make you look skinny instead of encouraging you to exercise.

A harmless quality system, not a harmful quantity system

I don’t have evidence linking SET surveys to adverse impacts on staff in New Zealand. Nevertheless, I believe that the pressures SET surveys create, and the communication of their results to a significant number of colleagues on campus, have psychological impacts. I argue that in the absence of evidence that this impact is benign, it is a responsible employer’s obligation to halt and review their compulsory use.

As an academic working at a university, my routine annual exposure to surveys roughly matches the number of courses in which I teach (about six SET surveys), with course surveys roughly doubling that number to, in my case, as many as twelve. In addition, I’m evaluated by the PBRF, periodic promotion exercises, annual performance and development reviews, and ad hoc meetings with my employer’s representatives. In the context of my work, the actual outputs are further reviewed, such as by anonymous referees in the course of publishing my research or by external academics who periodically agree to review exam scripts that I have marked.

There would be few professionals in any other sector subject to this frequency of review by an audience with such a diverse set of interests and motivations, much less through the use of tools that have never been validated as accurate measures of the quality of my teaching.

By contrast, in the 20 years that I have worked for a university, I have only twice been asked to take part in surveys of my impressions of my senior managers. I am not even sure whether those two surveys were required by the governing Council. Even if they were, the surveys were not dedicated to each individual manager, but to groups in the managerial chain. As a form of manager review, they would be much less revealing than the dozen student surveys and cyclic PBRF exercises in which I am expected to regularly participate. If we regard mass surveys as a good way to measure quality, then why is the tool effectively only used on academics?

If SET surveys remain a component of staff evaluation, and compulsory evidence for promotion, then all tertiary institutions should immediately make equivalent surveys of vice-chancellors, chancellors, pro-vice-chancellors and so on, completed by all staff, a compulsory and annual part of their employment relations. Moreover, the results should be released through a series of confidentiality agreements with different members of the campus community every year so that, like academics who parade their scores through promotion committees with changing membership, these managers eventually find that a significant number of people know their scores.

What could replace the SET survey?

Nothing can replace student surveys. They are a valuable part of a multifaceted programme of teaching quality review.

The SET survey can be used to measure achievement toward particular teaching goals, but nothing as vague and all-encompassing as “facilitating my learning experience”[3]. I equally question whether they can be used to answer seemingly straightforward questions such as: “The lecturer/tutor was able to communicate ideas and information clearly,” when even this depends on students’ preparation, their competence with discipline-specific language, and their achievement in prerequisite courses, all of which were likely out of the lecturer’s control.

Therefore, the multifaceted programme of measurement must be led by clear goals. Those goals must be expressed in a way that allows progress toward them to be measured. Finally, the means of measuring them must be achievable, practical and validated.

Until that happens, remove the compulsion to communicate the results. Remove any stigma attached to opting not to communicate the results. There must not be any requirement that staff justify a decision to keep results private. If, during the course of promotion or performance exercises, attention is drawn to the absence of SET survey results, this act should be treated as an irreconcilable conflict of interest, and those with such conflicts should be immediately dismissed from further participation in the process.

This may seem like an extreme interim response. However, the pain of these ad hoc procedures might just be high enough to motivate tertiary institutions to research, test and implement proper multifaceted teaching quality programmes based on worthwhile goals with measurable endpoints.

From here

A legitimate programme for assessing teaching quality must include some outcome-based measurements. SET surveys are process-based measurements, as are peer review and teacher self-evaluation. Adding up many process-based tools will never replace the need for an outcome-based tool. To measure outcomes, again, a clear outcome goal must be identified, such as: “has the lecturer contributed to an improved performance of a cohort of students taking related courses in other time periods?”.
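To make the idea concrete, here is a minimal sketch of what such an outcome-based comparison might look like, written in Python on entirely invented data. The lecturer names, grade points and the simple comparison of means are my illustrative assumptions, not an established methodology; any serious version would need to control for cohort composition and other confounders.

```python
# A crude outcome-based comparison on invented data: mean follow-on
# course performance, grouped by who taught the prerequisite course.
import statistics

# (lecturer_of_prerequisite_course, grade_points_in_follow_on_course)
# All names and values are invented for illustration.
records = [
    ("Lecturer X", 6.5), ("Lecturer X", 7.0), ("Lecturer X", 5.5),
    ("Lecturer Y", 5.0), ("Lecturer Y", 6.0), ("Lecturer Y", 4.5),
]

# Group the follow-on grades by prerequisite lecturer.
by_lecturer = {}
for lecturer, grade in records:
    by_lecturer.setdefault(lecturer, []).append(grade)

# Report the mean follow-on performance for each lecturer's cohort.
for lecturer, grades in sorted(by_lecturer.items()):
    print(f"{lecturer}: mean follow-on grade {statistics.mean(grades):.2f} "
          f"(n={len(grades)})")
```

Even this crude cut asks about outcomes rather than impressions, which is the point of the distinction above.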

As a minimal first step, why not correlate SET survey responses with demographic information about the student submitting the survey? Comments from, say, “A”-level students can then be contrasted with those from “B”, “C”, “D” and “E” levels, and these comparisons could evolve with each subsequent semester that the student remains engaged in study. A sketch of such a breakdown follows.
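Here is a minimal sketch of that grade-band breakdown, again in Python with invented data; the band labels and the 1–5 agreement scores are illustrative assumptions rather than anything drawn from a real SET instrument.

```python
# A minimal grade-band breakdown of SET responses, on invented data:
# mean agreement score and response count per respondent grade band.
from collections import defaultdict
from statistics import mean

# (respondent_grade_band, 1-5 agreement score on a SET item) -- invented.
responses = [
    ("A", 4), ("A", 5), ("B", 4), ("B", 3),
    ("C", 3), ("C", 2), ("D", 2), ("E", 1),
]

# Group the scores by the respondent's grade band.
by_band = defaultdict(list)
for band, score in responses:
    by_band[band].append(score)

# Report mean score and count per band, from "A" down to "E".
for band in "ABCDE":
    if band in by_band:
        print(f"Band {band}: mean score {mean(by_band[band]):.1f} "
              f"(n={len(by_band[band])})")
```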

Another possibility is to hold focus groups with students. They could be given a detailed rationale for why the lecturer feels doing things a particular way is important, and then asked to comment on whether the lecturer’s plan worked or how it could work better, rather than whether they liked it.

What do you think? What would you recommend as a cohesive set of tools that would yield useful information about teaching quality? I welcome comments from fellow academics who are also interested in working toward sector-wide teaching quality assessments that are meaningful, help us to achieve our teaching goals, benefit learners, and whose adverse consequences for teachers can be mitigated. Could we form a working party of volunteers from different discipline perspectives under the umbrella of TEU?

Footnotes

  1. Note: this blog is an individual TEU member’s perspective and does not represent official TEU policy.
  2. Or if compulsory, remove any obligation to communicate the scores to anyone, including any employer representative.
  3. The actual question is: “Overall, the lecturer/tutor was effective in facilitating my learning experience.” What do students understand “facilitating my learning experience” to mean? Is it “my lecturer helped me achieve more ‘learning’”, “my lecturer created a sense in me that I learned more”, or “my lecturer allowed me to feel comfortable not asking myself if I could have done better”? I have seen no local research showing that the terminology used is clear to both lecturers and students, or even that the range of ambiguous meanings is the same for lecturers and students.
  4. Braga, M., Paccagnella, M., and Pellizzari, M. (2014). Evaluating students’ evaluations of professors. Economics of Education Review 41, 71–88.