All Posts Tagged With: "Student Evaluations"
Prof. Pettigrew: student evaluations won’t help
A recent report from the Ontario Auditor General Jim McCarter has got people talking about student teaching evaluations again. Hoo boy.
McCarter is concerned that evidence of teaching ability is not being taken into account when it comes to granting tenure and promotion to faculty. It’s a legitimate concern in theory. The problem is that this report takes student evaluations as a key method by which quality teaching should be measured. That’s trouble.
As the report rightly points out, the research on the usefulness of student evaluations is a subject of much disagreement. In fact, it’s even more hotly contested than the AG’s report admits. The Canadian Association of University Teachers (CAUT) insists, for instance, that such surveys cannot be taken as a measure of teaching effectiveness.
CAUT may be trying to protect the jobs of its members. Still, student evaluations suffer from the outset from a fundamental flaw: they often fail to meet a basic standard for any evaluation, namely that the evaluator should be qualified to evaluate. More specifically, the evaluator should be an expert on the subject, should be motivated to take the evaluation seriously, and should be a disinterested third party.
There is no good idea that can’t be killed by administrators and governments
Last year, around this time, I found myself complaining about student evaluations. Not about what the students actually said, mind you, but about the very idea of them. Well, it’s that time of year again, and my general view of the situation has not improved. As for the specifics, things have become decidedly worse. Let me explain.
It used to be the case here at CBU that individual departments could create their own course evaluation forms, provided they were approved by the Dean. In fact, strictly speaking, individual faculty members could create their own forms, again, if the Dean signed off on them. Such a system did not fix the many problems inherent in student evaluations, but it helped mitigate them by tailoring the form to the discipline and by providing individual faculty with more input into their own department’s evaluation process.
In university life, however, there is no good idea that can’t be killed by administrators and governments. Our administration, you see, hated the old system since it was too complicated and made for a lot of work for secretaries. Still worse, the Maritime Provinces Higher Education Commission, one of whose functions is to give the public the impression that it is keeping a close watch on the quality of university instruction in the three Maritime provinces (yes, three; Newfoundland is not part of the Maritimes), strongly suggested that we adopt one standard evaluation form to be used throughout the entire institution. Their idea is that the new instrument will allow for better comparison of instruction across the whole range of departments and schools. This, of course, ignores the fact that allowing for comparisons is useless when the data is meaningless. Indeed, the new administrator- and civil-servant-friendly form is so generically bureaucratic that no real accountability could possibly come from it.
Some of the questions (or rather statements with which students agree or disagree) are not terribly wrong-headed, just anodyne. For instance:
The instructor spoke clearly and audibly.
This one has a bit of a trap in it, since, in my experience, students seem unusually sensitive to English spoken with a foreign accent, and will claim not to be able to understand a prof whose accent is noticeable, even if said professor’s English is perfectly good. But apart from that issue, the question relates to a minor mechanical aspect of teaching, and making it one of twenty or so identically formatted questions gives the impression that it is just as important as far weightier matters. You can almost hear the tenure committee debate: “Well, Professor Facile has not completed his doctorate, nor has he published any articles, nor is his teaching innovative, but on the other hand his students say he speaks so clearly!” I’m not saying this type of question should not be included, but technical matters like this should be set apart from more substantial ones, or at least weighted appropriately.
Other items are so vague that it’s hard to see how students could reasonably evaluate them, and harder to see how instructors could improve on them even if they wanted to. For instance:
The instructor showed a genuine concern for students’ progress and was approachable.
It’s not at all clear to me what is meant by “genuine concern” here. Is the instructor supposed to phone students at home and ask how they are doing? Fortunately, there is no question that says “My instructor creeped the hell out of me.” And how would a student distinguish genuine concern from fake concern? I have a feeling that many students will take low grades and high standards as evidence of a lack of “concern” on the part of the instructor. After all, if Professor Highbar was so concerned about my success, he would have passed me, right?
And what does “approachable” mean? Many people find intelligence intimidating — does that mean we should avoid hiring very intelligent professors for fear that they will seem unapproachable? Are women usually more approachable than men? And how would a professor who scores poorly on that question aim to be more approachable? I have known of professors who have been so approachable that they have become close friends and confidants with their students, to the point of being unprofessional. You can’t fairly grade the papers of your new BFF.
It just gets worse from there. I mean, look at some of these:
This course has improved my critical reading and writing skills.
The readings were useful in achieving the goals of this course.
The instructor was fair in measuring student performance.
All of these questions are dubious because they ask students to evaluate things they are in no position to evaluate. It would be like evaluating physicians by asking patients whether they were happy with their diagnoses. But because so much weight is placed on these evaluations, professors have a strong interest in pandering to them nonetheless. And yet, unless we are very naive, we must acknowledge that students who rate their professor’s grading as unfair are really complaining about their grades being too low. How many students who get an A think the grade was unfair? How many students who get an F think the grade was just right?
None of this would matter if it didn’t matter. But when Professor Noobi sees that her students say the readings in her course are not “useful,” she knows that she can more easily secure her position and livelihood by reducing or eliminating those readings, whether or not they were intellectually justified. Good professors, of course, resist such temptations, but when one’s career is at stake, small compromises (perhaps made even unconsciously) are all too easy, and year after year, they add up. Or rather, drag down. In any case, professors simply should not be put into such positions.
The sad part is that with a little courage and creativity, evaluations could be made at least somewhat useful. When this new form was in its draft stages, I suggested a large number of questions to the committee. One of them was:
My instructor was funny.
That question was rejected on the grounds that not everyone is naturally funny. But what about those of us who are not naturally approachable? Here are some other questions I suggested that did not make the cut:
The instructor seemed to hold students to high standards.
The instructor treated students as though they were responsible for their own success in the course.
The instructor stressed thinking skills, not just knowledge of material.
Notice, by the way, that that last one is not the same as the one about “critical reading and writing” I criticized above. Students can fairly comment on what the instructor seemed to stress, but what they learned may not have anything to do with the quality of the instruction provided. Here’s another good example:
The instructor encouraged students to see how complex the issues raised in the course were.
I like this question because evaluations often ask students whether the instructor is clear, and students often praise their instructors for making “everything clear and easy to understand.” But I worry that what students perceive as clear and easy to understand is often what a professor would consider leaving out the hard parts and glossing over the difficulties. If a university course is well done, there should be a lot of things that are decidedly not easy to understand.
As I said, none of these suggestions was accepted by the committee that created the new list, and to be fair, I’m sure they were trying to balance suggestions from all sides, so it’s really no wonder that the result is what it is: outrageous only in its timidity, and offensive only in its extreme inoffensiveness. But what is the solution? Let people create their own forms, provided they are approved by the Dean?
Oh, right. That’s what we used to have.
Ratemyprofessor.com is useful for students, but not schools
Michael Hirsh still remembers his worst university professor. “The guy rambled, didn’t give an outline or explain how he graded, didn’t explain expectations. I got a C,” said Hirsh, who’s in his last year as an economics student at a Toronto-area university.
“It messed up my GPA. A professor can make or break a course,” he explained. “Sure we gave the university our evaluations, but I wanted to warn other students.” Hirsh decided to take his “scathing” comments to ratemyprofessors.com where evaluations are available to anyone, not just university administration.
Official student evaluations have been part of the student experience for decades, but until the advent of the Internet, students had to rely on friends for professor recommendations, and on gossip. Almost no universities provide their official evaluations to students. Ratemyprofessors.com boasts more than 6,000 schools and over one million evaluations. Other sites, such as Professorperformance.com, have also launched.
The University of Toronto is one institution whose administration is aware of the popularity of online evaluation sites; it is proposing its own online system, expected to go live in September 2011. According to Prof. Edith Hillan, vice-provost of faculty and academic life at the university, sites such as ratemyprofessors.com are “useful for students, but we wouldn’t use them from an institutional perspective to gather information.”
“Often you’ll get comments or scores at the extreme end of the spectrum,” she said.
At Ryerson University, the vice-provost for faculty affairs, John Isbister, doesn’t object to online evaluation sites, recognizing that they are useful for students who can’t access the official evaluations that students give to the administration after a course is complete. “We do care about student experience, but it’s part of the faculty’s collective agreement that their evaluations remain confidential,” said Isbister.
Both universities say external online evaluations carry no weight when the administration reviews faculty for promotion, tenure, salary and related matters. The fact that official evaluations remain confidential is one reason students flock to the online sites as an alternative. “The official evaluations are hugely ineffective — they are good for the university to decide on salaries, but not good at helping us decide what courses to take,” said Ben White, a third-year engineering student at the University of Toronto. “I mean I never see them.”
Online evaluations can be beneficial for students who are either new or not connected to the university community. “They are really helpful and useful — I came here not knowing anyone,” said Yael Sperkut, a first-year humanities student at U of T. “I used it in high school — with ratemyteachers.com — and when I was mad or thought they weren’t doing their jobs, sure I posted — a lot.”
Hillan said the University of Toronto’s proposed online evaluations will have “provostial guidelines” — to oversee the system to make sure it’s used appropriately. These proposed evaluations don’t do enough for White. “Even the ones done through the student union are slightly edited if they’re too profane. I’m getting the straight truth when I go online,” he said.
Despite the popularity of sites such as ratemyprofessors.com, not everyone is a fan of online evaluations. “I don’t want to say something mean about my professor. I might be the only one who thinks that,” said Tamara Milavic, a first-year student at the University of Toronto. Milavic also said she knows which professors to avoid and which come recommended because her older sister studied in the same department.
“I have enough people telling me what to do,” she said. Second-year U of T science student Katie Spizarsky had her own reasons for never doing any evaluations, whether through the university or online. “Friends are more reliable,” she said.
The Canadian Press
Court rules that the academic senate has jurisdiction over student evaluations
Students across British Columbia scored a quiet victory last month as the highest appellate court in the province ruled in favour of the University of British Columbia, and against the faculty association, for the right to overhaul policies related to student evaluations, including posting results online. At issue was whether UBC’s faculty union, through their collective agreement, had control over evaluations, or if UBC’s academic Senate did. The case has resulted in direct benefits for UBC students and raises questions over whether the faculty association is concerned with teaching quality.
The website teacheval.ubc.ca, where evaluations may be viewed, has been of benefit to students. Similar to controversial third-party websites such as www.ratemyprofessor.com, but more formalized and scientific, the UBC site provides information on popular professors and what courses to take. Now that students can see the results of their own evaluations, they may also be encouraged to take them more seriously. The policy also outlines everyone’s responsibilities for giving, handling, and receiving feedback. As before, evaluations will be a factor when professors are up for tenure and promotion.
The controversy surrounding student evaluations started in May 2007, when UBC’s Senate approved the new policy. Background work went back nearly a decade and was a part of the university’s strategy to improve its teaching quality.
The faculty association immediately filed a grievance and sought legal action, with support from the Canadian Association of University Teachers and the Canadian Union of Public Employees. They argued that UBC’s Senate had no legal right to pass that policy, as it infringed on their collective bargaining agreement.
The faculty union thus appeared more interested in its power struggle with administrators than in the enhancement of teaching quality at UBC. As the BC Court of Appeal wrote, “It is apparent that the overall purpose of the Policy is to improve the quality of teaching at UBC.” In fact, there was little merit to the faculty union’s position that the collective agreement conflicted with the adopted student evaluation policy. The judges dismissed the grievance, which they noted “was predicated on the basis of a conflict” that they found did not exist.
The ruling in favour of UBC also sets a precedent on the matter of bicameral governance for universities and colleges. Public institutions are typically governed by both a Board of Governors and an academic Senate. The faculty union’s collective agreement is with the Board, yet the student evaluation policy was passed by the Senate. The BC Court of Appeal ruled that no agreement signed by the Board of Governors can overrule something passed by the Senate, since the Senate has jurisdiction over academic matters. So even if a conflict existed between the collective agreement and the student evaluation policy, the Senate’s policy would prevail.
However, despite the legal victory, students are still largely left in the dark about their own teaching evaluations. The website that was supposed to provide transparent, open access for students is remarkably limited: evaluations won’t be posted if a course has multiple instructors, if a professor denies consent, if it is the first time a course is offered, or if enrollment is too low. The policy was condescendingly referred to as “rateyourprofessor@ubc” in one meeting held by the faculty union.
For students across British Columbia, the ruling affirms the autonomy of Senate decisions and provides a legal framework ensuring that students’ interests in academic matters are not left behind when the union and the administration disagree.
Keith Van is an accounting student at the British Columbia Institute of Technology.