Colleges develop faculty evaluation equations

3 ways to make effectiveness data work in faculty performance evaluations

Measuring faculty effectiveness has never been a perfect science—and it has always held potential for contention among instructors and administrators.

Today, tighter budgets, greater demands in terms of student outcomes and service responsibilities, and a push for hard data increase the pressure to develop fair and accurate evaluation systems.

The types of information collected on instruction continue to evolve, and it’s crucial for administrators to determine how that data—including from student evaluations—can best be used to improve faculty performance. Following are three actions campus leaders can take to achieve that goal.


The dos and don’ts of student ratings

Do call them ratings. Evaluation is how the ratings are used by faculty and administrators.

Do make the purpose of the ratings clear to students (and faculty).

Do write actionable questions that can improve teaching.

Don’t focus on rogue feedback that isn’t part of a pattern.

Don’t compare or rank faculty ratings.

Don’t over-rely on them as a source of information.

1. Tailor the approach.

Though teaching, service and research remain the standard categories for evaluation, they don’t capture the full complexity of a faculty member’s job, says Jeffrey Buller, director of leadership and professional development at Florida Atlantic University.

Well-intentioned institutions may let faculty choose from multiple tracks that emphasize or de-emphasize these three categories for a period of time. He calls these systems “a good start.” But rather than making smaller changes to existing systems, he says, “faculty evaluation is one of the few areas in which it really does pay to re-invent the wheel.”

He suggests weighting each activity instructors are expected to engage in, and allowing them to adjust those weights to reflect individual situations. The process should be aligned with the institution’s mission and designed with its faculty makeup in mind.
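As a rough illustration of that weighting idea, the minimal sketch below combines per-activity ratings into a single score and lets the weights shift for an individual instructor. The category names, default weights and 1-to-5 rating scale are hypothetical assumptions, not Buller’s formula or any institution’s actual system.

```python
# Minimal sketch of a weighted faculty-activity score. The activity names,
# default weights and 1-5 rating scale are illustrative assumptions, not
# any institution's actual evaluation formula.

DEFAULT_WEIGHTS = {"teaching": 0.5, "research": 0.3, "service": 0.2}

def evaluation_score(ratings, weights=None):
    """Combine per-activity ratings (e.g., on a 1-5 scale) into one score.

    `weights` lets an individual instructor shift emphasis among activities,
    as long as the adjusted weights still sum to 1.
    """
    weights = weights or DEFAULT_WEIGHTS
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(w * ratings[activity] for activity, w in weights.items())

# A research-focused assistant professor might negotiate a heavier research weight.
print(evaluation_score(
    {"teaching": 4.2, "research": 4.8, "service": 3.5},
    weights={"teaching": 0.3, "research": 0.6, "service": 0.1},
))
```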

At the University of California, Merced, for example, research counts heavily in evaluations. But Provost and Executive Vice Chancellor Tom Peterson asks deans to take into account that the institution’s faculty is made up mainly of untenured assistant professors.

The university, which opened in 2005 as the newest branch of the UC system, also has “heavier than normal” faculty service requirements, according to its website. Previously, faculty performance data had been compared with more established campuses nationwide, but administrators found that process to be “almost irrelevant,” Peterson says.

The focus is now on supporting faculty in teaching and research, and on encouraging them to keep service commitments “within a reasonable range.”

Standard performance criteria may not apply to an increasing portion of non-tenure-track instructors nationwide—the so-called “teaching mission” faculty, says Angela Linse, executive director and associate dean of the Schreyer Institute for Teaching Excellence at Penn State, which aims to advance excellence in the university’s teaching and learning community.

2. Go beyond student ratings.

Instructors at any institution, regardless of how they’re doing with service or research, are expected to excel in teaching—more so now than ever before, says Linse. She cites higher tuition, lean budgets and an increased need for accountability as reasons why.

Student ratings are by far the most commonly used data for evaluating teaching. Ninety-four percent of deans cited them as an “always used” tool, up from 88 percent a decade earlier, according to a 2014 AAUP survey, “Changing Practices in Faculty Evaluation.”

Craig Vasey, who led the committee that ran the study, says he would be “surprised if the pendulum was swinging the other way.”

Yet skepticism about the meaningfulness of such data remains, adds Vasey, who is chair of the Classics, Philosophy and Religion department at the University of Mary Washington in Virginia. In fact, student ratings are a notorious source of distrust among faculty.

They may view the results as:

  • biased against marginalized or minority populations (including women and people of color)
  • tied too closely to grades
  • unreliable due to low response rates
  • a popularity contest

A few institutions—such as Adelphi University on Long Island, New York—no longer require faculty to submit student ratings as part of their tenure packages. That decision was made during contract negotiations with the AAUP, in light of ongoing national research on validity and potential implicit biases, says Interim Provost and Executive Vice President Sam Grogg.

But there isn’t widespread agreement about these concerns, and some believe recent studies are flawed and limited.

Linse of Penn State says student ratings will continue to be used “because, in general, they reflect reality.” According to 80 years of research, she adds, they are “valid representations of students’ perceptions of the learning environments created by the faculty.”

Bias does exist, she says. The problem is misinterpreting results. “It’s a broad-brush instrument,” she adds.

Faculty and administrators need to review the data in context—they should not compare faculty to one another or put too much stock in rogue comments that don’t reveal a pattern. Instead, the focus should be on how an individual faculty member’s distribution of scores changes over time. That simple change “could decrease the anxiety of faculty by a lot,” Linse says.
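As a loose illustration of that longitudinal view, the sketch below summarizes one instructor’s ratings term by term rather than ranking instructors against each other. The terms, 1-to-5 scale and scores are made up for the example.

```python
# Illustrative sketch: summarize one instructor's rating distribution per term
# instead of comparing instructors. Terms, scale and scores are hypothetical.
from statistics import median, quantiles

ratings_by_term = {
    "Fall 2016":   [4, 5, 3, 4, 4, 2, 5, 4],
    "Spring 2017": [4, 4, 5, 3, 4, 5, 4],
    "Fall 2017":   [5, 4, 4, 5, 4, 4, 5, 3],
}

for term, scores in ratings_by_term.items():
    q1, _, q3 = quantiles(scores, n=4)  # quartile cut points
    print(f"{term}: n={len(scores)}, median={median(scores)}, "
          f"middle half roughly {q1:.1f} to {q3:.1f}")
```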

Over-reliance on student ratings is another issue. Linse suggests increasing use of faculty peer evaluations and external reviews of syllabi and assignments.

The need for alternative data led Monica Cox, department chair of engineering education at The Ohio State University, to develop an evaluation tool called G-RATE through her Purdue University-affiliated startup, STEMinent.

The tool captures real-time data in the classroom, such as the type of feedback or content delivered, or the number of peer or faculty-student interactions.

“It’s not punitive in nature,” she says. “I give this information to faculty and, if they want, they can use it as part of their portfolio review or to improve their teaching.”

The American Council on Education (ACE) advocates emphasizing qualitative tools such as journals that can be shared among peers.

“Faculty development should be this continuous, self-reflective process,” says Steven Taylor, director of academic innovation and initiatives at ACE. “Out of that process you have qualitative artifacts that can be submitted and used as evidence.”

Student input can also be collected through “knowledge surveys” that faculty can administer. These surveys ask students to gauge their ability to answer content questions—with responses ranging from “I have full confidence that I could answer this question correctly” to “I have no idea how to begin to answer this question”—before and after instruction is delivered.

“Students are very well positioned to assess their own level of knowledge and learning. You can get good quantitative data,” says Sherri Hughes, assistant vice president for ACE’s leadership division and a former provost at Marymount University in Virginia.

It’s important, when collecting data, to give students direction about the type of feedback that is most helpful, and how it will be used, Hughes says. But for this—or any—data to be useful, the results need to be looked at longitudinally, not just as a snapshot.
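As a rough sketch of how those before-and-after confidence responses might yield a simple quantitative measure, the example below averages the per-question gain for one student. The 0-to-4 confidence scale and the sample responses are illustrative assumptions, not a published instrument.

```python
# Toy sketch of scoring a pre/post "knowledge survey": students rate their
# confidence on each content question before and after instruction.
# The 0-4 scale and sample responses are made-up illustrations.
from statistics import mean

def mean_confidence_gain(pre, post):
    """Average per-question change in self-reported confidence (post - pre)."""
    if len(pre) != len(post):
        raise ValueError("pre and post surveys must cover the same questions")
    return mean(b - a for a, b in zip(pre, post))

# One student's confidence (0 = "no idea how to begin", 4 = "full confidence")
# on five questions, before and after the unit.
pre_survey = [1, 0, 2, 1, 3]
post_survey = [3, 2, 4, 3, 4]
print(f"Mean confidence gain: {mean_confidence_gain(pre_survey, post_survey):.2f}")
```

Aggregated across students and repeated over terms, gains like this can be read longitudinally rather than as a snapshot, in line with Hughes’s point above.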

3. Offer support, not punishment.

Administrators must make it clear to faculty how and why performance data will be used. Explaining how it fits into the strategic mission of the institution and how it can help improve performance can go a long way toward gaining faculty buy-in. But evaluation alone is not enough.

Many faculty may be well-respected as authorities in their field, but need resources and support because they haven’t been trained in instructional techniques, notes Taylor at ACE.

Actively supporting faculty development through a well-funded center focused on improving instruction and student learning outcomes “is good practice, but also a good financial return,” he says. Improving teaching can lead to better retention rates and other student outcomes that improve revenue.

UC Merced, with early-career faculty dominating its ranks, has established a formal mentorship program within and beyond the UC system, and most faculty participate. The university also partnered with the National Center for Faculty Development and Diversity, which offers mentoring, workshops and training in topics from grant funding to work-life balance.

More than half of UC Merced faculty use these resources. The initiatives are designed to help faculty weather what Peterson describes as particularly demanding times in academia.

“The very premise that higher education is a benefit to all society is being challenged,” he says. “These views make it more and more difficult to succeed in an academic environment.”


Ioanna Opidee is a Connecticut-based freelance writer. 
