Putting data to work in higher ed

How one college provided faculty with actionable information to improve outcomes

Data can be a beautiful thing. It can reveal patterns, failures and sometimes surprises, as long as the measurements are consistent.

At Prince George’s Community College, that wasn’t the case. Each class was measuring different things, so campus leaders couldn’t quite see the big picture.

W. Allen Richman, dean of the Office of Planning, Assessment and Institutional Research, says the college has a set of general education outcomes, and each program has a set of learning outcomes. “We needed to know whether those program outcomes were supporting the gen-ed outcomes, and vice versa, to demonstrate the student has the requisite skills to graduate.”

High-speed scanners import the data into a single system to enable deep analysis. “It isn’t about seeking ‘silver bullets’ or quick fixes,” Richman says, “but instead involves the relentless review of current operations to remove ineffective processes and replace them with proven improvements.”

Developing a process of data collection and analysis doesn’t happen overnight, so how did you begin?

We started small by having faculty identify a culminating assignment for their class and use a common grading tool. For some, that was a graded rubric; others used a multiple-choice test with an answer key tied to a specific learning outcome. We scanned all of that into our system and could see how the two were connected, or not.

We don’t measure every student, every semester, in every class, but we do collect about 10,000 graded rubrics and multiple-choice tests each semester.

Through that single process of using a common rubric or a common multiple-choice answer key, we can look at course learning outcomes.
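The interview doesn’t detail how the college’s system does this analysis, so the following is only a rough sketch of the idea: assuming each scanned record carries a course, a learning outcome and a rubric score (all field names and the cutoff here are hypothetical), rolling the records up by outcome across sections might look like this.

    from collections import defaultdict
    from statistics import mean

    # One row per student per outcome, roughly as it might come off the scanners.
    records = [
        {"course": "ENG-101", "section": "A", "outcome": "thesis_statement", "score": 2},
        {"course": "ENG-101", "section": "B", "outcome": "thesis_statement", "score": 1},
        {"course": "ENG-101", "section": "A", "outcome": "organization", "score": 3},
        {"course": "ENG-101", "section": "B", "outcome": "organization", "score": 4},
    ]

    # Roll the records up by course and learning outcome, across all sections.
    by_outcome = defaultdict(list)
    for r in records:
        by_outcome[(r["course"], r["outcome"])].append(r["score"])

    CUTOFF = 2.5  # hypothetical "needs attention" threshold on a 0-4 rubric scale
    for (course, outcome), scores in sorted(by_outcome.items()):
        avg = mean(scores)
        note = " <- candidate for an action plan" if avg < CUTOFF else ""
        print(f"{course} {outcome}: mean {avg:.2f} over {len(scores)} students{note}")

The point of the rollup is exactly what Richman describes next: a pattern that is invisible in any single section becomes obvious once the same outcome is measured the same way in every section.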

That also allows you to assess the effectiveness of the course and the instructor, right?

Absolutely. If we think about maximizing student learning, the place where we’re tweaking that is in the classroom. Let’s say we’re all teaching English 101, and one of the learning outcomes of the course is writing a thesis statement. Faculty look at the data and might see that all of us are grading students fairly low on writing a thesis statement.

Then they would put an action plan into place to try to improve that in their classrooms. Finally, they go back and remeasure it. My philosophy is that these adjustments at the classroom level are going to add up to greater performance by our students overall.

How quickly can you make adjustments after receiving the assessments?

There are some K-12 schools that are capable of adjusting learning on the fly. The push there is to assess children based on their performance and make those adjustments right away.

Higher ed is not quite to that point yet. Right now we collect the data in the fall and deliver the results to faculty in mid- to late January. Then faculty work on an action plan they can implement the following fall.

Unfortunately, we’re not collecting data at a rate that can actually impact the learning in the current classroom. The impact will be to future learners, if you will, to make the classroom better for the next cohort coming through.

Can higher ed become more immediately responsive in assessment and adjustment?

I definitely think that’s the future. It will come as we get deeper into analytics.

In higher ed, we’re looking to bring in all kinds of data. We’re looking at the classroom, of course, but there is also plenty of other data on campus about student activities: when and where students log into our LMS, or where they use their ID cards to check in.

That kind of data is all part of the bigger picture. We’re not quite there, but I think we’re close. And it will really help us identify those students who are at risk.

In what way?

For instance, there are course combinations that we know are more difficult. Maybe a student signs up for two lab sciences and a history course and some other writing-intensive course.

If we see from their learning outcomes that their writing skills aren’t up to par, the information tells us that student has a higher chance of failing the course. Therefore, we need to have plans in place to work with that student, to make certain that they go to the writing center for help with that writing-intensive class.

We can predict some of these things and intervene before the student gets in too deep.
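Richman doesn’t describe the college’s actual model, so treat this as a toy illustration only: a simple rule that flags a student whose prior writing-outcome scores look weak relative to the writing demands of an upcoming schedule. Every name, field and cutoff below is an assumption for the sake of the example.

    from dataclasses import dataclass

    @dataclass
    class Student:
        name: str
        writing_outcome_score: float    # mean rubric score on writing outcomes last term (0-4)
        writing_intensive_courses: int  # count in the upcoming schedule
        lab_sciences: int               # count in the upcoming schedule

    def needs_early_intervention(s: Student) -> bool:
        # Heavy course combination plus weak demonstrated writing skill -> refer for support
        heavy_load = s.writing_intensive_courses >= 1 and s.lab_sciences >= 2
        weak_writing = s.writing_outcome_score < 2.5  # hypothetical cutoff
        return heavy_load and weak_writing

    s = Student("example", writing_outcome_score=2.0,
                writing_intensive_courses=2, lab_sciences=2)
    if needs_early_intervention(s):
        print("Refer to the writing center before the term starts.")

A real early-alert system would weigh many more signals than this, but the shape of the decision is the same: combine what the schedule demands with what the learning-outcome data says the student can already do.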

But it’s not just about the courses; it’s also about what the student brings into those courses. There are definitely students who can handle that really heavy load.

And there are others with different characteristics—they can handle it but they need extra support, and that’s where we have to become more responsive in higher ed. I believe that’s where we’re headed.

Considering the effort involved in collecting data, was there faculty resistance to this new procedure?

Yeah, it was pretty painful for the first year. But the leadership at the institution was very supportive. They saw the benefit of the process, so it really wasn’t given as an option.

How did it evolve?

Each department was told to agree on a common assignment and how it would be graded. The common assignment would allow us to collect data and measure things, and develop a course of action—that was the plan.

But it didn’t go over well at first. That first round, the challenge was simply getting faculty to collect the data at all. When we processed that initial data and got it back to them, though, it opened their eyes. They recognized patterns from their own classes, but they had no idea the same patterns showed up in every class.

They see that the students in their Psych 101 class always struggle with the notion of a normal curve and population distributions. But it doesn’t always click in their heads that, across the board, the faculty teaching this course have students struggling with this specific skill.

So they got together and discussed what they thought was happening, and how they could improve on that. When they made an adjustment and could see the benefit, they were sold.

Fast-forward to almost six years later, and we have 100 percent compliance—meaning that everyone turns their data in on time. Now, it’s really just part of the institutional DNA.

I get the feeling the process wasn’t as smooth as that.

It was probably three years of really pushing the boulder up the hill every day and waking up the next morning and finding it back at the bottom. As with all culture shifts, there were some departments that jumped on this right away and there were holdouts who waited to see how everyone else liked it.

The other thing is, we also have failures. Those have been just as interesting. Sometimes an action plan makes logical sense, but it has no impact on the learning outcome.

That’s where our process of immediately going back and collecting new data makes sense. If you don’t measure it right away, you won’t know if it’s having the desired impact. Unfortunately, a lot of assessment processes don’t do that. They don’t come back and reassess immediately after an action plan is in place.
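As a minimal sketch of that remeasurement step (the scores and scale below are made up, not the college’s data): compare the same outcome before and after the action plan, and only call it an improvement if the numbers actually moved.

    from statistics import mean

    before = [1, 2, 2, 1, 3, 2]  # thesis-statement rubric scores, term before the action plan
    after  = [2, 3, 2, 3, 3, 2]  # same outcome, remeasured the following term

    change = mean(after) - mean(before)
    print(f"mean before {mean(before):.2f}, after {mean(after):.2f}, change {change:+.2f}")
    if change <= 0:
        print("No measurable improvement: revisit the action plan rather than assume it worked.")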

Have you had interest in the process from other schools?

I’ve consulted with a number of institutions that want to try components of this. As I said, we had tremendous support from our president and the vice president of academic affairs. They really just weren’t willing to take no for an answer.

The biggest hurdle some institutions face is leadership that isn’t as supportive, but even then we’ve been able to implement pieces of the process.

Tim Goral is senior editor.
