Testing college admissions tests

A new book examines a prominent, and frequently misunderstood, part of the college admissions process

Standardized tests, once considered the best way to compare students from diverse backgrounds, are now criticized as biased and inconclusive. A small but growing number of colleges have made such testing optional for applicants.

Does a test-optional policy result in a more diverse student body or improve attainment and retention rates? Measuring Success: Testing, Grades, and the Future of College Admissions (2018, Johns Hopkins) investigates the implications of admissions testing with experts from both sides of the debate.

Edited by Jack Buckley of the American Institutes for Research, with Lynn Letukas and Ben Wildavsky of the College Board, the book provides a much-needed evaluation of standardized admissions tests in an era of widespread grade inflation.

“The book came about because there was no single place to find research evidence supporting the many claims that people are making,” says Buckley.

You wrote that standardized testing dates back to the start of the 20th century and almost immediately caused problems. Were the problems similar to what’s happening now?

Yes. It’s funny—history definitely seems to have repeated itself. You know, it’s one of these things that people seem to come back to when they realize there’s a need for assessments, selection processes or certification to be fair. But it’s like that old joke that democracy is the worst form of government except for all the other ones. Standardized tests are the least fair way to assess people, except for every other way.

If you say, “I want to make a new way to certify firemen, and I want it to be as fair and free of human bias as possible,” eventually you find yourself coming back full circle, and creating something with the same problems you tried to solve. It’s not perfect, and it never has been. But it’s better than all the other alternatives for some narrow tasks.

If three prospective students get perfect scores on the SAT or ACT, that doesn’t tell me which of those students is best equipped. So are we looking at the tests for the wrong reasons?

It would absolutely be a mistake for anyone in any kind of admissions selection process to use only a standardized test. And if they do, it’s admissions malpractice. Where I’ve seen test results most used—or perhaps most misused—has been in screening for remedial coursework in community colleges.

In the two-year sector, folks like the College Board and ACT have produced placement products such as Compass and Accuplacer to see whether you’re ready for credit-bearing coursework when you enter a two-year program. And the same warnings are always there: Don’t look only at the high school transcript. Look at other factors. Don’t just give them the test.

But, too often, I’ve seen colleges with a hard rule to just give the test, look at the results, and immediately route people into either credit-bearing courses or not. And that’s had really harmful effects on students. It’s always a bad idea to use only a test.

Testing is part of the American fabric. Do other countries rely on it as much as we do?

Many do. There are some other extremely high-stakes selection tests around the world. China is well-known for this. It has a centuries-long history of standardized tests, going back to the civil service exams. And that tradition carries forward to this day in an extremely high-stakes nationwide college entrance exam.

Finland is an interesting one, because people say, “Oh look. This is a country that doesn’t have all this K-12 testing like we do, and they always do very well in international comparisons.” They actually do have the equivalent of an exit exam, almost like a huge college entrance exam. It’s the only standardized test the kids take, but it’s one of the most important ones of their lives.

Do they face the same kind of criticisms as we do?

I think so. One criticism, in the U.S. and abroad, is that if there’s differential access to test preparation, students with more resources will get more test prep or coaching, which makes the system inherently unfair. I believe that’s a valid criticism.

That’s why it has been so interesting and important to see, at least in the U.S., various test makers taking steps to put high-quality test preparation in students’ hands for free, rather than leaving it to the big for-profit sector it has long been.

There’s not a lot of hard science to prove one method is any better or worse. So how long should it take before something like “test optional” is proved viable?

That’s a good question. Those who are most responsible recognize that, even though they’ve looked at all the information and believe they’re making the right decision, it is going to take a few years to figure out how it’s working.

Take the University of Delaware. They’ve started down the road to going “test flexible,” as I call it. But they’ve done it in a really careful, thoughtful way, and they’re going to be looking at multiple years of entering classes to see how it’s working.

It’s tough, because the rest of the world is changing around you. If you’re a moderately selective college, you’ve got a real business problem. You’re fighting to keep your seats full, and you want to make decisions that are in line with your institution’s global strategy, whether it’s to become more competitive, to recruit better students and better faculty or to increase your reputation.

But, don’t forget, a lot of other schools are trying to do the same thing.

I certainly recognize that chief enrollment officers have a real business strategy issue that they’re trying to solve here, too.

You write that many students, teachers, parents, policymakers—nearly anyone outside the testing industry—have little understanding of how tests are used.

That’s right. For me, this book came about out of frustration over the amount of misinformation out there, or at least the lack of solid research or empirical evidence behind some of the claims from all sides. This was the book I wish I’d had when I started the job at College Board.

The book reviews a broad assortment of admissions methodologies. But you also say that you can’t even get admissions officers to agree on a definition of, say, holistic review.

There’s a sociologist in Michigan who took a sabbatical year and worked in selective admissions at a few similar institutions. He observed what they said their admissions process was and compared it with what they actually did in practice. Each had its own idea of holistic admissions.

I was struck personally by that, from talking to many admissions people over the last few years. They know they want something, but they don’t have the resources or the technology or the processes in place to actually do it.

What is the big takeaway that you would like readers to get from this?

I would like people who get involved in this debate—whoever they are: student, family, admissions professional or just a concerned citizen—to calm down, take a breath and really look at the available evidence. The whole point is to step back from the heated argument and weigh that evidence. I would encourage that, if nothing else.

People shouldn’t jump to whatever inflammatory argument they’ve just heard in the most recent, scaremongering media article about how hard it is to get into Stanford, or some college’s press release about how they’re going test-optional because tests are unfair. Just step back from that and ask, “What am I hearing here? What are the facts? What does the evidence say?”

I know that’s wishful thinking, because not everyone has the time, or even realizes that the evidence is out there. But if we can get just a few percentage points more people to stop and think about what’s really true, I think it will be a success.


Tim Goral is senior editor of UB.
