How College Board's Summer Snafus Will Impact Your Test-Prep Business


Updated on June 26, 2023

It was a rough summer for College Board and the SAT®.

First, there was the scoring controversy surrounding the June test. In a nutshell, the College Board administered a test that was significantly easier than other versions of the test. While that may sound like a win for all the students who took the June test, the poorly calibrated test actually may have resulted in lower scores for some test takers. At least that's what some students and tutors are claiming. And they've got evidence to support their claim. Many students received lower scores on the June SAT® than on previous sittings, despite having answered more questions correctly than they did in the past. College Board tried to explain away the issue by clarifying their practice of “equating.”

How the College Board "equates" SAT® scores in just 239 simple steps.

It didn't help. In fairness, those asking what happened to their SAT® scores were more interested in recourse than an explanation. So it's entirely likely that no answer would have been well received. 

Did College Board Pick Winners and Losers? 

What's clear to everyone is that the mistake had been made long before the "equating" process took place. Once the relatively "easy" SAT® had been administered, it was already too late to make everyone happy. There were going to be winners and losers, and College Board had only two options:

Option #1: Use a scoring table that closely matched those used in the past. This would effectively allow everyone who took the June 2018 SAT® to enjoy that advantage over everyone who had taken or will take an SAT® during this admissions cycle. In theory, this move would result in fewer winners and more losers. And in the world of college admissions, unfairly penalizing a small group of students is VERY different from providing an unfair advantage to a small group of students.

Option #2: Deploy a scoring table forged in the fires of Hell! Since the SAT®'s 2016 redesign, 50 correct answers on a math test generally garner a score above 700 points. In a few cases, 50 correct answers actually earned a 740. On the June 2018 SAT®, however, the easier test and harder "curve" resulted in students with 50 correct answers earning a 650 on the math section. That's a 90-point swing!
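To make that arithmetic concrete, here's a minimal sketch of how two raw-to-scaled conversion tables can map the identical raw score to very different scaled scores. The table values are invented for illustration (only the 740 and 650 figures come from the example above); this is not College Board's actual equating data or process.

```python
# Hypothetical raw-to-scaled conversion tables illustrating how "equating"
# can produce different scaled scores for the same raw score.
# All numbers are invented for illustration, not College Board's real tables.
typical_form = {49: 720, 50: 740, 51: 760}  # harder test form, generous table
june_2018    = {49: 640, 50: 650, 51: 670}  # easier test form, harsher table

raw = 50  # same number of correct answers on both forms
swing = typical_form[raw] - june_2018[raw]
print(f"Typical form: {typical_form[raw]}, June 2018 form: {june_2018[raw]}")
print(f"Swing: {swing} points")  # the 90-point swing described above
```

The point of the sketch is simply that the "curve" lives entirely in the conversion table: nothing about a student's performance changes, only which table their raw score is run through.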

At first glance, option #2 appears to result in relatively fewer losers and more winners, which could serve as the foundation for a pragmatic argument in its favor. But that's fundamentally different from the College Board's public defense of choosing to impose a harsh curve on the June 2018 SAT®. For starters, College Board works pretty hard to avoid (read: actively dispute) the perception that the SAT® is "scored on a curve."

Instead, the College Board argues that nobody received unfair treatment, either to their advantage or disadvantage.

Screenshot from the College Board's June 2018 SAT® Score Release FAQ

The argument is essentially that some tests are just more difficult than others. Difficult tests are scored—not curved, mind you—in such a way that results in higher scores relative to the number of correct answers a student bubbled in. Students in this position don't generally protest the practice. On the other hand, something feels unjust about answering more questions correctly and receiving a lower final score. There are anecdotal reports of students who retook the SAT® in June and performed better than they had on a previous test, only to find that they’d earned a lower score on the retake. And that may have been the exact scenario that the College Board set out to avoid when it made the decision to reuse a previously administered version of the SAT® on August 25th. 

Where Were You on the Morning of August 25th?

Me? I was at home with my family, celebrating my daughter’s first birthday. Later, when I learned of College Board's decision to re-administer the same SAT® test they gave in Asia in October 2017, I got a good laugh. It seems that College Board and my wife and I had all been caught in similar predicaments. 

As family and friends gathered around my daughter’s highchair, preparing to snap photos of her annihilating her "smash cake," I suddenly realized that we had a problem. My wife and I exchanged glances, hoping nobody would be able to tell that this wasn’t our baby’s first taste of cake. Maybe we’re just typical overly protective first-time parents, but we felt a genuine panic that we’d be exposed as irresponsible parents for having given our daughter a taste of cake before her first birthday. Would everyone think less of us? Had we somehow invalidated a rite of passage? I shouldn't exaggerate the degree to which these thoughts are haunting me. In all honesty, they're not. Even at the time, I found the whole idea pretty funny.

My daughter also enjoys eating sand, which is not ideal when you live at the beach.

But later, it seemed like an appropriately analogous situation to the mess resulting from College Board’s decision to re-administer an SAT® test. At the same moment that I posed in photos with the baby, hoping nobody could tell that this wasn't the first time my charge had experienced cake, College Board employees must have been pacing the halls at College Board HQ, hoping that among the thousands of test takers, nobody would give any indication that they’d tasted the main course months earlier.

Reenactment of College Board’s mood on the morning of August 25, 2018.

A Classic Case of Worlds Colliding

Now there are reports on Twitter that copies of the test had been circulated within the Asian test-prep market and possibly more broadly online. Consequently, it seems entirely likely that at least some students had seen questions from the August 2018 test before they sat down on test day. A test that College Board apparently believed had been lost to the ages was alive and well, possibly even hiding out within the United States.

For the College Board, it was a classic case of worlds colliding. 

"Jerry, if the October 2017 SAT® comes into contact with this world..."

Of course, this has led some observers to wonder how the College Board could possibly NOT have anticipated this seemingly inevitable result. Some are even asking whether these blunders will affect the SAT® or the college entrance exam market overall. And more importantly for the readers of this blog, what does this mean for your test-prep business and future test takers? There are some definite implications for both the short and long terms.

Is this the end of college entrance exams?

No. Calm down.

Are standardized tests the optimal way to evaluate every applicant? No. Of course not. But that’s not the question we’re trying to answer. To put it another way, identifying an imperfect system is not the same thing as offering a viable alternative. And that’s the point: nobody—not even College Board or ACT®—is arguing that standardized tests should be used as the singular or even primary metric for college applications.

Literally nobody believes the SAT® or ACT® should be considered a litmus test for merit or a 100% reliable predictor of college success.

Personally, that’s what I find so frustrating about the behavior of those who argue that the SAT® and ACT® have no place in college admissions. They consistently argue that the tests should NOT be installed as the singular method of evaluating student applications. It’s a bad-faith argument based on a straw-man fallacy. They’re arguing against a point that literally nobody is making. There is near universal agreement that standardized tests should not be given "undue weight" in college admissions. Reasonable people have put a good deal of scholarship into identifying the optimal amount of weight that should be placed on test scores, but virtually nobody argues that test scores should count for more than 30-35% of a student’s application. So let’s set that aside for the moment, and instead move from examining the “ought” to the “is” of the matter.

What is going on with SAT® and ACT® scores in college applications? That’s the question we’ll tackle later this week in the companion piece to this article. So be sure to subscribe to our blog to make sure you get a copy in your inbox.
