For fans of rap music whose tastes go beyond whatever shows up on Rap Caviar, September 29th, 1998 is generally considered the greatest release date for rap albums ever. You had Outkast getting really into traditional rap subversions and astral funk excursions and spoken word discursions on their masterpiece Aquemini. You had Jay-Z, bolstered by the showtune-sampling hit single, cementing his crossover bona fides with Vol. 2… Hard Knock Life.* We also had Black Star’s deliberate antidote to the shiny suit era of rap in Mos Def & Talib Kweli Are Black Star, the last pre-break-up album from the legendary A Tribe Called Quest, and, sure, why not, a Brand Nubian record thrown in.
*That one, btw, holds a special place in my heart as the first non-Bad Boy CD I bought with my own money — which is a sentence that probably makes me sound positively ancient to you, if the aforementioned Rap Caviar shade didn’t already make it seem like these old man takes were emanating from a body that time has already turned to dust.
We’d like to think that LSAC commemorated the 20th anniversary of this momentous day by making the score release date for the September 2018 LSAT September 29, 2018. After all, they’ve made a firm commitment to actually releasing scores on the day they promised to. So it makes sense that they’d treat that day with the same pomp and circumstance that the rappers of 1998 did. Sure, LSAC has only mentioned rap music a few times — most notably the used and new rap CDs game from the June 2000 LSAT that was featured in Legally Blonde — but there must be, amongst the psychometricians who write this exam and the legal gatekeepers who run LSAC, a few B-Boys and B-Girls thrown in, right?
At any rate, only those who took the September 2018 LSAT can tell if September 29, 2018 was as great a release date as September 29, 1998. Will your LSAT performance catapult you into the upper echelons of law schools, as Jay’s performance on “Hard Knock Life (Ghetto Anthem)” catapulted him into the upper echelons of the pop charts? Or do you regret how the test went, like how Jay probably regrets his verse on “Can I Get A …” now that he’s married to a feminist icon?
Either way, LSAC has released the exam, and we’re going to break it down for you, section by section. So just like we did for the Logical Reasoning, Reading Comp, and Logic Games sections from the July 2018 LSAT earlier this year, we’ll have our contributors take a look at each section and the “curve,” and report our findings. If you took the September exam and it went great, these posts can be a chance to reflect on a test in a hopefully not-super-triggering way. If you took the September LSAT and are planning to take it again in November or beyond, these posts can be a chance to figure out what went wrong and how to prepare for the next exam. And even if you’ve never taken the LSAT before, these posts can be a chance to get a little insight into what your exam might look like. So without further ado, let’s get to today’s point-by-point breakdown of the Logical Reasoning section.
• Any discussion of Logical Reasoning should start with the question distribution. While a Logical Reasoning question could discuss any topic the twisted minds who write this test can think of — though I have to admit, as a fan of weird questions, September’s test was a little light on odd topics — there are only a few different things a question might ask you to do.
At Blueprint, we have a classification system that organizes these questions based on what the question asks you to do. We have three broad “families” of questions — the implication family (for questions that ask you to make an inference), the characterization family (for questions that ask you to describe some feature of an argument), and the operation family (for questions that ask you to make some change to an argument). And within each family, there are various types of questions classified based on the type of inference you’re asked to deduce, or the feature of the argument you’re asked to describe, or the type of change you’re asked to make.
By looking at the question distribution on a given exam — and comparing it to past exams — you can start to figure out trends in the Logical Reasoning section. Certain question types show up more frequently than others, and over time the test writers have made subtle but demonstrable shifts in the questions that show up the most frequently. If you’re relying on tests from, say, 1998, you’ll get a misleading impression of which questions are going to be the most prevalent on an upcoming exam. These recent exams will give us the best picture of which question types are likely to be more common on the November exam and beyond.
So here’s the question distribution from the September exam …
This exam mostly continues trends we’ve observed on previous Logical Reasoning sections, but with a few notable exceptions. Strengthen questions have recently emerged as the most common question type, and there were a lot of them here. Additionally, part of the ubiquity of Strengthen questions has to do with the increasing prominence of the Strengthen Principle variation of that question type — and four of the eight Strengthen questions were Strengthen Principle questions here.
The September test also maintained recent exams’ de-emphasis of the implication family — with only about 14% of LR questions from that family on the September exam. Also consistent with recent trends: nearly every question from the implication family was a Soft Must Be True question. This exam went pretty extreme in that regard — even now, it’s still pretty rare that an exam completely omits questions that ask you to make a deduction that “must be true,” as this exam did.
We’re also seeing fewer and fewer questions that ask you to describe some structural feature of an argument — like the main point, the argumentative strategy, or a role played by a given proposition. Less than 8% of the LR questions asked test takers to do any of those.
But this exam did buck some trends, too. After over a year of Disagree questions becoming increasingly prominent on the LSAT, this exam featured just one of those. This exam was on the low end of Necessary questions (which may have been good news for test takers, since these are among the more despised question types) and the high end for Weaken questions (less good news, because these are also pretty annoying to test takers). What was good news — to me, at least — was that after being left for dead on recent exams, a Must Be False question showed up once again (more on that particular question in a moment).
• Unlike the June 2018 and December 2017 Logical Reasoning sections, this test didn’t overemphasize any one particular commonly tested skill — like conditional statements and comparison fallacies in June 2018, or causation in December 2017. There was an even mix of questions that involved conditional statements (about five of these, per my count) and questions that involved causation (about six), with no common fallacy standing out as especially pervasive.
There were, however, several arguments that failed to address the fact that what we believe to be true is not always what is actually true, or that what we intend to have happen is not always what actually happens. At Blueprint, we say that these types of arguments commit a “perception versus reality” fallacy. On a given LSAT, there’s usually at most one argument that commits this fallacy. On this LSAT, there were four.
There was a question in which someone assumed that, just because musicians don’t intend to manipulate listeners’ emotions, listeners’ emotions aren’t being affected by music. There was a question in which someone confused whether people expected a certain action would benefit them with whether that action in fact benefited them. There was a question that turned on whether people who received an extra dollar when given change from a transaction actually “perceived” that they were getting an extra dollar. There was a question in which a scientist assumed that, because we can’t observe any active volcanoes on Mars, there probably aren’t any active volcanoes contributing to sulfur dioxide spikes (this could also have been classified as what we call an “absence of evidence” fallacy).
So, for whatever reason, this particular fallacy showed up a lot. It’s always tempting to attribute certain motivations to the test writers when something like this shows up. Like, maybe this is all commentary on our increasingly fragmented perceptions of the world — in which we frequently refer to things as “Fake News” and “alternative facts” when they don’t align with our perceptions or motivations. But of course, to attribute these motivations to these test writers just because I perceive them to be true would leave me vulnerable to accusations of this same “perception versus reality” fallacy.
• The two LR sections illustrated two different ways a section can progress. The second of these Logical Reasoning sections progressed in the way these sections typically do — with easy questions at the beginning, medium questions in the middle, and difficult questions towards the end. If anything, the disparity between the easy and hard questions in that section was more pronounced than usual. The section featured many of what I thought were easy questions all the way through question 17 or so, and then went on a pretty brutal run of several super tough questions at the end.
I thought the first LR section was a bit more unpredictable. Some harder questions were thrown in among the early questions, and there were some easier questions thrown in among the questions that are typically the hardest in a section. I imagine this section threw off the rhythm and shook the confidence of test takers who are used to going from easy to medium to hard questions on a given section.
• The question that dominated the post-exam online chatter I saw was a question late in the first Logical Reasoning section about “kindness.” People couldn’t even remember what kind of question that was, so I was super happy to see it was a Must Be False question. Why was I happy about this development? Because Must Be False questions are really only hard if you don’t know how to do them — they’re actually quite easy if you do. And I knew that my students, at least, would know how to do a question like this.
Here’s what you do on a Must Be False question: look for conditional statements. Diagram them. If there’s more than one conditional statement, see if you can link them together to make a deduction. Then look for an answer choice that gives you the sufficient condition of one of those conditional statements — whether stated in the question or deduced by you — without its necessary condition. That’s it.
And that’s all you had to do for this question. There were three conditional statements that linked up, allowing you to deduce that if two people are fully content in each other’s presence, then they must want each other to prosper. Which means that any two people on the planet who are fully content in each other’s presence — whether that’s Andre 3000 and Big Boi, Jay and Bey, Yasiin Bey (né Mos Def) and Talib Kweli, Q-Tip and Phife Dawg, some combination of Grand Puba, Sadat X, and Lord Jamar (or, you know, any other pair of people who didn’t necessarily release music 20 years ago) — must want each other to prosper. The right answer said that there are some people who are fully content in each other’s presence but don’t want each other to prosper — which directly contradicts the deduction here.
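For the logic nerds out there, the structure of that deduction can be sketched in a few lines of Python. The intermediate propositions below (“respect” and “kind”) are invented for illustration — the actual links on the exam were different — but the chain works the same way: the premises rule out every world where someone is fully content in another’s presence yet doesn’t want them to prosper, which is exactly why the right answer must be false.

```python
from itertools import product

# Hypothetical conditional chain (intermediate links invented for illustration):
#   content -> respect, respect -> kind, kind -> prosper
atoms = ["content", "respect", "kind", "prosper"]

def satisfies(w):
    # Each premise rendered as a material conditional: "A -> B" is "not A or B"
    return ((not w["content"] or w["respect"]) and
            (not w["respect"] or w["kind"]) and
            (not w["kind"] or w["prosper"]))

# Enumerate every possible truth assignment, keep only those consistent
# with all three premises
worlds = [dict(zip(atoms, vals)) for vals in product([False, True], repeat=4)]
consistent = [w for w in worlds if satisfies(w)]

# The credited answer claims: content AND NOT prosper.
# "Must be false" means no consistent world makes that claim true.
assert not any(w["content"] and not w["prosper"] for w in consistent)
print("'content but not prosper' is impossible given the premises")
```

In other words, the answer choice that pairs a sufficient condition (“fully content”) with the negation of its chained necessary condition (“want each other to prosper”) is the one that can never be true.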
• Finally, how difficult were these sections? After waking up early to do this exam, I gave a quick ranking, one through five, of how difficult I thought each question was. It’s hard to fully gauge how difficult a section is without seeing how many students answered each question correctly, but I tend to have a pretty good idea of what sorts of things trip up the typical test taker and what things will not. Now, this wasn’t the most mathematically or scientifically sound way to assess the difficulty, but if I were good at math or science I probably wouldn’t have taken the LSAT. Anyway, the average difficulty of the questions on the first LR section rated out to 2.8 out of 5, and the questions on the second section rated out to 2.78. So based on this, it was a little more difficult than usual, but not by much.