A Look at the September 2017 LSAT: Logical Reasoning

On September 16, 2017, droves of people — those hoping to soon begin their legal tutelage at an accredited institution of jurisprudence — took by land, sea, and air to ad hoc testing centers opened by the Law School Admissions Council of Elders. At these testing centers, these legal apprentices in waiting were given a test. Their answers to this test could open pathways to a brighter legal future. A chance to matriculate to institutions at which the sharpest legal minds are forged. The exam these aspirants would take was called the September 2017 LSAT. And unless they were unfortunate enough to be in central Florida; Boise, Idaho; Savannah, Georgia; or Richmond, Virginia, take this test is what they did.

After this exam, there were rumors, hearsay, and scuttlebutt bruited about regarding what exactly was on that test. As the guardians of the sacred knowledge that unlocks the answers to this exam, Blueprint’s Consortium of LSAT Instructors sought to discern what the Law School Admissions Council of Elders asked on this exam. We instructors heard tales of the exam in its immediate aftermath, pieced together a more detailed appraisal a few days later, and gathered colorful tales far and wide from some of the most traumatized test takers.

But until October 11 — the date the Law School Admissions Council of Elders bequeathed to us the September 2017 LSAT — we were never quite sure what was on the exam. With keen interest, the learned minds of the Consortium of LSAT Instructors inspected the exam, hoping to uncover deeper truths about not only this exam, but what future horrors await the next gathering of law school hopefuls in December and beyond. Here are their findings …

Logical Reasoning

OK, OK. Enough with the pseudo-mythic posturing. The truth is that we wanted to inject some drama, stakes, and intrigue into this post, knowing it would be about one of the less interesting LSATs in recent memory. The September 2017 LSAT, from front to back, was predictable, standard, and eminently fair. Which is great news if you’re one of many who took this exam. But it’s less great if you’re tasked with writing a hopefully interesting post about it.

The overall meh-ness of the September LSAT (from my perspective, at least) is most evident in the Logical Reasoning sections. Recent exams have had some interesting distributions of questions, like the proliferation of Soft Must Be True questions on the June 2017 exam, or the recent expansion of Strengthen questions.

On the September exam, however, we got a pretty standard distribution of questions. Yes, there were slightly fewer Implication questions (Must Be True, Soft Must Be True, Must Be False) than normal (there were only 8 in total, while that figure is normally around 10). And there were slightly more Disagree questions (4, as compared to the usual one or two) and Weaken questions (5 next to the typical 3). But, on the whole, if you apportioned study time based on the historic prevalence of these questions, emphasizing Soft Must Be True, Flaw, Strengthen, and Necessary questions, you would have been exceedingly well prepared. Again, good for the test taker, bad for the guy trying to find nuggets of interest here.

If there was anything that was somewhat noteworthy in these LR sections, it’s that diagramming and conditional statements were all over the place. The number of “diagrammable” questions (our term for questions that involve conditional statements, transitive deductions, and other variants of formal logic that require putting pencil to paper) varies from exam to exam, but is usually fewer than 10. This exam, by my count, had 13 questions that involved conditional statements, 11 of which were made significantly easier by diagramming. That’s 21.5% of the Logical Reasoning sections, which is significant.

But again, this is great news for test takers. Diagramming is one of the few techniques that, if mastered, will get you to the correct answer, with near absolute certainty, every single time. Assuming you did master this skill, you should have felt immense confidence as you worked through the diagrammable Must Be True, Soft Must Be True, Parallel, Sufficient, and Necessary questions strewn throughout this test.

The most fun items were found on the margins of these sections. There was a widely reported Flaw question involving researchers who scared the bejesus out of some baby crows by wearing scary, disfigured “caveman masks.” Truly some Crow-nenbergian body horror in that one (sorry). The fallacy in that one was decidedly less interesting — it just involved realizing that for an animal to “pass on” behavior to a second animal, that second animal would have to engage in the learned behavior — but we’ll take our fun where we can find it on this test.

There was also a question that subtly engaged in circular reasoning, one of the least common logical fallacies on this test. Oh, and there was a pretty odd question that asked you to describe the findings of a study involving the behavior of ravens when they feast on the carcasses of dead animals in the winter (there was a bit of a scary bird theme on this exam, as you can see). It was basically a hidden Soft Must Be True question, asking for a description that “best fits” the study described in the stimulus.

The overall difficulty of these questions struck me as mild. The hardest question, to my mind, involved weakening the notion that the gold found on ancient artifacts must have been taken from a mine in western Asia, because that mine was the only known source of that type of gold. Most test takers probably looked for an answer choice that said there were other, potentially unknown, mines from which this gold could come. However, the right answer discussed how the west Asian mine could have deposited gold into a riverbank, suggesting the gold for the artifact could have been taken from the riverbank, not the mine. Kind of an ore-riginal answer choice (again, sorry). There was also a somewhat difficult Strengthen EXCEPT question that required test takers to recognize that Shakespeare’s use of common Latin phrases would not provide evidence that he understood Latin or used Latin translations of ancient Greek plays. Kind of like how knowing the choruses to “Despacito” and “Mi Gente” does not prove that you speak Spanish.

In all, half of this exam was straightforward and pretty mild in its difficulty. Check back tomorrow when we’ll talk about the other half, Reading Comp and Logic Games, and the curve …
