
Are Time-Pressured Exams on Law, Such as the LSAT, Fair and Useful?
Part Two in a Series on Timed Tests and Legal Education

By VIKRAM DAVID AMAR

Friday, Apr. 01, 2005

In my last column, I described and analyzed some recent research concerning the Law School Admission Test (LSAT). This research - particularly a recent law review article by Professor Bill Henderson - argues that the LSAT places substantial (and perhaps undue) weight on the speed with which test-takers can process questions, rather than on their ability ultimately to reason through to the best answers.

Further, the research suggests that although LSAT scores do correlate with law school grades - a common justification for admissions offices' reliance on the LSAT - this correlation may exist largely because law school grades, too, are frequently based on exams that place a lot of weight on speed rather than reasoning power.

Specifically, law school exams are often of the "racehorse" type - three or four hours of "issue spotting" and quick analysis. The grades that result from these kinds of exams correlate much more strongly with LSAT performance than do law school grades based on eight-hour (or longer) "take-home" exams or on term papers in lieu of exams.

To the extent that the emphasis law school admissions directors place on the LSAT is justified primarily by the LSAT's correlation with performance on racehorse law school exams, legal educators must then ask whether all of these time-pressured exams measure the kinds of skills that real-world lawyers should be developing. After all, producing effective real-world lawyers - not simply speedy test-takers - is what law schools are supposed to be in the business of doing.

An Initially Plausible But Unsatisfying Response: The Bar Exam Is A Timed Test Too

One initial response legal educators might make is that the Bar Exam that all new lawyers must take is also a time-pressured, largely issue-spotting affair. Thus, defenders of the status quo might argue, time-pressured exams before and during law school are perfectly appropriate training and/or screening devices for those who want to join a profession that uses a similar test as a non-negotiable barrier to entry.

But as Professor Henderson intimates, this response would simply widen the debate - for the structure of the Bar Exam is itself something over which legal educators should have a fair amount of influence. Bar examiners in most states involve law professors in devising and grading bar exam questions each year. Accordingly, if the legal educational establishment and the practicing bar became convinced - and forcefully communicated - that the issue-spotting question format on the Bar Exam did not do a decent job of assessing the skills needed for the successful practice of law, then there would be every reason to hope and believe that Bar Exam makers would, over time, heed this professional consensus and adapt the structure of the exam accordingly.

Indeed, there is precedent for this kind of give-and-take. The largest single "consumer" of the SAT college entrance exam is the University of California undergraduate system, with its huge number of applicants. The UC system concluded that the old SAT did not do a good enough job measuring the right skills, and threatened to remove the SAT from the admissions process entirely unless changes were made. In response, the College Board substantially revised the format and content of the SAT this year.

Similarly, legal educators could, together with practicing lawyers, put significant pressure on Bar Exam makers to rethink their methods if research and analysis suggest that the current Bar Exam format - similar to that of racehorse law school exams - fails to reward the right skills.

What Skills Do Time-Pressured Legal Exams Measure, or Fail to Measure?

All of this brings us to two key questions: What are the most important skills in practicing law? And, how are those skills measured and/or ignored by time-pressured testing formats?

It's true, of course, that lawyers must often think and act on their feet, and time-pressured exams might be assumed to do a fair job of measuring quick thinking. Lawyers who make oral arguments before appellate tribunals must process information and ideas and respond quickly. So must trial attorneys deciding whether - and how - to object to (or defend) the introduction of testimony that is, at that very moment, coming out of the witness's mouth, or of a document that is being handed to them for the first time.

But it turns out that, at least as far as the LSAT and perhaps law school racehorse exams are concerned, Professor Henderson's (tentative) research suggests that oral advocacy skills of the kind most central to in-court trial or appellate work do not correlate very well with performance on time-pressured legal exams. Using data drawn from one school (admittedly a very small sample), Professor Henderson found little or no relationship between students' LSAT scores and their performance in the oral advocacy component of a moot appellate argument class. Thus, while the timed tests may be testing for quickness of some sort, it may not be quickness of the sort that makes an effective oral advocate.

Similarly, performance on the oral advocacy component of moot court in Professor Henderson's sample did not correlate well with students' law school grades more generally - and the majority of those grades were determined on the basis of in-class, time-pressured exams.

Time-Pressured Exams and the Ability to Write Effective Legal Documents

Meanwhile, most lawyers focus not on oral advocacy but on written work - motions, memos, briefs, contracts, releases, settlements, corporate filings, and other documents. That leads us to ask: do the skills required in drafting these documents correlate with the skills measured by timed law school exams?

This is not an easy question to answer. Interestingly, Professor Henderson has found some correlation between law school grades generally and the written brief component of the moot court class he examined. And remember, most of a person's law school grades are based on time-pressured, in-class exams. (LSAT scores, though, did not seem to correlate with the brief-writing grades at all.)

This is one data point, then, suggesting the relevance of some time-pressured exams to some real-world-like written products. And many people believe that even when it comes to courtroom advocacy, written products - briefs and motions - matter more to client outcomes than do live hearings or oral arguments.

There May Be Good Reasons to Believe that Longer, Take-Home Exams Better Test Legal Skills

On the other hand, it seems intuitive that eight-hour (or longer) take-home exam formats simulate real-world written work much more accurately than do far shorter in-class exams.

One reader of my column of two weeks ago - a recent NYU law graduate who works as an associate at one of the nation's 20 largest and most elite law firms - wrote me to thoughtfully express his view:

The classic three hour 'issue-spotter' often rewards hurried prose and cursory (if broad) analysis, rather than careful reasoning and clear written expression. But the majority of legal challenges -- briefs, articles, closing arguments, legislation -- benefit from careful thinking and concise expression much more than from accurate snap judgment-making. Oliver Wendell Holmes and Justice Jackson were probably capable of dashing off a sterling three-hour issue spotter, but that is not why we hold them up as models.

To similar effect are comments made by some inside legal education. On the weblog "The Volokh Conspiracy," law professor Steven Lubet of Northwestern University Law School is quoted from a piece he wrote for The American Lawyer:

There is almost nothing about the typical law school examination that is really designed to test the skills involved in law practice. And many aspects of exams are positively perverse. Take time pressure, for example. By their nature, exams are time-limited, usually to about three or four hours, during which it is necessary to assess the problems, decide on the answers, marshal the material (whether strictly from memory or from an "open book"), and then write, hopefully, coherent answers. There is no opportunity for reflection, research, reconsideration or redrafting. You simply dash off your answer and hope you got it right.

Professor Lubet goes on:

The dirty secret (if it is a secret) is that law schools rely on exams primarily because they are easy to grade. The intense time pressure guarantees that the answers will be relatively short and, even more important, that quality will differ significantly. Exams do a great job of dividing test takers into measurable categories, even if those categories measure nothing more than an ability to take tests in an artificial, nonlawyerly setting.

In my next and last installment in this series, I shall explore some possible counterarguments to these condemnations of the traditional approach that almost every law school adopts when it comes to testing students.


Vikram David Amar is a professor of law at the University of California, Hastings College of Law in San Francisco. He is a 1988 graduate of the Yale Law School, and a former clerk to Justice Harry Blackmun. He is a co-author of the Cohen and Varat constitutional law casebook, and a co-author of several volumes of the Wright & Miller treatise on federal practice and procedure. Before teaching, Professor Amar spent a few years at the firm of Gibson, Dunn & Crutcher.
