The University of Michigan Law School is banning prospective students from using generative AI in personal statements. It’s all about guaranteeing that written materials “reflect the traits and writing ability of aspiring attorneys,” as Reuters describes it. That sounds all well and good, but the more you think about it, the less this policy makes sense.
Applicants have functionally infinite time to compose a personal statement and likely run it past multiple editors, which is not a drawback because it more closely approximates how attorneys actually write.
So why would pumping out a first draft through a large language model be any less authentic?
The school wants to assess the candidate’s “traits,” right? But the applicant is still feeding the AI all the relevant personal information. If the bot wanders into hallucinating accomplishments outside those confines — and the applicant submits that falsehood — then that’s on the student, not the AI. It’s no different from the attorneys who submitted fake cases because they never bothered to cite-check the AI’s draft: the fault lies with the lawyer, not the bot.
Personal statements took on elevated importance after the Supreme Court took a hatchet to affirmative action initiatives in higher education this Term, though the majority opinion left open the possibility for admissions officers to consider personal statements that discuss race. But ChatGPT isn’t great at conveying how race impacted an applicant’s life — any author crafting a statement that sufficiently sets itself apart that the school is willing to risk vexatious litigation challenging “proxies for diversity” took whatever the AI spit out and performed substantial surgery.
Much like an accomplished partner would take a junior associate’s draft and fix it up.
Moreover, to the extent socioeconomic status or being a first-generation graduate serves as one of those proxies for diversity, these artificial intelligence offerings might be replacing the rounds of editing that more privileged students currently get from a general counsel aunt or a Biglaw partner neighbor willing to give a draft a once-over.
This generation of artificial intelligence — at least the consumer-facing AI applications like ChatGPT and Bing — isn’t ready to match the hype. But it’s coming. By the end of the decade, telling students they can’t use generative AI in their writing workflow is going to sound like the long line of math teachers scolding us that “you’re not always going to have a calculator handy.”
Or, as a legal corollary, the legal writing instructors explaining how to Shepardize by book because attorneys won’t always be able to go on Westlaw.
By way of contrast, Reuters notes that not everyone sees AI as an admissions problem:
The University of California, Berkeley School of Law was the first to adopt a formal policy on the use of artificial intelligence in the classroom, but for now the school has decided not to specifically ban ChatGPT from the application process, said assistant dean of admissions Kristin Theis-Alvarez.
“We felt that the requirement to attest to the fact that ‘all essays and statements are my original work’ covers the use of generative AI such as ChatGPT in a way we are comfortable with for the time being,” Theis-Alvarez said, though she didn’t rule out asking applicants in the future to certify that they didn’t use AI.
That seems right.
Honestly, if Michigan sifts through undergraduate grades and LSAT (or GRE!) scores and still gets duped into admitting someone who hit the green arrow and printed out the result without a second glance…
Then Michigan has some deeper problems.
Use of ChatGPT prohibited on Michigan Law School applications [Reuters]