Illustrating ChatGPT’s limitations, Hammond described how he prompted ChatGPT to generate a description of the MSAI program. The bot’s response skewed toward the norm, incorrectly stating that MSAI is a two-year degree program when the program is in fact 15 months. This error is an example of bias, an erroneous assumption rooted in ChatGPT’s training data. The bot also omitted information unique to the program, namely MSAI’s industry partnerships.
“When you are looking for an answer that is best practices or conventional wisdom, those are marvelous places for statistical methods,” Hammond said. “But if you start wandering into the realm of the bespoke, or the unique, you’ll run into problems.”
Hammond stressed the importance of understanding the nature of a task and confining technologies to the tasks they were built to solve. He suggested a language model like ChatGPT might not, for instance, be suited to the task of determining how changing a clause in a contract will impact the document.
“You have to understand the length and breadth of the technology and where it collapses, and make sure the task is not one that demands something beyond its limits,” Hammond said. “ChatGPT might be good at taking a test. But, because of the nature of the underlying mechanism, it may never be capable of genuine reasoning, being imaginative, or thinking beyond the moment.”
Implications for law and legal services
McGinnis discussed his expectations regarding how computational technologies like ChatGPT may affect legal services and law, including increasing efficiency, improving accuracy, and reducing costs.
He suggested that areas of law that are conservative and stable over time, like trust law, might be more readily impacted by the technology than rapidly changing areas full of edge cases, like cybersecurity.
“I don’t think, at least in the foreseeable future, that AI tools will make lawyers obsolete. But they will be very important helpmates, as we have already seen with e-discovery and computerized legal search,” McGinnis said. “The question for lawyers and law students will be, how are you going to add value in a world where some of the simpler tasks are going to be taken away by machines?”
The panel agreed that, over the next five to 10 years, iterations of ChatGPT will focus on specialized domains (LawGPT, MedicineGPT, MarketingGPT). That shift underscores the need to evaluate, validate, and test the bot’s output, since, unlike search results, ChatGPT’s answers do not come with visible sources or citations.
Hammond predicted that, in the short term, a new “prompt engineer” role will emerge to improve these systems and refine prompts for specificity.
“Users who understand enough about a domain will engineer the appropriate prompts to guide the system in the right direction, and the prompts will become the learning driver for the next generation,” Hammond said. “Learn to communicate at the level of the goals that you’re trying to achieve, because that is the language that you’re going to use to control these systems.”
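Hammond’s advice to “communicate at the level of the goals” can be sketched in code. The helper below is a hypothetical illustration, not anything the panel described: it contrasts a vague question with an engineered prompt that states the domain, the goal, and explicit constraints.

```python
# Hypothetical sketch of prompt engineering: build_prompt and its
# structure are illustrative assumptions, not a real ChatGPT API.

def build_prompt(domain: str, goal: str, constraints: list[str]) -> str:
    """Compose a goal-oriented prompt: name the domain, state the goal,
    and list explicit constraints instead of asking a vague question."""
    lines = [
        f"You are assisting with a task in {domain}.",
        f"Goal: {goal}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

# A vague prompt leaves the model to fall back on "the norm".
vague = "Tell me about this contract."

# An engineered prompt guides the system toward the specific goal.
engineered = build_prompt(
    domain="contract law",
    goal="List every clause that allocates liability between the parties.",
    constraints=[
        "Quote the clause text verbatim.",
        "Flag any clause you are uncertain about instead of guessing.",
    ],
)
print(engineered)
```

The design choice mirrors Hammond’s point: the value the user adds is domain knowledge encoded as goals and constraints, which steers a statistical system away from generic, norm-skewed answers.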