Take Reddit posts with a grain of salt, because their veracity can't be confirmed, but here's a recent post that may be of interest to those of us thinking about how to use AI in law school teaching/learning:
The screenshot may be hard to read, so here is the transcription:
I used Claude to get an A in a law school class that I never once paid attention to or studied for. I fed quality materials like old outlines and others' class notes into Claude, crafted the right prompts, and rocked that A without understanding a single thing taught in the class. Learn this approach, and you can cruise to top grades without ever cracking a textbook. But it definitely takes a certain kind of guts to pull this sh_t off.
To be sure, because of concerns about the use of AI to prepare exam answers, some professors who have given take-home exams in the past will consider switching (or have already switched) to in-class exams.
But is use of large language models by law students always a bad idea? One user responded to the post above with a suggestion:
A better approach is to personalize your learning by giving Claude the materials and having it come up with new questions/answers based on the material. Try answering the questions yourself first. For stuff you got wrong, have it come up with related concepts. LLMs are amazing learning accelerators.
That seems like sound advice to me. I'm currently experimenting with having Claude generate new hypotheticals that I can use to supplement the 100+ practice questions I already give my students when teaching the basic Income Tax class.
I'd love to hear from other profs (and students) about how they are using Claude, ChatGPT, etc. to teach/learn course material.