4.9 Law School Policy on Generative AI

“Generative AI” refers to software, such as ChatGPT, that can generate original text (including research papers, exam answers, etc.) comparable in many respects to human writing. This is distinct from other forms of AI that have been commonplace for longer, such as the software in familiar tools like Google and Westlaw that facilitates the search for and extraction of information from data sources. The Law School has adopted the following policy, effective Autumn Quarter 2023. This policy may change as circumstances warrant or based on further guidance from the University.

  1. A default policy applies in the absence of an explicit policy set by the instructor for the class.
    1. For exams, unless the instructor specifies otherwise, the use of generative AI is strictly prohibited.
      Commentary: Students must not prompt or engage with generative AI in any way during an exam, and (as has always been the case) every word of the exam answer must be produced by the student themselves during the exam period. This default prohibition does not extend to the use of generative AI when studying for an exam, so long as that use does not violate any other aspect of this policy. For example, while preparing an outline in advance of an exam, a student could ask tools like ChatGPT for the holding of Marbury v. Madison or for a list of arguments for and against judicial review. In this respect, the use of generative AI is not categorically different from the longstanding, uncontroversial use of resources like Google.
    2. For all work that a student submits for any form of academic credit, generative AI may not be used in a way that would constitute academic plagiarism if the generative AI were a human author whose work was used without attribution.
      Commentary: The crux of this policy, which applies in any context in which a student submits work for evaluation, is the application of existing academic integrity policies.

      Thus, some but not all uses of generative AI in preparing work to submit for a class would violate this default policy. Using generative AI to brainstorm ideas for a paper—akin to asking a search engine (like Google), professors, or professional contacts for suggestions—would not violate this default policy. Nor would using generative AI for proofreading. The principle here is that if generative AI is performing a task that other technologies (or people) currently perform, and such uses are not a violation of academic integrity, then those uses of generative AI do not violate the default policy.

      In contrast, using generative AI to compose part or all of a paper, or copying or paraphrasing output from generative AI and passing it off as one’s own writing, would violate this default policy. The same principle applies here: if generative AI is performing a task that other technologies (or people) currently perform, and such uses are a violation of academic integrity, then those uses of generative AI do violate the default policy.

      The default policy sweeps more broadly than a prohibition against academic plagiarism, however. A student cannot avoid limits on the use of generative AI that would otherwise apply by attributing their work to generative AI. It is not plagiarism to copy or paraphrase another’s work, so long as one properly cites that source. But under the default policy, the use of generative AI, whether or not it is attributed, is treated like the use of a human-created source without attribution. Thus, using generative AI to compose all or part of a paper, even if that use is fully documented in properly placed footnotes, is a violation of the default policy.
  2. Instructors have flexibility to set policy for the use of generative AI in their classes. The Law School’s default policy applies only when an instructor has not set their own.
  3. Deviations from the default policy shall be stated in writing in the syllabus posted on Canvas for the class. The goal of this aspect of the policy is to ensure that all parties have advance notice of the policies that apply to them, and that those policies are as clear as practicable.
  4. All uses of generative AI must be consistent with the University’s policies on confidential and personal information. Under University policy governing the use of generative AI:

    The use of confidential data with publicly available generative AI tools is prohibited without prior security and privacy review. This includes personally identifiable employee data, FERPA-covered student data, HIPAA-covered patient data, and may include research that is not yet publicly available. Some grantors, including the National Institutes of Health, have policies prohibiting the use of generative AI tools in analyzing or reviewing grant applications or proposals. Information shared with publicly available generative AI tools may expose sensitive information to unauthorized parties or violate data use agreements.

    Commentary: Concerns about the use of generative AI are most salient in the clinical context, but they arise in any context where sensitive information is included in prompts entered into a generative AI tool. Generative AI tools like ChatGPT can retain user input and use it to train their models. By default, all text you enter into tools like ChatGPT will be retained, used for training, and potentially output to other users in response to their queries.

    Any information that you consider private or sensitive should not be shared with any generative AI tool unless the retention of that data and its use for training have been disabled. If you are interested in using ChatGPT, we strongly recommend that you disable the retention and use of your data for training. Please see this document for instructions on how to prevent ChatGPT from using your data for training. We are not currently aware of a way to disable training in Google Bard.