Lamar University

Guidance: Syllabus Policies on AI Usage

Sample Syllabus Statements Regarding Student Use of Artificial Intelligence

This post provides sample syllabus language Lamar University instructors can use or adapt to clearly communicate expectations around student use of AI-based tools, including Microsoft Copilot with Data Protection. These statements reflect Lamar’s institutional policies and commitment to ethical, secure, and pedagogically sound use of generative AI.

Why Include AI Guidance in Your Syllabus?

Instructor expectations regarding AI use vary widely. Including a clear statement helps students understand:

  • Permitted uses of AI tools in your course.
  • Rationale for your policy.
  • Consequences for misuse.
  • How to ask questions about AI-related expectations.

Lamar University's Supported AI Tool

Lamar University provides campus-wide access to Microsoft Copilot with Data Protection, a secure generative AI platform that protects user data and complies with FERPA, HIPAA, and other privacy regulations. Our AI guidelines are clear about restrictions and best practices around sharing institutional and protected data with public AI platforms.

Sample Syllabus Statements

AI Use Permitted with Restrictions

In this course, students may use AI-based tools such as Microsoft Copilot with Data Protection for select assignments. Each assignment will specify whether and how AI tools may be used.

All AI-generated content must be properly cited. Misuse or failure to follow assignment-specific guidelines will be considered academic misconduct. Please note: AI-generated content may be inaccurate or biased. Students are responsible for verifying the accuracy of any AI-assisted work. Questions about appropriate use or citation? Contact me directly or visit the LU Writing Center for support.

AI Use Prohibited

Use of generative AI tools (including Microsoft Copilot) is not permitted for any assignments in this course.

All submitted work must be original and completed without AI assistance. Violations will be treated as academic misconduct and referred to the appropriate university office. This policy supports the development of critical thinking, creativity, and independent learning.

AI Use Encouraged with Attribution

Students are encouraged to explore Microsoft Copilot with Data Protection to support their learning and creativity. AI tools may be used for brainstorming, summarizing, or refining work, but must be cited appropriately.

Students are responsible for ensuring the accuracy and ethical use of AI-generated content.

For guidance on ethical AI use, visit the CTLE AI Resources or consult the instructor.

Examples of a Range of Syllabus Policies on AI Use

Here are several examples of syllabus policies on the use of AI by higher education faculty, drawn from institutions across the U.S. These reflect a range of approaches—from permissive to restrictive—and offer guidance for faculty crafting their own policies:

Duke University – Flexible, Instructor-Defined Policies

Duke encourages faculty to define their own stance on generative AI use, emphasizing transparency and alignment with academic integrity. Example policy types include:

  • Prohibited Use: “Students are not allowed to use AI tools like ChatGPT or DALL-E 2 on assignments. All work must be completed without substantive assistance from automated tools.”
  • Use with Permission: “AI tools may be used only with prior instructor approval.”
  • Use with Acknowledgment: “AI use is allowed if properly documented and cited. For example, include a citation like: ‘ChatGPT. (2025, Aug 12). Prompt: [your prompt]. Generated using OpenAI.’” 

Faculty are encouraged to explain their rationale and support AI literacy by teaching students how to cite AI and understand its limitations. 

Vanderbilt University – Spectrum of Policies

Vanderbilt offers a categorized approach to AI syllabus policies:

  • Permissive: “Students may use generative AI tools for any purpose, provided they acknowledge the model’s contribution. Failure to do so is considered plagiarism.”
  • Moderately Restrictive: “AI may be used for brainstorming or editing, but not for generating final text. All papers will be checked for AI-generated content.”
  • Completely Restricted: “AI-generated text is prohibited in all assignments. Use of such tools will be treated as a violation of the Honor Code.”

Each policy includes a rationale, such as preserving critical thinking or ensuring originality.

University of Michigan – Contextual and Transparent Use

U-M encourages instructors to tailor AI policies to their course context. Their guidance includes:

  • Encouraged Use: “GenAI tools may be used for brainstorming, editing, or outlining. Students must document their use.”
  • Conditional Use: “AI use is allowed if students distinguish between their own work and AI output.”
  • Prohibited Use: “Any use of GenAI constitutes academic misconduct.”

They also recommend including documentation requirements and ethical transparency in syllabi. 

SCCC – Three-Tiered Model

SCCC provides three sample syllabus statements:

  • Allowed Use: “Students are encouraged to use generative AI tools for assignments, but must cite and validate all AI-generated content.”
  • Conditional Use: “AI use is allowed only when specified by the instructor. Unauthorized use is a violation of the Student Code of Conduct.”
  • No Use: “Generative AI tools are not permitted in any coursework. Use will be treated as academic misconduct.” 

Citation formats (APA, MLA) and student guides are also provided.

ODU – Guidelines for Faculty

ODU offers comprehensive guidance for faculty integrating AI into teaching:

  • Permissive Policy: “Students may use GenAI tools for all assignments, with proper citation.”
  • Hybrid Policy: “AI use is allowed only for specific assignments. Default is disallowed unless stated.”
  • Restrictive Policy: “AI tools are prohibited at all stages of assignment creation.”

ODU emphasizes ethical use, data privacy, and training for both faculty and students.

Here are examples of AI syllabus policies from various higher education institutions, organized by approach and context. These can help faculty tailor their own policies based on pedagogical goals, disciplinary norms, and institutional expectations.

Permissive Policies (Encouraging AI Use) 

– Generic Course Policy

“This course encourages students to explore the use of generative artificial intelligence (GAI) tools such as ChatGPT for all assignments and assessments. Any such use must be appropriately acknowledged and cited. Students bear final responsibility for the validity of AI-generated content. Violations will be considered academic misconduct.” 

Wharton School, University of Pennsylvania

“I expect you to use AI (ChatGPT and image generation tools) in this class. Some assignments will require it. Tutorials are provided. You must refine prompts and verify outputs. Include a paragraph at the end of any assignment explaining how AI was used.”

Vanderbilt University – Peabody College

“You can use generative AI models for any purpose, at no penalty, as long as you recognize the model’s contribution. Failure to acknowledge use will be penalized as plagiarism.”

Conditional or Moderately Restrictive Policies

St. Edward’s University – Doctoral Level

“You may use generative AI to brainstorm ideas. However, submitting AI-generated work as your own is prohibited. Cite AI like any other reference. Include a note explaining where and how AI was used.”

Drexel University

“You are welcome to experiment with generative AI for some assignments. Keep track of how you used AI. Some assignments will be designated ‘human only’ to encourage creative flow without algorithmic interference.”

“Students may use AI tools on some assignments. Instructions will specify when and how. All sources must be cited. Improper use will be considered academic misconduct.”

Restrictive or Prohibited Policies

University of Chicago

“Students are not allowed to use any AI tools in this course. All work must be original and unaided by automated tools. Violations will be treated as academic integrity breaches.”

Salem State University

“All writing assignments must be prepared by the student. AI-generated submissions are not permitted and will be treated as plagiarism.”

Vanderbilt University – History Department

“AI text-generation tools are prohibited for writing assignments. AI may be used for background research, brainstorming, or editing, but not for drafting or paraphrasing assignments.” 

Guiding Principles for Policy Creation

From the California Community Colleges Academic Senate (ASCCC):

  • Ethical Use: Promote transparency, equity, and accountability.
  • Legal Compliance: Adhere to FERPA and data protection laws.
  • Clear Communication: Make expectations explicit in syllabi.
  • Professional Learning: Support faculty and students in understanding AI tools.
  • Student Equity: Avoid exacerbating access gaps due to paywalls or tool limitations.