This post provides sample syllabus language ÎÛÎÛ²ÝÝ®ÊÓÆµ instructors can use or adapt to clearly communicate expectations around student use of AI-based tools, including Microsoft Copilot with Data Protection. These statements reflect Lamar’s institutional policies and commitment to ethical, secure, and pedagogically sound use of generative AI.
Instructor expectations regarding AI use vary widely. Including a clear syllabus statement helps students understand what is and is not permitted in your course.
ÎÛÎÛ²ÝÝ®ÊÓÆµ provides campus-wide access to Microsoft Copilot with Data Protection, a secure generative AI platform that protects user data and complies with FERPA, HIPAA, and other privacy regulations. Our AI guidelines are clear about restrictions and best practices around sharing institutional and protected data with public AI platforms.
AI Use Permitted with Restrictions
In this course, students may use AI-based tools such as Microsoft Copilot with Data Protection for select assignments. Each assignment will specify whether and how AI tools may be used.
All AI-generated content must be properly cited. Misuse or failure to follow assignment-specific guidelines will be considered academic misconduct. Please note: AI-generated content may be inaccurate or biased. Students are responsible for verifying the accuracy of any AI-assisted work. Questions about appropriate use or citation? Contact me directly or visit the LU Writing Center for support.
AI Use Prohibited
Use of generative AI tools (including Microsoft Copilot) is not permitted for any assignments in this course.
All submitted work must be original and completed without AI assistance. Violations will be treated as academic misconduct and referred to the appropriate university office. This policy supports the development of critical thinking, creativity, and independent learning.
AI Use Encouraged with Attribution
Students are encouraged to explore Microsoft Copilot with Data Protection to support their learning and creativity. AI tools may be used for brainstorming, summarizing, or refining work, but must be cited appropriately.
Students are responsible for ensuring the accuracy and ethical use of AI-generated content.
For guidance on ethical AI use, visit the CTLE AI Resources or consult the instructor.
Here are several examples of syllabus policies on the use of AI by higher education faculty, drawn from institutions across the U.S. These reflect a range of approaches, from permissive to restrictive, and offer guidance for faculty crafting their own policies:
Duke University – Flexible, Instructor-Defined Policies
Duke encourages faculty to define their own stance on generative AI use, emphasizing transparency and alignment with academic integrity.
Faculty are encouraged to explain their rationale and support AI literacy by teaching students how to cite AI and understand its limitations.
Vanderbilt University – Spectrum of Policies
Vanderbilt offers a categorized approach to AI syllabus policies, spanning fully permitted to fully prohibited use.
Each policy includes a rationale, such as preserving critical thinking or ensuring originality.
University of Michigan – Contextual and Transparent Use
U-M encourages instructors to tailor AI policies to their course context.
They also recommend including documentation requirements and ethical transparency in syllabi.
SCCC – Three-Tiered Model
SCCC provides three sample syllabus statements, one for each tier of permitted use.
Citation formats (APA, MLA) and student guides are also provided.
Old Dominion University – Guidelines for Faculty
ODU offers comprehensive guidance for faculty integrating AI into teaching.
ODU emphasizes ethical use, data privacy, and training for both faculty and students.
Below are verbatim examples of AI syllabus policies from various higher education institutions, organized by approach and context. These can help faculty tailor their own policies based on pedagogical goals, disciplinary norms, and institutional expectations.
– Generic Course Policy
“This course encourages students to explore the use of generative artificial intelligence (GAI) tools such as ChatGPT for all assignments and assessments. Any such use must be appropriately acknowledged and cited. Students bear final responsibility for the validity of AI-generated content. Violations will be considered academic misconduct.”
Wharton School, University of Pennsylvania
“I expect you to use AI (ChatGPT and image generation tools) in this class. Some assignments will require it. Tutorials are provided. You must refine prompts and verify outputs. Include a paragraph at the end of any assignment explaining how AI was used.”
Vanderbilt University – Peabody College
“You can use generative AI models for any purpose, at no penalty, as long as you recognize the model’s contribution. Failure to acknowledge use will be penalized as plagiarism.”
St. Edward’s University – Doctoral Level
“You may use generative AI to brainstorm ideas. However, submitting AI-generated work as your own is prohibited. Cite AI like any other reference. Include a note explaining where and how AI was used.”
Drexel University
“You are welcome to experiment with generative AI for some assignments. Keep track of how you used AI. Some assignments will be designated ‘human only’ to encourage creative flow without algorithmic interference.”
“Students may use AI tools on some assignments. Instructions will specify when and how. All sources must be cited. Improper use will be considered academic misconduct.”
University of Chicago
“Students are not allowed to use any AI tools in this course. All work must be original and unaided by automated tools. Violations will be treated as academic integrity breaches.”
Salem State University
“All writing assignments must be prepared by the student. AI-generated submissions are not permitted and will be treated as plagiarism.”
Vanderbilt University – History Department
“AI text-generation tools are prohibited for writing assignments. AI may be used for background research, brainstorming, or editing, but not for drafting or paraphrasing assignments.”
From the California Community Colleges Academic Senate (ASCCC):