Artificial Intelligence
American University does not currently regulate how faculty and students may or may not use generative artificial intelligence (AI) tools, including large language models (LLMs) such as Copilot, Perplexity, ChatGPT, Gemini, or Claude. Instead, American University encourages faculty to:
- Abide by the principles of AU’s Interim Guidance on the use of artificial intelligence (AI)
- Consult and heed school-level instructional guidelines and principles
- Instruct students about allowable or prohibited AI use in the syllabus and assignment instructions
- Recognize how AU’s Academic Integrity Code governs allowable AI use by default
This page offers the following growing set of resources for AU faculty:
- Advising Students about AI in your Course: Apprise students of allowable, recommended, and disallowed AI use in your course
- Teaching with AI: Teach with AI, including resources for faculty to develop AI-related proficiencies
- Resources to instill AI Literacy: Instill AI literacy in your students, including resources and instructional materials to do so
- Addressing AI Policy Violations: Gain familiarity with how AI-use relates to AU’s Academic Integrity Code
Upcoming AI Events at AU:
11/6/2025: Kogod AI Teaching Conference
(9 AM – 4 PM | MGSC 305/306)
11/10/2025: We Can’t Ban AI, but We Can Friction Fix It: A Workshop on Generative AI in the Classroom
(2:30 – 4:30 PM | MGSC 128)
11/12/2025: Building Communities of Practice around Artificial Intelligence
(3:00 – 4:15 PM | Butler Boardroom)
AI in Teaching Consultations:
If you have questions about anything related to AI in teaching, course design, or as it pertains to academic integrity, you can schedule a consultation with CFE’s Faculty Associate Sonja Walti.
Advising Students about AI in your Course
The course syllabus should lay out an overall AI-use policy for your course. Just as you may allow or disallow the use of devices in class, apprise your students of AU’s academic integrity code, or share your teaching philosophy and expectations, draft a statement that informs your students about allowable, encouraged, or expected AI use. The following examples range from restrictive to more permissive formulations:
- AI is not allowed: “By default, the use of generative AI tools such as ChatGPT, Copilot, Midjourney, and DALL-E, as well as AI features built into tools like Grammarly or Google, is prohibited in this course unless and to the extent expressly permitted by assignment instructions. If in doubt, ask for clarification!”
- AI is conditionally allowed: “This course discourages but allows the use of AI. When using AI tools on assignments, add an appendix documenting (a) key passages of your AI exchanges, (b) which AI tools were used, and (c) how AI tools were used (e.g., to generate ideas, turns of phrase, elements of text, etc.). AI tools should be used critically, with an aim to deepen subject matter understanding. Use the ‘study mode’ setting whenever possible.”
- AI is encouraged for learning: “In this course, you are expected to use AI (e.g., ChatGPT and image generation tools), and some assignments will require you to use AI tools. Learning to use AI is an emerging skill, the acquisition of which this course supports through examples and tutorials. Use the ‘study mode’ setting when relevant.”
Such statements carry more weight when you explain why the stated approach is beneficial and how it connects with the learning objectives. Similarly, you may wish to ask your students to explain why they opted to rely on AI.
In addition, AI-use statements built into assessment requirements – alongside those pertaining to word count, page numbers, citation style, etc. – ensure students know how to use, or not use, AI tools when completing particular assignments.
In this slide-deck, our Office of Academic Integrity’s Alison Thomas and Kelly Joyner offer an expanded “red-light/green-light” approach to allowing or disallowing AI-use with an eye toward shaping learning outcomes focused on dispositions.
In this handout (p.4), AU’s Office of Academic Integrity offers a “syllabus statement checklist” for faculty to think through aspects of permissible AI use in their course with the learning objectives in mind.
Teaching with AI
When considering how AI relates to our role as faculty, questions about teaching with AI often come up first, ranging from how we do or do not allow students to use AI to how we permit ourselves to seek AI assistance in developing lesson plans or reviewing student work. Faculty are encouraged to consider the following questions when integrating AI into their teaching:
- What is my teaching philosophy, and how does the advent of AI tools intersect with it?
- What is my proficiency with AI, and how can I leverage, build on, or compensate for that proficiency?
- How does my academic field see the opportunities and limitations of generative AI?
- How may my students’ professional experience be shaped by AI? What type of AI use may employers ask of my students?
- How comfortable and open are my students to engage with AI? If they are not, can I foster their curiosity about it and/or offer alternative routes to complete assignments that may require AI use?
- Are there school-level, university-level, or other (e.g. copyright) policies that may restrict what I can ask of my students?
Many of these questions have multifaceted and evolving answers and are best explored with an open mind within a community of learners.
Beyond the day-to-day engagement that teaching with AI entails, we can also develop approaches to teaching about AI by adding AI as a subject matter to our instruction and learning objectives. This can be accomplished through added course materials, by engaging with our students on AI-uses in our respective fields, and by discussing implications, such as the potential for innovation, challenges around evidence and validity, the use of energy, questions of copyright, etc.
Depending on our field of study and courses we teach, we may consider teaching AI by making our students’ hands-on proficiency with AI tools a learning objective. Much as we ask our students to complete activities using specific media or applications, we can impart AI-related skills such as prompt engineering or AI-assisted image generation and visualization.
Library-provided resources, such as journal articles, e-books, videos, data, and primary sources, may not be uploaded into AI tools if they are AU-licensed, per our Library’s FAQs. This restriction does not apply to resources designated as Open Access (OA).
Additional AI-related teaching resources are available from our Office of Academic Integrity’s password-protected SharePoint site.
Resources to instill AI Literacy
The AU library offers a wealth of resources to build AI literacy and AI-related skills:
- This Instruction Toolkit offers ready-made presentations and associated exercises to provide an AI overview, discuss AI knowledge creation, and engage students around AI-related ethical considerations.
- The AU Library subscribes to Canvas-compatible Credo Information Literacy Modules, which consist of a set of videos, tutorials, and quizzes, including modules on AI literacy. These materials can remedy gaps, accompany visits to the library, or be scaffolded throughout a course.
- This Artificial Intelligence and Libraries page documents the AU Library’s engagement with AI, including experiments it has conducted to assess AI’s ability to support work-related activities.
- The Generative Art Project is a showcase of the AU Library’s experimentation with developing engaging visuals to summarize themes of the day using AI tools.
Addressing AI Policy Violations
Judicious engagement with and teaching about AI, combined with clearly laid-out policies and thoughtfully (re)designed assessments and rubrics, can go a long way toward heading off AI policy violations before they occur. The following measures can help prevent them:
- Review and discuss your AI policies repeatedly, including why you have adopted them
- Revise assignments to reward and require documenting processes
- Judiciously explore alternative (in-person, closed-book, oral) modalities for critical proficiency and knowledge assessments
- Rethink the necessity for immutable deadlines and consider revise-and-resubmit policies
- Model and demonstrate expected use of appropriate source materials
- Gain familiarity with AI-generated output in response to your assignment instructions
- Expect higher quality work and guide your students toward added analytic depth
AU’s Office of Academic Integrity governs how suspected AI policy violations can be addressed. The difficulty of proving AI misuse beyond a reasonable doubt cannot be overstated; it is why AU’s agreement with Turnitin, whose plagiarism-detection software can be enabled in Canvas, does not include AI-detection tools.
Note that AU’s emphasis on due process when AI (and other academic integrity) violations are suspected precludes faculty from unilaterally sanctioning suspected violations. Faculty may (a) bring a suspected violation to the Office of Academic Integrity’s attention, (b) assess the submitted work on its merits, or (c) offer a learning opportunity by allowing the student to redo and resubmit their work without penalty. If in doubt, the Office of Academic Integrity can advise faculty about these options.
Regardless of one’s AI policies and any limitations on sanctioning suspected misuse, AU’s Academic Integrity Code does not permit the fabrication of data and source materials, which commonly occurs with the superficial misuse of AI tools.
Faculty needing to discuss or report an academic integrity concern should contact academicintegrity@american.edu.
