The recent growth in generative artificial intelligence (AI) tools has quickly changed the education landscape. Generative AI is a type of artificial intelligence technology that can generate new content (such as text, images, audio, or computer code) based on patterns and information gathered from sets of sample materials. Some instructors are choosing to bring these new tools more explicitly into their teaching and course assignments, while others are aiming to limit the use of AI tools that are incompatible with course goals.
Below are some suggestions from the TLHub to help you stay at the forefront of these tools and their impact on your course. Please note that these are emerging tools, and we will continue to provide new and updated resources as we develop our guidance on the intersection of teaching and AI.
- Establish and communicate your course policy on AI, and explain what unpermitted aid for assignments means in your course. State your policy in your course syllabus and the Course Policies and Norms form, and then follow up in class with your students. If AI tools are allowed in your course, provide students with recommendations about how to use these tools most effectively, or invite students to generate their own. If AI tools are not allowed, or are allowed only in limited circumstances, explain why.
- Get to know generative AI tools and how they might emerge in your course context. Consider running a few of your course assignments through one or several AI tools. Discuss your findings with your students and explain how the AI tool’s output compares to your expectations for their work.
- Lead a class discussion about generative AI tools in your area of expertise, such as emerging areas of AI use or creative applications of AI tools in business settings.
- Review your course’s assignments and assessments in light of generative AI tools and make adjustments. If students could use AI tools productively to help them complete your coursework, consider more explicitly inviting and guiding students’ use of these tools. If students could use AI tools in unpermitted ways, consider adjusting your coursework according to design practices that help improve learning and have the added benefit of incentivizing academic honesty. This could include connecting assignments to in-class discussions, breaking assignments into parts, and incorporating lower-stakes practice.
Not sure where to start? See our tips for Starting Small with AI in the Classroom.
A Note on Information Security
In the new resource on Responsible AI at Stanford, UIT has advised the Stanford community to avoid inputting information that should not be made public when using a generative AI tool. This includes materials containing students’ personal information, proprietary or copyrighted materials (which may include case studies, course assignments, data sets, and more), and any information that may be classified as moderate or high risk according to Stanford UIT’s risk classifications. Information you enter into a generative AI tool may be shared with third parties, and the tool may use your prompts or questions to inform content generated for other users.
Stanford does not yet support any generative AI tools for use. If you encourage students to use such tools for work in your course, make sure to offer assignments that students can complete effectively if they are unable or choose not to use AI tools. This is especially important for tools that require payment, tools that require personal information to create an account, and assignments that ask students to discuss personal information.
For more information on safety measures and risk factors related to generative AI tools, visit Stanford UIT’s guide to Responsible AI at Stanford.
FAQs about Generative AI Tools in the Classroom
Getting to Know AI
Generative AI is a type of artificial intelligence technology that can generate new content based on patterns and information gathered from sets of sample materials. Here’s what else to know about this technology:
- The most well-known generative AI tools are ‘chatbots’ (e.g., ChatGPT, Bing Chat, Google Bard, or Claude). These are AI-based text generation tools that users interact with using a chat interface. Users type a question or prompt, and the chatbot responds with a cohesive and creative written answer. The chatbot can then refine or adjust its responses based on how the user continues to interact with the tool, such as by continuing to pose questions or asking for changes to the chatbot’s output.
- Other generative AI tools produce many different outputs, such as images (e.g., DALL-E or Midjourney), code (e.g., GitHub Copilot), data analysis or visualization (e.g., ChatGPT Code Interpreter), or internet search results (e.g., Perplexity).
- AI tools use ‘training data’ (sample materials or data) to inform the content they generate. The quality of AI-generated content can vary widely, in part based on the quality or focus of the training data. For example, a tool trained on large amounts of internet content may be able to respond to a prompt in many different and creative ways, but the tool’s output may also reflect the varying accuracy or biases embedded within the sample internet content. In addition, AI tools sometimes prioritize completing or responding to a user’s prompt over accuracy of information, and can sometimes make up information entirely (known as ‘hallucinating’).
- For definitions of common AI terms and additional explanations of how AI chatbots work, see Stanford Teaching Commons’ resource on defining AI and chatbots.
Looking for more? For a step-by-step guide to generative AI tools in the higher education context, see Stanford Teaching Commons’ Artificial Intelligence Teaching Guide.
Test out your course assignments using one or several AI tools. Discuss your findings with your students and explain how the AI tool’s output compares to your expectations for their work. See tips and suggestions for testing your course assignments with AI tools.
Note: When testing assignments with AI tools, be sure to choose an assignment prompt that does not contain proprietary, copyrighted, or other sensitive information.
Learn more about AI tools’ strengths and limitations with five fun exercises. For the best sense of an AI tool’s scope and limitations, we recommend choosing creative topics and/or topics in which you are an expert (whether business, teaching, or something else). Select your chatbot tool of choice (e.g., ChatGPT, Bing Chat, Google Bard, Claude) and try one or more of the following:
- Stump the AI. Select a topic that you are an expert in and prompt the chatbot for increasingly specific or obscure information.
- Brainstorm creative writing. Ask the chatbot to help you develop a fictional short story, and explore different elements of the story through conversation with the chatbot.
- Practice a foreign language. Prompt the chatbot to act as your supportive language teacher in a language of your choice, and ask it to help you practice vocabulary for travel in a relevant country.
- Develop a project plan. Ask the chatbot to develop a project plan for an upcoming project or an itinerary for a trip, and then ask for further details, reorganizations, or sample products.
- Try something fun. Create a recipe for a new fusion dish. Plan a themed surprise party for a special guest of honor. Play a guessing game about animals, movie stars, or another topic. Provide song recommendations based on your creative descriptions.
These and additional sample prompts are provided in Stanford Teaching Commons’ resource on exploring the pedagogical uses of AI chatbots.
Crafting Course Policies on the Use of Generative AI Tools
Guidance from the Board on Conduct Affairs states that Stanford instructors may set course policies regarding generative AI use as they choose. The guidance recommends that such policies be stated in the course syllabus and communicated clearly to students.
According to the Honor Code, students commit to not receiving “unpermitted aid” for assignments or examinations. If you do not specify that students may use generative AI tools, then their use would violate the Honor Code. But because these are emerging tools, it’s important to clarify explicitly with your students whether or not generative AI tools are permitted.
For syllabus statement templates and other tips on setting your course approach to AI use, see our resource on Course Policies on Generative AI Use.
If you are starting the process of developing your own course policy on AI use, here are some general suggestions to consider:
Consider your learning goals. First, determine whether students’ use of generative AI tools would support the learning goals of your course assignments. This may help you decide when to permit students to use the tools. For example, you may choose to allow AI tools during the brainstorming phase but not the drafting or editing process of a writing assignment. Or, you may choose to allow students to use AI tools to help them understand challenging readings but not in assignments that apply those concepts to new cases.
Write your policy. Whatever your approach to the use of AI tools, be as specific as possible in addressing questions such as:
- In what contexts are generative AI tools permitted?
- What is the rationale or reasoning behind this policy?
- What are the consequences for students of not following the course policy?
Consider documentation and alternatives, if you allow the use of AI tools. For example:
- You may want to request that students document their use of AI tools for course assignments. This could include citing text, images, or data visualizations produced by AI tools, or it could include providing a copy of the chatbot conversation that helped students produce assignment material. See how to cite generative AI in MLA style, APA style, and Chicago style.
- Stanford does not yet support any generative AI tools for use. Make sure to offer assignments that all students can complete effectively if they are unable or choose not to use AI tools. This is especially important for tools that require payment, tools that require personal information to create an account, and assignments that ask students to discuss personal information.
Share your course policy. State your policy in your course syllabus and the Course Policies and Norms form (if desired), and then follow up in class with your students.
Using Generative AI Tools in the Classroom
Find suggestions for reframing existing coursework, incorporating short in-class activities, and more in our resource on Starting Small with AI in the Classroom.
Educators have quickly discovered many ways that students can use AI tools to improve their learning. Some of these include using AI chatbots to:
- Explain or paraphrase complex information or challenging text passages in simple terms
- Brainstorm initial ideas for an essay or project
- Generate additional examples of a problem, concept, or scenario
- Provide feedback on writing or an argument
Some AI tools have additional functionality (often limited to paid accounts) that allows the tool to:
- Summarize long or difficult text, including research articles or data sets
- Find relevant articles for a research project
- Represent and visualize data in different ways
- Generate quantitative practice problems and provide possible solutions
For more information and example educational use cases of generative AI tools, see Stanford Teaching Commons’ resource on exploring the pedagogical uses of AI chatbots.
Regardless of the tasks students use AI tools for, it’s important to give them guidelines for using the tools effectively and ethically. If you would like to encourage AI tool use, refer students to your AI use policy so they know how and when to report their use, and offer suggestions for using AI chatbots more effectively. See the FAQs below for what to consider sharing or discussing with your students.
If you would like to encourage students’ use of AI tools, especially chatbots, here are some general tips that you might adapt to share with students:
Quick tips for chatbot prompting. When entering a prompt, consider instructing the chatbot to…
- “Act as if you…[are an expert on X.]”
- “Tell me what else you need to do this.” – or – “Ask me questions until you have enough information to do this.”
- “Provide me with [a specific output]…” This can include details such as form, tone, etc.
- Complete a task step-by-step, either by prompting it one part at a time, or by asking it to “complete the first step, then stop and ask if I need more information.”
- Look up and cite relevant research to answer the prompt, if using a tool that can search the live internet. Then, double-check each citation yourself.
- Change its approach if you aren’t satisfied with or sure about an answer. Ask the bot for clarification or for a different example. Tell the bot that it is incorrect and ask it to try again. If the chatbot isn’t providing what you’re looking for, rephrase your prompt or restart the chat; more specific questions and more context about how you’d like the chatbot to respond can often help. When in doubt, experiment!
Quick tips for productive and responsible AI tool use. Here are some tips for using generative AI productively and responsibly when…
- Gathering or reviewing information and ideas. Information from a chatbot can be helpful for reviewing the basics, hearing information in a new way, summarizing or filtering large amounts of information, or brainstorming ideas. But you should always verify the information and data that you find there. Use a chatbot interaction to find starting points, then follow up in more reliable sources (reputable journal or news articles, vetted data sets, experts, course materials, etc.). To avoid plagiarizing, find out which original authors an AI tool may be drawing from.
- Writing. Use chatbots in ways that you might get help from a writing tutor or a copyeditor: to review your grammar and spelling, to find clearer ways of phrasing sentences, or to help outline and arrange ideas. But make sure that you…
- Develop a clear idea of your goals and reasoning in the assignment before using the AI to help you write. In this academic setting, ideas and analyses that you present in your work must be fundamentally your own.
- Get help periodically from a writing tutor, your colleagues, and/or your instructors. They can help refine and develop your writing in creative ways, and they may be able to give you more context-specific advice (discipline, job role, course assignment, etc.) than an AI tool would.
- Coding. Chatbots and other AI coding tools can write sample code, provide suggestions or instructions for writing code, and check code for errors. Use (or limit your use of) such tools in the same way as you might a human collaborator. In some courses, it’s important that you are able to develop certain code without help, or that you are able to generate code yourself but with assistance from tools or collaborators that can help you correct errors. In other courses where learning how to code or complete certain automated processes isn’t part of the course goals, you might be able to use code generation or assistance tools freely.
- Generating charts, visuals, or audio. Chatbots and other AI tools can generate charts or visualizations based on data sets, images based on prompts, and human-like audio. If one of the goals of the course is to learn to generate such data visualizations, images, or audio yourself, avoid using AI tools or check with your instructor about ways you might be able to use AI tools while still learning key skills. In general, make sure you know how the AI tool is working to produce content (e.g., Are you able to spot-check data visualizations to make sure they’re compiling and analyzing data correctly? Can you verify that a tool isn’t using copyrighted material to produce images and/or audio? What biases in source information, such as racial or gender stereotypes, might be showing up in AI-generated content?).
Quick tips for careful AI tool use. When using an AI chatbot (or another type of generative AI tool)…
- Avoid inputting information that you would not want to be made public. This includes personal or confidential information of your own or that others share with you, as well as proprietary or copyrighted materials (which may include case studies, data sets, assignment prompts, or other materials) that may be included in your coursework. Before making an account for an AI tool, look up the tool’s privacy statement and make sure you agree with how the tool’s creator might use your data. Be sure to review and follow the guidelines provided in Stanford UIT’s resource on Responsible AI at Stanford.
- Be prepared to fact-check and critically evaluate all AI-generated information. Most AI chatbots aren’t designed to write sentences that are true; they are designed to write sentences that are plausible. Many AI tools get their training sets and information from the internet and can’t make judgments about the information they draw on. Generative AI tools can provide false information (called ‘hallucinations’), perpetuate biases and stereotypes, or draw on copyrighted information without proper attribution, and such problematic information is often presented very convincingly.
- Follow your instructor’s policy on how to report AI use or cite AI-generated content.
Generative AI tools are new for students, too. If you would like to encourage students to use AI tools, here are some ideas you may want to consider:
- Most AI tools available today were not designed for educational purposes. It may help to provide students with recommendations for when and how they might benefit from using the tools. Students may also need help learning how to prompt an AI chatbot effectively to get useful responses.
- Students have differing levels of experience with generative AI tools. You might consider leading an open discussion about how students could imagine using AI tools, asking students to use AI tools during an early group assignment, or having students contribute successful chatbot prompts to a course repository, so that they can learn from one another.
- If you welcome students to use AI tools for an assignment, provide specific instructions and, where possible, sample prompts for students to use. The more students are guided in their use of AI, the more likely they are to use the tools to support rather than shortcut their learning.
- Students should learn about the limitations of generative AI tools. For example, AI tools sometimes produce false information (often called ‘hallucinations’), and their outputs can be biased (e.g., more often representing a doctor as male and a nurse as female). Students may need support and practice in learning how to fact-check and critically evaluate AI-generated information. The Stanford Teaching Commons’ resource on exploring the pedagogical uses of AI chatbots offers a primer on the potential risks of using AI.
- Stanford does not yet support any generative AI tools for use. If you encourage students to use such tools for work in your course, make sure to offer assignments that students can complete effectively if they are unable or choose not to use AI tools. This is especially important for tools that require payment, tools that require personal information to create an account, and assignments that ask students to discuss personal information.
See the adoption process for new technologies if you would like to incorporate these tools into your course.
Consider having students research emerging areas of AI use or brainstorm creative applications of AI tools in your field. This will allow students to explore, gain experience with, and discuss the pros and cons of such applications.
See Stanford Business Insights’ Technology and AI topic for articles that highlight Stanford GSB research on AI and other cutting-edge technologies.
Other business journals’ collections of AI or technology-focused articles include The Wall Street Journal’s technology section, the Harvard Business Review’s AI and machine learning topic, and MIT Sloan Management Review’s data, AI, and machine learning topic. These collections can provide up-to-date news examining AI business tools’ promising future and current limitations.
Learn more about a “human-centered approach” to AI, meant to spur innovation and economic benefits, and the cutting-edge research being done at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). HAI also offers executive education programs to explore AI technologies and their business implications.
Reducing Unpermitted Use of AI Tools
We strongly encourage you to test out generative AI tools, and consider running a few of your course assignments through one or several tools. This will provide the clearest sense of how an AI-generated output may compare to high-quality student work in your course. Note that such tools do not generally produce the same output each time a question is posed.
Note: When testing assignments with AI tools, be sure to choose an assignment prompt that does not contain proprietary, copyrighted, or other sensitive information.
Some common strategies for detecting plagiarism may be useful for identifying unpermitted use of AI tools. These include looking for generic or repetitive language, large shifts in writing voice or style, and improper or missing citations, as well as noting when a student can’t explain how they arrived at an answer or made choices in producing an assignment.
AI detection software is available, but has limitations. See below for more.
Yes. Under the Honor Code, faculty may use AI detection software if they provide students with “clear, advance notice,” as described in the Office of Community Standards’ Tips for Faculty & TAs.
However, Stanford does not currently have approved general licenses for plagiarism monitoring tools (a common example is Turnitin). If faculty choose to use such software, we recommend a high degree of caution: AI detector tools vary widely in their accuracy and can produce both false negatives and false positives. Users can also bypass detection by revising AI-generated text and running their work through detector tools themselves to check the likelihood of detection.
Note: Each AI detection tool differs in its approach to privacy and how inputted material may be shared with third parties. If you choose to use an AI detection tool, avoid inputting information that should not be made public, according to UIT’s guidelines for Responsible AI at Stanford. This includes personal, sensitive, confidential, or proprietary information that may be contained within students’ coursework, including student names.
In addition to stating your course policy clearly, there are some methods you can use to discourage students from turning to unpermitted aid to complete their assignments. Lowering the stakes for assignments, connecting coursework to students’ intrinsic motivation, and increasing student support systems can help address the underlying reasons students sometimes submit dishonest work.
See Emphasizing Student Learning Over Tools for small-scale tips for promoting students’ learning regardless of whether they use AI tools. See Strategies for Promoting Deep Student Engagement with Assignments below for additional suggestions on how to adjust assignments or course structures to ensure students are learning deeply and to promote adherence to your course policy.
For a guide to getting to know generative AI tools in contexts related to higher education classrooms, see Stanford Teaching Commons’ Artificial Intelligence Teaching Guide.
If you have questions about the impact of generative AI tools on your course and assignments, reach out to discuss your concerns or specific use case with us. We can offer one-on-one consultations, answers to questions via email, and even a short presentation with Q&A at your next faculty meeting.
Strategies for Promoting Deep Student Engagement with Assignments
Regardless of your course policy on AI use, it may be helpful to review your assignments and assessments and consider how to enhance students’ learning. Below, we highlight some design practices that help improve learning and encourage deep student engagement, and that have the added benefit of incentivizing academic honesty. Consider which suggestions may be most relevant for your course.
- Make sure students know how and when to seek help on written assignments and for exam prep.
- Remind students of AI use policies and permitted aid or collaboration when assignments are given, not just at the start of the quarter.
Adjusting Assignment or Assessment Structure
- Incorporate lower-stakes writing assignments as practice before assigning a higher-stakes assignment or exam.
- Break a larger assignment into parts (e.g., topic proposal, outline, rough draft, final draft).
- Provide feedback that students are expected to incorporate into revisions, or, in large classes where it isn’t possible to provide everyone with detailed feedback, open dedicated office hours to students seeking feedback.
- Incorporate in-class brainstorming sessions for students to generate ideas for paper topics or arrange peer review feedback sessions on preliminary drafts.
- Provide work samples and ask students to identify, explain, and then correct flaws or issues in the samples as part of their assignments.
- Convert take-home assignments that may tempt students to use unpermitted aid into in-class activities. For example, instead of assigning an essay on a general topic for homework, have students complete a five- to ten-minute freewrite at the end of class to demonstrate their understanding of course materials, or have students work in small groups to paraphrase the main takeaways of the class session and develop predictions for how those ideas will apply to upcoming course material.
Shaping Meaningful Assignment Prompts
- Allow students to shape assignments according to personal interests. You may consider asking students to apply course concepts to an issue of their choice that they are passionate about, or offering a selection of prompts for students to choose from.
- Adjust assignments to enhance real-world applications and reflect as much as possible the work students may do after leaving the GSB.
- Include assignments that are deeply context-based, such as assignments situated in personal or work experiences, group work, experiential learning tasks, in-class discussions, etc.
Stanford Community Resources
- Starting Small with AI in the Classroom, GSB Teaching and Learning Hub
- Course Policies on Generative AI Use, GSB Teaching and Learning Hub
- Responsible AI at Stanford, Stanford UIT
- Artificial Intelligence Teaching Guide, Stanford Teaching Commons
- Artificial Intelligence teaching resources topic, Stanford Teaching Commons
- Generative AI: Perspectives from Stanford HAI, Stanford Institute for Human-Centered Artificial Intelligence (HAI)
- “AI Has Entered the Chat — a ‘Conversation’ with ChatGPT,” Quick Thinks podcast by Lecturer in Organizational Behavior Matt Abrahams
- “I’ll bet you didn’t write that — a robot did,” Op Ed for the Los Angeles Times by Lecturer in Management Glenn Kramon
Resources from Outside Institutions
- Harvard Business Publishing’s AI-focused curated collections of readings, case studies, and tutorials
- Wharton Interactive’s 5-part video series on Practical AI for Instructors and Students (length: 1 hour total)
- Elevate Your Case Prep with ChatGPT (article) or How ChatGPT and Other AI Tools Can Maximize the Learning Potential of Your Case-Based Classes (webinar), from Harvard Business School’s Professor of Management Practice Mitchell Weiss
This article draws from the Artificial Intelligence Teaching Guide, Stanford Teaching Commons.