AI Policy and Rationale
With the advent of disruptive generative AI technologies, schools and universities are struggling to adapt and to determine the best policies on AI use. Our class is no exception. There are several arguments in favor of using AI tools in course work:
- AI tools promise to improve productivity
- Employers will expect you to use AI tools, so students should learn how to use them
- Tools may help with debugging, documenting, or improving code
- Tools can help with understanding documentation and with finding what one needs to read up on
At the same time, there are some arguments against their use:
- AI tools produce output (both code and text) that looks right but may not be
- Tools are useful and effective once you are proficient at a task; when you are learning something new, they hinder the learning that comes from actually thinking and doing
- AI tools steal your work and ideas
- Debugging AI-produced code (that you don’t fully internalize) is slow and painful
With these ideas in mind, the policy for this course is summarized below.
- We strongly discourage the use of generative AI models for coursework, for pedagogical reasons. For most of you, this is the first course that requires a significant amount of code to be written or that gives you the freedom to design. Generative AI tools may rob you of the opportunity to learn and become proficient in code design, thinking through corner cases, and so on.
  This is akin to using calculators, which undoubtedly improve productivity. However, if you are just learning arithmetic, premature use of a calculator will prevent you from learning and becoming proficient. Common pitfalls of calculator use (e.g., hitting the wrong button or transposing two digits) will produce wrong results. A novice user is likely to be completely oblivious to the mistake, whereas an experienced expert is likely to spot the error (that doesn’t look right!). Expertise in both the task and the tool is required before the use of a productivity tool is effective or recommended. The same holds for AI tools.
  On the other hand, the class is very diverse, and some of you may have years of industry experience designing and building software systems; for those already proficient in these areas, using AI tools for such tasks may be a new learning experience. We leave it up to you to judge whether this applies to you.
- Some uses of AI tools are generally considered OK:
  - AI-based search (without generation of code/snippets) to help discover new things to learn about is acceptable (e.g., discovering the existence of a library that does something you need, or finding a use case you had never heard of before).
  - Use of generative AI to help translate documentation or text is generally OK. We understand that English is a second language for many of you and that you may read, think, understand, and write much faster in another language. Using AI tools to help translate to/from English is acceptable. However, please note that you remain responsible for any misunderstandings or incorrect claims/behavior caused by errors in translation.
- If you do make use of generative AI tools, please follow these guidelines:
  - We expect any course work submitted to be your own work. As with any other academic work, if you make use of others’ work (e.g., quote from a paper, reuse a graph from a website) or are inspired by or heavily borrow from others’ work (even with modifications), you must cite the sources appropriately. We expect the same for AI-generated content: you must clearly cite all content generated by AI and the tool(s) used.
  - Almost all generative AI tools steal ideas and content from their users (e.g., using your prompts and generated outputs as data for training or improving the service, or logging “snippets” for quality assurance).
    This is a major concern. Suppose you spent two days with an AI tool, iterating and refining, until it generated a good piece of code that covers all of the corner cases. The AI may “learn” from this, and some time later, if anyone prompts it with something vaguely similar, it may quickly give them, or direct them toward, your solution! This is like publishing your solution on the Internet, but worse: on the Internet, at least, there is a clear means to request that content be removed, and there is some way for the other person to cite your hard work. Neither is possible when an AI tool steals your ideas.
    To alleviate this concern, only tools that have been vetted by the University and are contractually obligated to protect your data are allowed. Currently, this is limited to the Google Gemini Web App and Microsoft Copilot Chat. Both provide protection only through their web portals and only when you log in with your Andrew ID. No other tools or variants (or these tools when you are not logged in with CMU credentials) provide data-protection guarantees. Please see CMU’s Safe AI Use page for more details.
    In particular, phone apps and AI plugins/extensions for VS Code and other IDEs are not allowed. Though IDE plugins are convenient, none provide contractual guarantees, vetted by CMU, not to steal your data or ideas.
Other considerations
- There will be no AI tools available during exams. As most exam questions resemble problem-set questions or involve code/pseudocode related to the projects, gaining proficiency in solving these problems without AI assistance may be to your advantage.