AI Use in Graduate Milestone Projects

Department of Psychology Policy and Guidelines

Adopted: March 2026

Educational Goals and AI

The goal of our graduate programs in the Department of Psychology is for students to become independent scientists engaged in creating new knowledge. Students must learn a set of essential skills to achieve this goal, including defining a problem/research question from primary sources, designing and implementing original research, thinking independently, and mastering a subdiscipline of psychology. Milestone projects (e.g., second-year and third-year projects, master’s and dissertation theses, qualifying projects, comprehensive exams) provide an opportunity for students to demonstrate mastery of these essential skills.

Artificial Intelligence (AI) is increasingly being used in research. Although AI can be a valuable tool to assist in research, a concern is that it can hinder the development of the essential skills of an independent scientist. Furthermore, AI is not accountable for research outcomes, since the ultimate responsibility for scholarship and research lies with human users.

A critical question is where to place the boundary between acceptable and unacceptable uses of AI for scholarship. Spellcheckers and grammar checkers are standard, widely used editing tools. However, generative AI tools go beyond simple corrections by producing ideas, text, images, video, or other content that can shape research and scholarship. Because of this broader and more substantial role, using generative AI tools typically requires caution and transparency through disclosure.

The following sections summarize the Department of Psychology’s policy on acceptable and unacceptable uses of generative AI for graduate milestone projects/products. It is important that students consult with their area head and advisor, as each program and laboratory may have additional policies/guidelines regarding the use of AI.

What is Generative AI?

Generative AI is a type of artificial intelligence that can create new content — such as text, images, videos, music, artwork and synthetic data — based on user input. By analyzing large datasets, these AI systems learn patterns and structures, enabling them to generate content similar in style and characteristics to the original material used in training. This process uses machine learning models, including complex neural networks, to produce results that reflect the characteristics of human-created content. Examples of generative AI include popular tools such as ChatGPT, Gemini, DeepSeek, Grok, DALL-E, and Stable Diffusion.

This policy pertains to restrictions on the use of generative AI. AI tools that are not generative, such as spellcheckers, auto-annotation tools (e.g., EndNote), and grammar checkers that help make your writing more professional, are considered acceptable for milestone projects. Another example is using AI to perform automated behavioral coding. These tools are typically not considered generative AI, but rather AI-driven deep learning tools. As described in more detail below, it is important to disclose any use of AI (generative and non-generative tools) according to APA guidelines.

There are many concerns and unresolved questions regarding the use of generative AI in research that users need to consider, such as: 

  • Who owns the output from generative AI (e.g., the software company)?
  • Does using open-AI platforms risk releasing data owned by a funding entity (e.g., confidential government funded projects, private industry)?
  • Does the AI platform/company violate intellectual property or copyrights owned or retained by other individuals, especially if these are not referenced correctly?
  • How does the output generated by AI depend on the algorithmic approach, the quality of the training data, and the user’s understanding of the tool’s limitations and biases? (For example, generative AI may reproduce and perpetuate biases in its training data, which are not always transparent to users.)
  • Given that the confidentiality and security of data entered into a Large Language Model depend on the policies and practices of the company that owns the platform, anything fed into a query may become the property of that company, which may not comply with IRB or funding agency policies.

These questions and concerns necessitate great care when using generative AI to gather data/information or write text from open-ended prompts. The responsibility for the content generated through AI lies with the human user.

Unacceptable Uses of Generative AI for Graduate Milestone Projects

Each area has specific milestones, and students should consult with their area head and/or advisor to determine which of these constitute milestone projects/products for their degree program that fall under this policy.  

In the Department of Psychology, using generative AI tools is strictly forbidden for creating text, developing outlines for milestone projects, developing scripts for presentations at proposal and defense meetings, or performing statistical analysis. Any use of generative AI tools for such purposes will be considered a violation of UB’s Academic Integrity policies. This includes uploading any previous papers, drafts, or slide decks into a generative AI tool to help with these tasks, unless allowed by the conditions of the copyright. Students are cautioned against uploading data to generative AI tools, as doing so may violate confidentiality and intellectual property rights under IRB, UB, and department policies. It is recognized that uploading data as part of a project about AI may be necessary; such use is subject to approval by the committee/advisor (see below).

Students are also cautioned against using AI tools to create stimuli and illustrations, as doing so could violate intellectual property and copyright laws. With respect to copyright infringement, who owns material generated using AI tools remains an open question. If you cannot confirm that you own the copyright to the generated material, then AI-generated stimuli should not be included in a dissertation or MA thesis (both are published documents) or in any other published materials. Some AI tools that generate stimuli and illustrations do not release the input data for training purposes and assign the copyright of the generated material to the user. Use of these tools would be acceptable with proper disclosure.

Acceptable Uses of Generative AI for Graduate Milestone Projects

There are several uses of generative AI that are considered acceptable. It is acceptable to use generative AI tools to assist in creating analysis code, though any such code must be carefully vetted and verified against other expert sources to confirm that it is correct. This use of AI to help write or debug syntax is distinct from feeding data into an open-AI platform for it to analyze; the latter is unacceptable (as noted above).
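The vetting step described above can be sketched as follows. This is a hypothetical illustration only (the function name, data, and check are invented for this example, not required by the policy): an AI-drafted effect-size function is verified against a small case worked out by hand before being trusted on real data.

```python
# Hypothetical example of vetting AI-drafted analysis code:
# recompute a small case by hand and confirm the function agrees.
import statistics

def ai_drafted_cohens_d(group1, group2):
    # Function as suggested by a generative AI tool (illustrative only).
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    s1, s2 = statistics.stdev(group1), statistics.stdev(group2)
    n1, n2 = len(group1), len(group2)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled

# Vetting step: a toy case with a known answer.
a = [2.0, 4.0, 6.0]   # mean 4, sample variance 4
b = [1.0, 3.0, 5.0]   # mean 3, sample variance 4
# pooled SD = sqrt((2*4 + 2*4) / 4) = 2, so Cohen's d = (4 - 3) / 2 = 0.5
assert abs(ai_drafted_cohens_d(a, b) - 0.5) < 1e-9
```

Note that the vetting happens entirely on the student's own machine with toy numbers; no research data is ever sent to the AI platform, which keeps this use on the acceptable side of the policy.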

It is acceptable to use generative AI for literature searches. AI may be useful in compiling a list of papers to read on a research topic of interest. Similarly, requesting summaries of papers is acceptable to help identify papers to read on a given topic, but users should be aware that the summaries may not be accurate. However, as noted above, it is not acceptable to use such summaries to create text. Feeding the content of a paper into an open-AI platform to generate a summary may violate copyright laws; to avoid infringing on copyright-protected material, it is important to fully understand the copyright limitations of the paper.

We recognize that there may be cases where using AI is necessary because the project itself is about AI. For example, if the goal is to evaluate or build an AI model, then AI is likely to be part of the stimuli or research plan. If students need to use AI to generate content, it must be approved by the advisor and relevant project committee prior to any use. The proposal should then include clear statements about why and how AI will be used, and the final document should include disclosures indicating the specific use (see below for more details on disclosures). Committees may allow AI use for some components of the milestone (e.g., stimuli creation, coding of behavior from video/audio/passive technology) as required for the research question, but that approval does not extend to generating ideas or text for the final milestone project.

Role of Project Committees

Most milestone projects are vetted by a committee, and the committee and advisor are responsible for evaluating the project to ensure that the work is independent, original, and ethically conducted. They carry much of the responsibility for determining whether the use of AI is within the approved scope, and proper disclosure of AI use is essential for the committee to execute this responsibility. It is best to cover expectations and restrictions on generative AI use with committee members in advance of starting the project and, if required, during initial committee meetings (e.g., the organizational meeting for preliminary exams, dissertation proposal meetings). At the conclusion of the project (i.e., at the time of the final defense or approval process), disclosure of specific uses of generative AI must be made to the committee and included in the final project document. If an organizational meeting or proposal is not required for a particular milestone, then a student should proactively disclose their intent to use AI to the committee as early as possible; this allows early feedback on whether the disclosed use of AI is acceptable. Using generative AI without proper disclosure is a violation of UB’s Academic Integrity policies.

Disclosure of AI Use for Graduate Milestone Projects

As scholars, students are responsible for being transparent about how their work was created. This means acknowledging how AI tools were used for a particular project. If AI tools were not used, it is equally important to state that explicitly.

Explicitly stating that AI was or was not used will help maintain trust with committees, advisors, and the intended audience of the work, and ensures the work is evaluated fairly. In cases where AI was used, it is important to properly disclose exactly how it was used. Below are example disclosures.

Students are expected to disclose any use of AI according to APA guidelines.

Examples of Acknowledgement Statements When No AI Was Used

  • "I did not use generative AI tools in the creation of this work. All research, writing, and revisions are my own work."
  • "No generative AI was used in preparing this project. All content was generated independently by the author. Assistive AI was used to correct spelling and minor grammatical errors"
  • "This project was completed without the use of generative AI tools."

Examples of Acknowledgement Statements When AI Was Used

Adapted from the Camosun College library guide and similar guidance from other universities.

General Structure

  • I acknowledge the use of [AI system(s) and link] to [specific use of generative AI]. I entered the following prompts on [date]: [list of prompts]. The output from these prompts was used to [explain use]. A copy of the original output is attached as an appendix.

Example 1

  • I acknowledge the use of ChatGPT to refine the academic language and accuracy of my own work. I submitted my entire introduction and entered the following prompt(s) on 17 March 2025:
    • Improve the academic tone and accuracy of language, including grammatical structures, punctuation and vocabulary
  • The original output was adapted and modified for the final version. A copy of my original written introduction and a copy of the original output can be found in Supplement x.

Example 2

  • I acknowledge the use of ChatGPT to refine the academic language and accuracy of my own work. When prompted with "Is the left brain right brain divide real or a metaphor?" the ChatGPT-generated text indicated that although the two brain hemispheres are somewhat specialized, "the notion that people can be characterized as 'left-brained' or 'right-brained' is considered to be an oversimplification and a popular myth" (OpenAI, 2023; see Appendix A for the full transcript).
  • A corresponding citation for the AI tool (e.g., OpenAI, 2023) should then be included in the reference list, formatted according to APA guidelines.

Acknowledgements and Disclosures

AI policy statements from Dr. Kim Chaney’s research lab, the University of Washington, and Georgia Tech contributed to the writing of this policy.

Generative AI was not used in writing this policy.