Adopted: March 2026
The goal of our graduate programs in the Department of Psychology is for students to become independent scientists engaged in creating new knowledge. Students must learn a set of essential skills to achieve this goal, including defining a problem/research question from primary sources, designing and implementing original research, thinking independently, and mastering a subdiscipline of psychology. Milestone projects (e.g., second-year and third-year projects, master’s and dissertation theses, qualifying projects, comprehensive exams) provide an opportunity for students to demonstrate mastery of these essential skills.
Artificial Intelligence (AI) is increasingly being used in research. Although AI can be a valuable tool to assist in research, a concern is that it can hinder the development of the essential skills of an independent scientist. Furthermore, AI is not accountable for research outcomes; the ultimate responsibility for scholarship and research lies with human users.
A critical question is where to place the boundary between acceptable and unacceptable uses of AI for scholarship. Spellcheckers and grammar checkers are widely used, standard tools for editing. Generative AI tools, however, go beyond simple corrections by producing ideas, text, images, video, or other content that can shape research and scholarship. Because of this broader and more substantial role, using generative AI tools typically requires caution and transparency through disclosure.
The following sections summarize the Department of Psychology’s policy for acceptable and unacceptable use of generative AI for graduate milestone projects/products. It is important that students consult with their area head and advisor, as each program and laboratory may have additional policies/guidelines regarding the use of AI.
Generative AI is a type of artificial intelligence that can create new content — such as text, images, videos, music, artwork, and synthetic data — based on user input. By analyzing large datasets, these AI systems learn patterns and structures, enabling them to generate content similar in style and characteristics to the original material used in training. This process uses machine learning models, including complex neural networks, to produce results that reflect the characteristics of human-created content. Examples of generative AI include popular tools such as ChatGPT, Gemini, DeepSeek, Grok, DALL-E, and Stable Diffusion.
This policy pertains to restrictions on the use of generative AI. Non-generative AI tools — such as spellcheckers, auto-annotation tools (e.g., EndNote), and grammar checkers that help make writing more professional — are considered acceptable for milestone projects. Another example is using AI to perform automated behavioral coding. These tools are typically not considered generative AI, but rather AI-driven deep learning tools. As described in more detail below, any use of AI (generative or non-generative) must be disclosed according to APA guidelines.
There are many concerns and unresolved questions regarding the use of generative AI in research that users need to consider, such as:
These questions and concerns necessitate great care when using generative AI to gather data/information or write text from open-ended prompts. The responsibility for the content generated through AI lies with the human user.
Each area has specific milestones, and students should consult with their area head and/or advisor to determine which of these constitute milestone projects/products for their degree program that fall under this policy.
In the Department of Psychology, using generative AI tools is strictly forbidden for text creation, developing outlines for milestone projects, developing scripts for presentations at proposal and defense meetings, or statistical analysis. Any use of generative AI tools for such purposes will be considered a violation of UB’s Academic Integrity policies. This includes uploading any previous papers, drafts, or slide decks into any generative AI tool to help with tasks, unless allowed by the conditions of the copyright. Students are cautioned against uploading data to generative AI tools, as doing so may violate confidentiality and intellectual property rights under IRB, UB, and department policies. It is recognized that uploading data as part of a project about AI may be necessary; such use is subject to approval by a committee/advisor (see below).
Students are also cautioned against using AI tools to create stimuli and illustrations, as doing so could violate intellectual property and copyright laws. Who owns material generated with AI tools remains an open legal question. If you cannot confirm that you own the copyright to the generated material, then AI-generated stimuli should not be included in a dissertation or MA thesis (both are published documents) or in any other published materials. Some AI tools that generate stimuli and illustrations do not retain input data for training purposes and assign copyright of the generated material to the user; use of these tools is acceptable with proper disclosure.
There are several uses of generative AI that are considered acceptable. It is acceptable to use generative AI tools to assist in creating analysis code, though any such code must be carefully vetted and checked against other expert sources to verify that it is correct. This use of AI to help write or debug syntax is distinct from feeding data into an open (publicly hosted) AI platform for it to analyze; the latter is unacceptable, as noted above.
It is acceptable to use generative AI for literature searches. AI may be useful in compiling a list of papers to read on a research topic of interest. Similarly, requesting summaries of papers is acceptable to help identify papers to read on a given topic, but users should be aware that the summaries may not be accurate. However, as noted above, it is not acceptable to use such summaries to create text. Feeding the content of a paper into an open (publicly hosted) AI platform to generate a summary may violate copyright law. To avoid infringing on copyright-protected material, it is important to fully understand the copyright restrictions on the paper.
We recognize that there may be cases where using AI is necessary because the project itself is about AI. For example, if a goal is to evaluate or build an AI model, then AI is likely to be part of the stimuli or research plan. If students need to use AI for generating content, such use must be approved by the advisor and relevant project committee prior to any use (see below for more details). The proposal should then include clear statements about why and how AI will be used, and the final document should include disclosures indicating the specific use (see below for more details on disclosures). Committees may allow AI use for some components of the milestone (e.g., stimuli creation, coding of behavior from video/audio/passive technology) as required for the question, but that approval does not extend to generating ideas or text for the final milestone project.
Most milestone projects are vetted by a committee, and the committee and advisor are responsible for evaluating the project to ensure that the work is independent, original, and ethically conducted. They carry much of the responsibility for determining whether the use of AI is within the approved scope, and proper disclosure of AI use is essential for the committee to execute this responsibility. It is best to cover expectations and restrictions on generative AI use with committee members in advance of starting the project and, if required, during initial committee meetings (e.g., the organizational meeting for preliminary exams, dissertation proposal meetings). At the conclusion of the project (i.e., at the time of the final defense or approval process), disclosure of the specific use of generative AI must be made to the committee and included within the final project document. If an organizational meeting or proposal is not required for a particular milestone, a student should proactively disclose their intent to use AI to the committee as early as possible; this allows early feedback on whether the disclosed use of AI is acceptable. Using generative AI without proper disclosure is a violation of UB’s Academic Integrity policies.
As scholars, students are responsible for being transparent about how their work was created. This means acknowledging how AI tools were used for a particular project. If AI tools were not used, that, too, should be explicitly disclosed.
Explicitly stating that AI was or was not used helps maintain trust with committees, advisors, and the intended audience of the work, and ensures the work is evaluated fairly. In cases where AI was used, it is important to disclose exactly how it was used. Below are example disclosures.
Students are expected to disclose any use of AI according to APA guidelines.
AI policy statements from Dr. Kim Chaney’s research lab, the University of Washington, and Georgia Tech contributed to the writing of this policy.
Generative AI was not used in writing this policy.