
Generative AI (abbreviated as “AI” in these guidelines) is increasingly present across Tufts, from classrooms and research labs to clinics and administrative work, raising concerns about its effects on scholarship, learning, and academic integrity. These guidelines provide shared language for responsible AI use across Tufts’ diverse settings. They are deliberately non-prescriptive and are designed to encourage members of the community to think carefully about AI in their own context. Schools, departments, and units should develop context-specific policies and procedures aligned with their existing academic integrity and professional conduct processes.

In addition to these guidelines, all AI use must comply with applicable laws, institutional policies, and contractual agreements, particularly those governing privacy, confidentiality, security, accessibility, intellectual property, and academic integrity.

1. Accountability and Verification: Tufts faculty, staff, and students are responsible and accountable for content and decisions they create using AI.

AI use must align with applicable course expectations, school policies, and professional standards. When AI informs decisions affecting individuals (e.g., grading, admissions, clinical care, or advising), users must be able to explain the basis for the decision and document AI’s role. This applies equally to students completing assignments, faculty developing course materials or evaluating student work, staff performing administrative tasks, and those providing clinical care. It requires sufficient understanding of AI tools’ limitations to evaluate outputs appropriately and to assess them for bias, errors, and fabrications.

2. Ethical, Legal, and Societal Responsibilities: As a community, we recognize the broader ethical, legal, and societal implications when deciding when and how to engage with AI tools.

Responsible use must protect privacy and confidential data; respect intellectual property, FERPA- and HIPAA-protected information, and creator rights; anticipate bias and other harms that may differentially impact vulnerable groups; and weigh broader impacts such as access and affordability, effects on labor, and the environmental costs of developing AI systems. Users may not input, upload, or otherwise disclose restricted data into AI tools unless the tool has been formally approved for that data use. Users should assume that information entered into third-party AI systems may be retained, reused, or disclosed beyond Tufts’ control. Users should also consider educational impacts, including whether student use may reduce productive struggle, critical thinking, or other core learning processes. Because most AI tools are third-party services, users should be alert to vendor dependence, data reuse or AI training provisions, and changing terms of service that may affect reliability, privacy, and accessibility. Where possible, Tufts should support responsible use through vetted tools and clear data-handling guidance rather than placing all evaluation and risk management on individuals. Individual users remain responsible for ensuring their procurement and use of AI tools complies with applicable policies and legal requirements. For detailed guidance and procedures on data privacy and security, consult the TTS Guidelines for Generative AI Use.

3. Community Trust and Professional Standards: Members of the Tufts community have a duty to uphold integrity and professional standards, ensuring they use AI responsibly and for mutual benefit.

Tufts’ commitment to academic freedom means that responsible AI use will, and should, look different across disciplines, roles, and contexts. Community members hold diverse views about AI, from embracing it as essential preparation for students’ futures to viewing it as contrary to educational values and goals. Standards will continue to evolve as we further develop our understanding of AI tools and their capabilities, limitations, and impacts. Any AI use must remain consistent with Tufts’ fundamental values and existing policies on academic integrity and professional conduct, as outlined in school handbooks and relevant policies. This mutual trust is foundational to our community: students, faculty, and staff depend on each other to use AI in ways that support learning, teaching, clinical care, administrative work, and scholarship, and they should recognize that AI choices affect both individual work and our shared academic endeavor.

4. Transparency and Disclosure: Members of the Tufts community must be transparent about their use of AI.

Disclosing when and how one is engaging with AI is a requirement for trust and integrity within the academic community. Members of the community have the responsibility to disclose substantive AI usage as they would any information, text, images, code, or other material gathered from external sources. Disclosure expectations should be proportional to the extent and significance of AI’s contribution to the work. These expectations apply to all roles: students should disclose AI use according to course expectations and consult instructors when uncertain; instructors who use AI to generate feedback or to grade, or who upload student work to AI tools or detectors, should disclose these uses to students; researchers should follow disciplinary norms and publisher policies; and all members of the Tufts community should abide by their respective school, department, and unit policies.

5. Clarify AI Use and Disclosure: Instructors and units are strongly encouraged to clarify what AI use is permitted and how disclosure should occur. Where no local guidance is provided, individuals may make their own decisions about how to use and disclose AI according to these guidelines.

As AI has become part of daily practice for many community members and is increasingly embedded in everyday software, expectations of its use vary significantly across courses, departments, and units. Clear, context-specific guidance, stated in syllabi and assignment instructions or in unit procedures, should explain the pedagogical or operational rationale so community members understand not only what is allowed or disallowed, but why. Because automated AI detection is unreliable and can produce harmful false positives that undermine trust, it is strongly discouraged as a primary basis for academic integrity determinations. Concerns should instead be addressed through conversations, assessment designs, and established academic integrity and community standards procedures such as those of AS&E. Clarity and ongoing dialogue are more effective than policing and better support our trust-based academic community.

6. Course Expectations: Faculty have the freedom and responsibility to determine appropriate AI use within their courses, guided by pedagogical goals and disciplinary norms.

Within the bounds of school and department policies, instructors retain the academic freedom to establish when and how AI may be used in their courses. These decisions should be grounded in pedagogical rationale and clearly communicated in the syllabus and/or assignment instructions. Expectations should help students distinguish between AI use that supports learning and AI use that replaces sense-making or undermines assessed work. Depending on course goals, faculty may require AI use, permit it with conditions, discourage it, or prohibit it. When making these determinations, instructors should consider student access and equity, and they should remain mindful of privacy, ethical, and societal considerations. To support faculty in crafting course-specific policies, resources and examples for using AI in teaching and for developing AI syllabus statements are available on the ETS and CELT websites.

These guidelines were adopted by the Tufts AI Council in spring 2026.
